KubeVela is an implementation of the OAM (Open Application Model) specification. The following is a quick install and hands-on walkthrough on macOS.
I. Install minikube
Follow the minikube installation guide to install it.
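On macOS, one convenient route, assuming Homebrew is already set up, is to install it from brew (the same package manager used for Helm below):
brew install minikube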
II. Install Helm
Install via brew; prefer the latest Helm 3 release:
brew install helm
% helm version
version.BuildInfo{Version:"v3.6.3", GitCommit:"d506314abfb5d21419df8c7e7e68012379db2354", GitTreeState:"dirty", GoVersion:"go1.16.6"}
III. Install KubeVela
1. Start minikube with an image mirror
Configure --registry-mirror and --image-repository to pull through a registry mirror for faster image downloads:
minikube start --vm=true --registry-mirror=koqjpu74.mirror.xx.xx --image-repository=registry.xx-hangzhou.xx.xx/google_containers 
😄 minikube v1.18.1 on Darwin 10.15.1
  ▪ MINIKUBE_ACTIVE_DOCKERD=minikube
✨ Using the hyperkit driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🔄 Restarting existing hyperkit VM for "minikube" ...
🐳 Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
🔎 Verifying Kubernetes components...
  ▪ Using image registry.xx-hangzhou.xx.xx/google_containers/k8s-minikube/storage-provisioner:v4 (global image repository)
🌟 Enabled addons: default-storageclass, storage-provisioner
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
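Before moving on, it is worth a quick sanity check that the cluster is really up; these are standard minikube/kubectl commands, nothing KubeVela-specific:
minikube status
kubectl get nodes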
2. Enable the ingress addon
minikube addons enable ingress
Check that the installed pods are healthy:
% kubectl get pod -A                              
NAMESPACE     NAME                                        READY   STATUS      RESTARTS   AGE
kube-system   coredns-54d67798b7-v575p                    1/1     Running     2          36m
kube-system   etcd-minikube                               1/1     Running     2          37m
kube-system   ingress-nginx-admission-create-nlbbn        0/1     Completed   0          4h45m
kube-system   ingress-nginx-admission-patch-xkzbc         0/1     Completed   0          4h45m
kube-system   ingress-nginx-controller-745945f89d-b42bm   1/1     Running     0          4m16s
kube-system   kube-apiserver-minikube                     1/1     Running     2          37m
kube-system   kube-controller-manager-minikube            1/1     Running     2          37m
kube-system   kube-proxy-4jmsh                            1/1     Running     2          36m
kube-system   kube-scheduler-minikube                     1/1     Running     2          37m
kube-system   storage-provisioner                         1/1     Running     9          4h48m
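You can also confirm the addon state directly (standard minikube subcommand):
minikube addons list | grep ingress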
3. Image-pull issues encountered
The storage-provisioner image failed to pull: the image name in the mirror repository does not match the name minikube asks for.
Inspect the pod with kubectl describe:
Normal   SandboxChanged  5m32s                  kubelet  Pod sandbox changed, it will be killed and re-created.
Normal   Pulling         4m (x4 over 5m31s)     kubelet  Pulling image "registry.xx-hangzhou.xx.xx/google_containers/k8s-minikube/storage-provisioner:v4"
Warning  Failed          4m (x4 over 5m24s)     kubelet  Failed to pull image "registry.xx-hangzhou.xx.xx/google_containers/k8s-minikube/storage-provisioner:v4": rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.xx-hangzhou.aliyuncs.xx/google_containers/k8s-minikube/storage-provisioner, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Warning  Failed          4m (x4 over 5m24s)     kubelet  Error: ErrImagePull
Warning  Failed          3m31s (x6 over 5m23s)  kubelet  Error: ImagePullBackOff
Normal   BackOff         12s (x20 over 5m23s)   kubelet  Back-off pulling image "registry.xx-hangzhou.xx.xx/google_containers/k8s-minikube/storage-provisioner:v4"
The fix: docker pull the image from the mirror, then re-tag it with the name minikube expects.
docker pull registry.xx-hangzhou.aliyuncs.xx/google_containers/storage-provisioner:v4
docker tag registry.xx-hangzhou.aliyuncs.xx/google_containers/storage-provisioner:v4 registry.xx-hangzhou.xx.xx/google_containers/k8s-minikube/storage-provisioner:v4
# Optional cleanup; can be skipped
docker rmi registry.xx-hangzhou.xx.xx/google_containers/storage-provisioner:v4
Handle the ingress-nginx image the same way:
docker pull registry.xx-hangzhou.xx.xx/google_containers/nginx-ingress-controller:v0.40.2
docker tag registry.xx-hangzhou.xx.xx/google_containers/nginx-ingress-controller:v0.40.2 registry.xx-hangzhou.xx.xx/google_containers/controller:v0.40.2
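One caveat: for the re-tagged images to be picked up by the kubelet, the docker pull/tag commands above must run against the Docker daemon inside the minikube VM rather than the host's daemon. minikube ships a standard helper for this (note the MINIKUBE_ACTIVE_DOCKERD=minikube line in the startup output above):
# Point the local docker CLI at the Docker daemon inside the minikube VM,
# then re-run the pull/tag commands above.
eval $(minikube docker-env)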
4. Install KubeVela
Add the chart repository:
helm repo add kubevela https://charts.kubevela.net/core
Update the repo index:
helm repo update
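To confirm the repo was added and see which chart versions are available (standard helm subcommand):
helm search repo kubevela/vela-core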
Install KubeVela:
helm install --create-namespace -n vela-system kubevela kubevela/vela-core
NAME: kubevela
LAST DEPLOYED: Sat Sep 4 23:18:36 2021
NAMESPACE: vela-system
STATUS: deployed
REVISION: 1
NOTES:
Welcome to use the KubeVela! Enjoy your shipping application journey!
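Before deploying anything, check that the KubeVela controller pod in the vela-system namespace created above is running:
kubectl get pod -n vela-system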
Deploy a sample vela app:
kubectl apply -f https://raw.githubusercontent.xx/oamdev/kubevela/master/docs/examples/vela-app.yaml
raw.githubusercontent.xx kept refusing to connect, so I cloned the kubevela source from GitHub and applied the file locally instead:
kubectl apply -f /Users/vinin/codes/go/kubevela/docs/examples/vela-app.yaml
application.core.oam.dev/first-vela-app created
Check the status until status is running and the services are healthy:
kubectl get application first-vela-app -o yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"core.oam.dev/v1beta1","kind":"Application","metadata":{"annotations":{},"name":"first-vela-app","namespace":"default"},"spec":{"components":[{"name":"express-server","properties":{"image":"crccheck/hello-world","port":8000},"traits":[{"properties":{"domain":"testsvc.example.xx","http":{"/":8000}},"type":"ingress-1-20"}],"type":"webservice"}]}}
    oam.dev/kubevela-version: v1.1.0
  creationTimestamp: "2021-09-04T15:23:47Z"
  finalizers:
  - app.oam.dev/resource-tracker-finalizer
  generation: 1
  name: first-vela-app
  namespace: default
  resourceVersion: "6454"
  uid: fddaa151-5773-438e-ab39-3ff90b75be78
spec:
  components:
  - name: express-server
    properties:
      image: crccheck/hello-world
      port: 8000
    traits:
    - properties:
        domain: testsvc.example.xx
        http:
          /: 8000
      type: ingress-1-20
    type: webservice
status:
  conditions:
  - lastTransitionTime: "2021-09-04T15:23:47Z"
    reason: Available
    status: "True"
    type: Parsed
  - lastTransitionTime: "2021-09-04T15:23:47Z"
    reason: Available
    status: "True"
    type: Revision
  - lastTransitionTime: "2021-09-04T15:23:47Z"
    reason: Available
    status: "True"
    type: Render
  - lastTransitionTime: "2021-09-04T15:23:48Z"
    reason: Available
    status: "True"
    type: Applied
  - lastTransitionTime: "2021-09-04T15:23:48Z"
    reason: Available
    status: "True"
    type: HealthCheck
  latestRevision:
    name: first-vela-app-v1
    revision: 1
    revisionHash: 7fa5efd7492f9e40
  observedGeneration: 1
  rollout:
    batchRollingState: ""
    currentBatch: 0
    lastTargetAppRevision: ""
    rollingState: ""
    upgradedReadyReplicas: 0
    upgradedReplicas: 0
  services:
  - healthy: true
    name: express-server
    traits:
    - healthy: true
      message: |
        No loadBalancer found, visiting by using 'vela port-forward first-vela-app --route'
      type: ingress-1-20
    workloadDefinition:
      apiVersion: apps/v1
      kind: Deployment
  status: running
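If the status has not reached running yet, a plain kubectl watch saves re-running the command by hand:
kubectl get application first-vela-app -w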
Check the resources created in Kubernetes:
% kubectl get deployment
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
express-server   1/1     1            1           2m17s
% kubectl get svc
NAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
express-server   ClusterIP   10.106.163.253   <none>        8000/TCP   2m39s
kubernetes       ClusterIP   10.96.0.1        <none>        443/TCP    5h21m
% kubectl get ingress
NAME             CLASS    HOSTS                ADDRESS        PORTS   AGE
express-server   <none>   testsvc.example.xx   192.168.64.3   80      2m56s
Test that the vela app (the express hello-world) is serving correctly:
% curl -H "Host:testsvc.example.xx" http://192.168.64.3
Hello World
                                       ##         .
                                 ## ## ##        ==
                              ## ## ## ## ##    ===
                           /""""""""""""""""\___/ ===
                      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ /  ===- ~~~
                           \______ o          _,/
                            \      \       _,'
                             `'--.._\..--''
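As the trait message in the status above notes, if no load balancer address were available you could instead tunnel to the app through the vela CLI (installed in the next section):
vela port-forward first-vela-app --route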
IV. Install the KubeVela CLI
curl -fsSl https://kubevela.io/script/install.sh | bash
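Verify the CLI landed on the PATH:
vela version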
Use the vela CLI:
% vela components
NAME         NAMESPACE     WORKLOAD                   DESCRIPTION
raw          vela-system   autodetects.core.oam.dev   raw allow users to specify raw K8s object in properties
task         vela-system   jobs.batch                 Describes jobs that run code or a script to completion.
webservice   vela-system   deployments.apps           Describes long-running, scalable, containerized services
                                                      that have a stable network endpoint to receive external
                                                      network traffic from customers.
worker       vela-system   deployments.apps           Describes long-running, scalable, containerized services
                                                      that running at backend. They do NOT have network endpoint
                                                      to receive external network traffic.
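The CLI can also list the applications it manages, which gives a quick cross-check against the app deployed earlier (assuming the ls subcommand is available in this KubeVela release):
vela ls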