Helm installation
Reference: https://istio.io/latest/zh/docs/setup/install/helm/
- Add the Helm repo
helm repo add istio https://istio-release.storage.googleapis.com/charts
helm repo update
- Install istio-base
helm install istio-base istio/base -n istio-system --set defaultRevision=default --create-namespace
helm ls -n istio-system
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
istio-base istio-system 1 2025-01-16 09:27:52.33974539 +0000 UTC deployed base-1.24.2 1.24.2
- Install istiod
helm install istiod istio/istiod -n istio-system --wait
helm ls -n istio-system
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
istio-base istio-system 1 2025-01-16 09:27:52.33974539 +0000 UTC deployed base-1.24.2 1.24.2
istiod istio-system 1 2025-01-16 09:29:30.322051368 +0000 UTC deployed istiod-1.24.2 1.24.2
# Confirm the installation succeeded
kubectl get deployments -n istio-system --output wide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
istiod 1/1 1 1 51s discovery docker.io/istio/pilot:1.24.2 istio=pilot
- Install the Kubernetes Gateway API CRDs
kubectl get crd gateways.gateway.networking.k8s.io &> /dev/null || \
  { kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.0/standard-install.yaml; }
Traffic management test (using the Gateway API mode)
- Download the Istio release package, which includes the sample files
https://github.com/istio/istio/releases/tag/1.24.2
tar -zxvf istio-1.24.2-linux-amd64.tar.gz
cd istio-1.24.2
- Create the Bookinfo application and verify that ClusterIP access works
$ kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
service/details created
serviceaccount/bookinfo-details created
deployment.apps/details-v1 created
service/ratings created
serviceaccount/bookinfo-ratings created
deployment.apps/ratings-v1 created
service/reviews created
serviceaccount/bookinfo-reviews created
deployment.apps/reviews-v1 created
deployment.apps/reviews-v2 created
deployment.apps/reviews-v3 created
service/productpage created
serviceaccount/bookinfo-productpage created
deployment.apps/productpage-v1 created
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
details ClusterIP 10.107.18.243 <none> 9080/TCP 18s
productpage ClusterIP 10.106.86.173 <none> 9080/TCP 17s
ratings ClusterIP 10.108.56.118 <none> 9080/TCP 18s
reviews ClusterIP 10.106.65.77 <none> 9080/TCP 18s
$ kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
details-v1-79dfbd6fff-s4f67 2/2 Running 0 100s
productpage-v1-dffc47f64-7szr5 2/2 Running 0 48s
ratings-v1-65f797b499-h9fvc 2/2 Running 0 49s
reviews-v1-5c4d6d447c-zkcjf 2/2 Running 0 48s
reviews-v2-65cb66b45c-q9kxq 2/2 Running 0 48s
reviews-v3-f68f94645-cm9dk 2/2 Running 0 48s
$ kubectl exec "$(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}')" -c ratings -- curl -sS productpage:9080/productpage | grep -o "<title>.*</title>"
<title>Simple Bookstore App</title>
- Expose the application through the gateway, annotating the Service to use NodePort mode
$ kubectl apply -f samples/bookinfo/gateway-api/bookinfo-gateway.yaml
gateway.gateway.networking.k8s.io/bookinfo-gateway created
httproute.gateway.networking.k8s.io/bookinfo created
Change the service type to `NodePort` by annotating the gateway:
$ kubectl annotate gateway bookinfo-gateway networking.istio.io/service-type=NodePort --namespace=default
$ kubectl get gateway
NAME CLASS ADDRESS PROGRAMMED AGE
bookinfo-gateway istio bookinfo-gateway-istio.default.svc.cluster.local True 54s
Check the Service's NodePort:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
bookinfo-gateway-istio NodePort 10.109.117.251 <none> 15021:30336/TCP,80:30609/TCP 92s
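To script against the port mapping, the NodePort for port 80 can be read with jsonpath on a live cluster; as a cluster-free sketch, the same value can be extracted from the PORT(S) column shown above:

```shell
# On a live cluster (service name taken from the output above):
#   kubectl get svc bookinfo-gateway-istio \
#     -o jsonpath='{.spec.ports[?(@.port==80)].nodePort}'
# Cluster-free equivalent, parsing the PORT(S) text:
ports="15021:30336/TCP,80:30609/TCP"
nodeport=$(echo "$ports" | tr ',' '\n' | awk -F'[:/]' '$1==80{print $2}')
echo "$nodeport"   # prints 30609
```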
- Configure NAT forwarding: NodePort access across node pools does not work here, so pin the gateway pod to the central gateway node and add a DNAT rule in the central VPC
$ kubectl edit deploy bookinfo-gateway-istio
# add under .spec.template.spec:
      nodeSelector:
        openyurt.io/is-edge-worker: "false"
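The DNAT rule on the central VPC gateway might look like the following (a sketch with placeholder addresses, not a verified config for this environment):

```shell
# Hypothetical: <nat_ip> and <node_ip> are placeholders -- forward traffic
# arriving at the NAT address on the gateway NodePort to the pinned node.
# A SNAT/MASQUERADE rule may also be needed for the return path.
iptables -t nat -A PREROUTING -d <nat_ip> -p tcp --dport 30609 \
  -j DNAT --to-destination <node_ip>:30609
```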
# Access test: refresh several times and requests round-robin across the reviews v1/v2/v3 versions; the visible difference is the star color in the reviews
{{nat_ip}}:30609/productpage
Configuring request routing
- Define the service versions
kubectl apply -f samples/bookinfo/platform/kube/bookinfo-versions.yaml
$ kubectl get svc | grep v1
details-v1 ClusterIP 10.104.94.225 <none> 9080/TCP 64s
productpage-v1 ClusterIP 10.99.215.89 <none> 9080/TCP 64s
ratings-v1 ClusterIP 10.111.178.90 <none> 9080/TCP 64s
reviews-v1 ClusterIP 10.109.86.176 <none> 9080/TCP 64s
- Run the following command to create a routing rule
kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: reviews
spec:
  parentRefs:
  - group: ""
    kind: Service
    name: reviews
    port: 9080
  rules:
  - backendRefs:
    - name: reviews-v1
      port: 9080
EOF
$ kubectl get httproute reviews -o yaml
...
spec:
  parentRefs:
  - group: gateway.networking.k8s.io
    kind: Service
    name: reviews
    port: 9080
  rules:
  - backendRefs:
    - group: ""
      kind: Service
      name: reviews-v1
      port: 9080
      weight: 1
    matches:
    - path:
        type: PathPrefix
        value: /
status:
  parents:
  - conditions:
    - lastTransitionTime: "2022-11-08T19:56:19Z"
      message: Route was valid
      observedGeneration: 8
      reason: Accepted
      status: "True"
      type: Accepted
    - lastTransitionTime: "2022-11-08T19:56:19Z"
      message: All references resolved
      observedGeneration: 8
      reason: ResolvedRefs
      status: "True"
      type: ResolvedRefs
    controllerName: istio.io/gateway-controller
    parentRef:
      group: gateway.networking.k8s.io
      kind: Service
      name: reviews
      port: 9080
- Refresh the page now: the star ratings disappear, because all traffic is being routed to reviews-v1.
- Configure identity-based routing: a user logged in as jason will see the v2 version (black stars). Rules match in order, so the jason rule comes first and the v1 route is the default:
kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: reviews
spec:
  parentRefs:
  - group: ""
    kind: Service
    name: reviews
    port: 9080
  rules:
  - matches:
    - headers:
      - name: end-user
        value: jason
    backendRefs:
    - name: reviews-v2
      port: 9080
  - backendRefs:
    - name: reviews-v1
      port: 9080
EOF
Traffic shifting (weight-based routing)
- First, run this command to route all traffic to the v1 version of each microservice:
kubectl apply -f samples/bookinfo/gateway-api/route-reviews-v1.yaml
- Next, shift 50% of the traffic from reviews:v1 to reviews:v3 and verify the updated route:
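The 50/50 split itself is an HTTPRoute like the following (a sketch consistent with the spec retrieved by `kubectl get` here; the release also ships it as a sample file):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: reviews
spec:
  parentRefs:
  - group: ""
    kind: Service
    name: reviews
    port: 9080
  rules:
  - backendRefs:
    - name: reviews-v1
      port: 9080
      weight: 50
    - name: reviews-v3
      port: 9080
      weight: 50
```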
$ kubectl get httproute reviews -o yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  generation: 4
  name: reviews
spec:
  parentRefs:
  - group: ""
    kind: Service
    name: reviews
    port: 9080
  rules:
  - backendRefs:
    - group: ""
      kind: Service
      name: reviews-v1
      port: 9080
      weight: 50
    - group: ""
      kind: Service
      name: reviews-v3
      port: 9080
      weight: 50
    matches:
    - path:
        type: PathPrefix
        value: /
status:
  parents:
  - conditions:
    - lastTransitionTime: "2025-01-16T12:34:57Z"
      message: Route was valid
      observedGeneration: 4
      reason: Accepted
      status: "True"
      type: Accepted
    - lastTransitionTime: "2025-01-16T12:41:34Z"
      message: All references resolved
      observedGeneration: 4
      reason: ResolvedRefs
      status: "True"
      type: ResolvedRefs
    controllerName: istio.io/gateway-controller
    parentRef:
      group: ""
      kind: Service
      name: reviews
      port: 9080
- Refresh the /productpage page in the browser; about 50% of the time the reviews show red stars.
- Switch 100% of the traffic to v3:
kubectl apply -f samples/bookinfo/gateway-api/route-reviews-v3.yaml
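Conceptually, the applied sample routes everything to reviews-v3; it is equivalent to an HTTPRoute like this (a sketch, not the verbatim file contents):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: reviews
spec:
  parentRefs:
  - group: ""
    kind: Service
    name: reviews
    port: 9080
  rules:
  - backendRefs:
    - name: reviews-v3
      port: 9080
```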
Locality load balancing
kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: sample
  labels:
    istio-injection: enabled
EOF
- Deploy the HelloWorld pods, one variant per locality
for LOC in "region1.zone1" "region2.zone2" "region3.zone3"; \
do \
samples/helloworld/gen-helloworld.sh \
--version "$LOC" > "helloworld-${LOC}.yaml"; \
done
# Edit each generated YAML and add a nodeSelector for the matching zone
# (it goes under the Deployment's .spec.template.spec), e.g.:
spec:
  template:
    spec:
      nodeSelector:
        topology.kubernetes.io/zone: fj-xm4
kubectl apply -n sample -f helloworld-region1.zone1.yaml
kubectl apply -n sample -f helloworld-region2.zone2.yaml
kubectl apply -n sample -f helloworld-region3.zone3.yaml
- Deploy the curl pod and schedule it to region1
kubectl apply -f samples/curl/curl.yaml -n sample
$ kubectl -n sample edit deploy curl
# add under .spec.template.spec:
      nodeSelector:
        topology.kubernetes.io/zone: fj-xm4
kubectl -n sample get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
curl-6dfd85775f-lkcp7 2/2 Running 0 43s 192.168.1.190 fj-xm4-lzytest-0001
- Create the DestinationRule. Because localityLbSetting.enabled=true, locality-aware load balancing takes effect and traffic stays in the region where the curl pod runs
kubectl apply -n sample -f - <<EOF
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: helloworld
spec:
  host: helloworld.sample.svc.cluster.local
  trafficPolicy:
    connectionPool:
      http:
        maxRequestsPerConnection: 1
    loadBalancer:
      simple: ROUND_ROBIN
      localityLbSetting:
        enabled: true
        failover:
        - from: region1.zone1
          to: region2.zone2
    outlierDetection:
      consecutive5xxErrors: 1
      interval: 1s
      baseEjectionTime: 1m
EOF
- Verify that traffic currently stays in region1.zone1
kubectl exec -n sample -c curl "$(kubectl get pod -n sample -l app=curl -o jsonpath='{.items[0].metadata.name}')" -- curl -sSL helloworld.sample:5000/hello
Hello version: region1.zone1, instance: helloworld-region1.zone1-86f77cd7b-cpxhv
- Set localityLbSetting.enabled=false to let traffic flow across all regions
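One way to flip the flag is a strategic-merge patch (a sketch; `kubectl edit dr helloworld -n sample` works just as well). The kubectl line is commented out so the snippet can be sanity-checked without a cluster:

```shell
# Merge patch that sets localityLbSetting.enabled to false.
patch='{"spec":{"trafficPolicy":{"loadBalancer":{"localityLbSetting":{"enabled":false}}}}}'
# kubectl -n sample patch destinationrule helloworld --type merge -p "$patch"
# Validate the patch JSON locally:
echo "$patch" | python3 -c 'import json,sys; json.load(sys.stdin); print("patch OK")'
```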
kubectl exec -n sample -c curl "$(kubectl get pod -n sample -l app=curl -o jsonpath='{.items[0].metadata.name}')" -- curl -sSL helloworld.sample:5000/hello
Hello version: region3.zone3, instance: helloworld-region3.zone3-7b97f566fb-b8w99
kubectl exec -n sample -c curl "$(kubectl get pod -n sample -l app=curl -o jsonpath='{.items[0].metadata.name}')" -- curl -sSL helloworld.sample:5000/hello
Hello version: region1.zone1, instance: helloworld-region1.zone1-6dd64d58c5-w9pww
kubectl exec -n sample -c curl "$(kubectl get pod -n sample -l app=curl -o jsonpath='{.items[0].metadata.name}')" -- curl -sSL helloworld.sample:5000/hello
Hello version: region2.zone2, instance: helloworld-region2.zone2-69f46844cc-l2nnf
- Set localityLbSetting.enabled back to true and inject a fault into the region1.zone1 pod:
kubectl exec -n sample "$(kubectl get pod -n sample -l app=helloworld -l version=region1.zone1 -o jsonpath='{.items[0].metadata.name}')" -c istio-proxy -- curl -sSL -X POST 127.0.0.1:15000/drain_listeners
# During the window before outlier detection ejects the endpoint, the request returns a connection error
$ kubectl exec -n sample -c curl "$(kubectl get pod -n sample -l app=curl -o jsonpath='{.items[0].metadata.name}')" -- curl -sSL helloworld.sample:5000/hello
upstream connect error or disconnect/reset before headers. retried and the latest reset reason: remote connection failure, transport failure reason: delayed connect error: Connection refused