Adding a master node to a k8s cluster fails with a "control plane" error


Background

When adding a new master node to a freshly deployed k8s cluster, the join failed with "error execution phase preflight: One or more conditions for hosting a new control plane instance is not satisfied". The sections below walk through the problem and how to resolve it.

Environment

k8s version

kubectl version
 Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.5", GitCommit:"xxx", GitTreeState:"clean", BuildDate:"2022-03-16T15:58:47Z", GoVersion:"go1.17.8", Compiler:"gc", Platform:"linux/amd64"}

Deployment method

kubeadm

Current node information

kubectl get node
 NAME          STATUS   ROLES                  AGE     VERSION
 k8s-master1   Ready    control-plane,master   24h     v1.23.5

Joining a new master node

# Step 1: print the join command for the cluster
 kubeadm token create --print-join-command
 kubeadm join xx.xx.xx.xx:6443 --token xxxx.xxxx --discovery-token-ca-cert-hash sha256:xxx

 # Step 2: upload the control-plane certificates and print the certificate key
 kubeadm init phase upload-certs --upload-certs
 I0520 14:51:22.848075    1096 version.go:255] remote version is much newer: v1.30.1; falling back to: stable-1.23
 [upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
 [upload-certs] Using certificate key:
 aaaaaxxxxxxxxxxxxxxxxxx

 # Step 3: combine the output of step 1 and step 2
 kubeadm token create --print-join-command
 kubeadm join xx.xx.xx.xx:6443 --token xxxx.xxxx --discovery-token-ca-cert-hash sha256:xxx --control-plane --certificate-key aaaaaxxxxxxxxxxxxxxxxxx

 # Step 4: run the command from step 3 on the node to be joined; it fails with the following error:
 [preflight] Running pre-flight checks
     [WARNING Hostname]: hostname "xxx" could not be reached
     [WARNING Hostname]: hostname "xxx": lookup xxx on 114.114.114.114:53: no such host
     [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
 [preflight] Reading configuration from the cluster...
 [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
 error execution phase preflight: 
 One or more conditions for hosting a new control plane instance is not satisfied.

 unable to add a new control plane instance to a cluster that doesn't have a stable controlPlaneEndpoint address

 Please ensure that:
 * The cluster has a stable controlPlaneEndpoint address.
 * The certificates that must be shared among control plane instances are provided.

 To see the stack trace of this error execute with --v=5 or higher

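The key line is "unable to add a new control plane instance to a cluster that doesn't have a stable controlPlaneEndpoint address": the cluster's ClusterConfiguration has no controlPlaneEndpoint set, which typically means the original kubeadm init was run without --control-plane-endpoint. Before applying either fix below, you can confirm this on the existing master, and also clear the kubelet warning from the preflight output on the new node. A minimal check, not a required step:

 # On the existing master: if this prints nothing, no stable control-plane endpoint is configured
 kubectl -n kube-system get cm kubeadm-config -o yaml | grep controlPlaneEndpoint

 # On the node being joined: enable kubelet, as suggested by the preflight warning
 systemctl enable kubelet.service
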
Solutions

Method 1: add controlPlaneEndpoint to the kubeadm-config ConfigMap

#1 Inspect the kubeadm-config ConfigMap (only part of the output is shown)
 kubectl get cm kubeadm-config -n kube-system -o yaml

 apiVersion: v1
 data:
   ClusterConfiguration: |
     apiServer:
       extraArgs:
         authorization-mode: Node,RBAC
       timeoutForControlPlane: 4m0s
     apiVersion: kubeadm.k8s.io/v1beta3
     certificatesDir: /etc/kubernetes/pki
     clusterName: kubernetes
     controllerManager: {}
     dns: {}
     etcd:
       local:
         dataDir: /var/lib/etcd
     imageRepository: xxx
     kind: ClusterConfiguration
     kubernetesVersion: v1.23.5
     networking:
       dnsDomain: cluster.local
       podSubnet: xx.xx.0.0/16
       serviceSubnet: xx.xx.0.0/16
     scheduler: {}

 #2 Add controlPlaneEndpoint to the ClusterConfiguration
 kubectl edit cm kubeadm-config -n kube-system

 apiVersion: v1
 data:
   ClusterConfiguration: |
     apiServer:
       extraArgs:
         authorization-mode: Node,RBAC
       timeoutForControlPlane: 4m0s
     apiVersion: kubeadm.k8s.io/v1beta3
     certificatesDir: /etc/kubernetes/pki
     clusterName: kubernetes
     controllerManager: {}
     dns: {}
     etcd:
       local:
         dataDir: /var/lib/etcd
     imageRepository: xxx
     kind: ClusterConfiguration
     kubernetesVersion: v1.23.5
     # add the following line
     controlPlaneEndpoint: "xx.xx.xx.xx:port" ## replace with your own IP and port
     networking:
       dnsDomain: cluster.local
       podSubnet: xx.xx.0.0/16
       serviceSubnet: xx.xx.0.0/16
     scheduler: {}
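
After saving the edit, controlPlaneEndpoint should be an address that reaches the API server on every control-plane node; in a real HA setup this is usually a load-balancer VIP or DNS name rather than a single node's IP. With the field in place, retrying the combined join command from step 3 on the new node should pass the preflight check. A rough sketch of the retry and verification, with the address, token and certificate key being the same placeholders as in the steps above (regenerate them via steps 1 and 2 if they have expired):

 # On the existing master: confirm the field is now present
 kubectl -n kube-system get cm kubeadm-config -o yaml | grep controlPlaneEndpoint

 # On the new node: retry the join as a control-plane member
 kubeadm join xx.xx.xx.xx:6443 --token xxxx.xxxx --discovery-token-ca-cert-hash sha256:xxx --control-plane --certificate-key aaaaaxxxxxxxxxxxxxxxxxx

 # Back on any master: both nodes should now show the control-plane role
 kubectl get node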

Method 2: reset and re-initialize the cluster

If the k8s cluster currently has only this one node, you can instead reset it and re-initialize, this time passing --control-plane-endpoint to kubeadm init from the start. Note that kubeadm reset wipes the existing cluster state.

#1 Reset kubeadm, then remove the leftover configuration files
 kubeadm reset -f
 rm -rf /etc/kubernetes
 rm -rf ~/.kube

 #2 Re-initialize the cluster with a control-plane endpoint
 kubeadm init --kubernetes-version 1.23.5 --control-plane-endpoint "xx.xx.xx.xx:port" --pod-network-cidr=xx.xx.xx.xx/16 --service-cidr=xx.xx.xx.xx/16 --upload-certs
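
After the re-initialization finishes, the usual post-init steps still apply: copy the admin kubeconfig, reinstall the CNI plugin (the reset removed it), and then join the additional master with the control-plane join command that kubeadm init prints (it already includes --control-plane and --certificate-key because of --upload-certs). A rough sketch of the follow-up, with the addresses, token and certificate key as placeholders:

 # On the re-initialized master: restore kubectl access
 mkdir -p $HOME/.kube
 cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

 # On the new master: run the control-plane join command printed by kubeadm init
 kubeadm join xx.xx.xx.xx:port --token xxxx.xxxx --discovery-token-ca-cert-hash sha256:xxx --control-plane --certificate-key aaaaaxxxxxxxxxxxxxxxxxx

 # Verify that both nodes are registered as control-plane members
 kubectl get node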
  