Kubernetes calico node issue - running 0/1

Problem description

Hi, I have two virtual machines on a local Ubuntu 20.04 server and I want to build a small cluster for my microservices. I ran the steps below to set up my cluster, but I am having a problem with the calico-nodes: they are running 0/1.

master.domain.com

  • Ubuntu 20.04
  • docker --version = Docker version 20.10.7, build f0df350
  • kubectl version = Client Version: version.Info{Major:"1",Minor:"20",GitVersion:"v1.20.4",GitCommit:"e87da0bd6e03ec3fea7933c4b5263d151aafd07c",GitTreeState:"clean",BuildDate:"2021-02-18T16:12:00Z",GoVersion:"go1.15.8",Compiler:"gc",Platform:"linux/amd64"}

worker.domain.com

  • Ubuntu 20.04
  • docker --version = Docker version 20.10.2, build 20.10.2-0ubuntu1~20.04.2
  • kubectl version = Client Version: version.Info{Major:"1",Platform:"linux/amd64"}

Step 1

On the master.domain.com virtual machine, I ran the following commands:

sudo kubeadm init --pod-network-cidr=192.168.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
curl https://docs.projectcalico.org/manifests/calico.yaml -O
kubectl apply -f calico.yaml
kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE     IP               NODE    NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-7f4f5bf95d-gnll8   1/1     Running   0          38s     192.168.29.195   master   <none>           <none>
kube-system   calico-node-7zmtm                          1/1     Running   0          38s     195.251.3.255    master   <none>           <none>
kube-system   coredns-74ff55c5b-ltn9g                    1/1     Running   0          3m49s   192.168.29.193   master   <none>           <none>
kube-system   coredns-74ff55c5b-nkhzf                    1/1     Running   0          3m49s   192.168.29.194   master   <none>           <none>
kube-system   etcd-kubem                                 1/1     Running   0          4m6s    195.251.3.255    master   <none>           <none>
kube-system   kube-apiserver-kubem                       1/1     Running   0          4m6s    195.251.3.255    master   <none>           <none>
kube-system   kube-controller-manager-kubem              1/1     Running   0          4m6s    195.251.3.255    master   <none>           <none>
kube-system   kube-proxy-2cr2x                           1/1     Running   0          3m49s   195.251.3.255    master   <none>           <none>
kube-system   kube-scheduler-kubem                       1/1     Running   0          4m6s    195.251.3.255    master   <none>           <none>
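As a sanity check before joining the worker (a suggested step, not part of the original question), it can help to confirm that the control-plane node itself reports Ready, which verifies that the Calico rollout gave the node a working CNI:

```shell
# Suggested check: the node should show STATUS "Ready" once Calico is up.
kubectl get nodes -o wide
```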

Step 2

On the worker.domain.com virtual machine, I ran the following command:

sudo kubeadm join 195.251.3.255:6443 --token azuist.xxxxxxxxxxx  --discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxx
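If this step ever fails because the bootstrap token has expired (tokens are valid for 24 hours by default), a fresh join command can be printed on the control-plane node. This is a standard kubeadm helper, mentioned here as a side note rather than something from the original post:

```shell
# Run on the master: prints a ready-to-use "kubeadm join ..." line
# with a fresh token and the matching CA cert hash.
sudo kubeadm token create --print-join-command
```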

Step 3

On the master.domain.com virtual machine, I ran the following commands:

kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE     IP               NODE    NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-7f4f5bf95d-gnll8   1/1     Running   0          6m37s   192.168.29.195   master   <none>           <none>
kube-system   calico-node-7zmtm                          0/1     Running   0          6m37s   195.251.3.255    master   <none>           <none>
kube-system   calico-node-wccnb                          0/1     Running   0          2m19s   195.251.3.230    worker   <none>           <none>
kube-system   coredns-74ff55c5b-ltn9g                    1/1     Running   0          9m48s   192.168.29.193   master   <none>           <none>
kube-system   coredns-74ff55c5b-nkhzf                    1/1     Running   0          9m48s   192.168.29.194   master   <none>           <none>
kube-system   etcd-kubem                                 1/1     Running   0          10m     195.251.3.255    master   <none>           <none>
kube-system   kube-apiserver-kubem                       1/1     Running   0          10m     195.251.3.255    master   <none>           <none>
kube-system   kube-controller-manager-kubem              1/1     Running   0          10m     195.251.3.255    master   <none>           <none>
kube-system   kube-proxy-2cr2x                           1/1     Running   0          9m48s   195.251.3.255    master   <none>           <none>
kube-system   kube-proxy-kxw4m                           1/1     Running   0          2m19s   195.251.3.230    worker   <none>           <none>
kube-system   kube-scheduler-kubem                       1/1     Running   0          10m     195.251.3.255    master   <none>           <none>
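READY 0/1 with STATUS Running means the container started but its readiness probe is failing; the fact that the master's calico-node also dropped to 0/1 right after the worker joined is consistent with a BGP session between the two nodes failing to establish. A first diagnostic step (suggested here, not in the original post) is to read the probe's failure message:

```shell
# The Events section at the end of the output typically shows the readiness
# probe error, e.g. "calico/node is not ready: BIRD is not ready:
# BGP not established with <peer IP>".
kubectl describe pod -n kube-system calico-node-wccnb
```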

kubectl logs -n kube-system calico-node-7zmtm
...
...
2021-06-20 17:10:25.064 [INFO][56] monitor-addresses/startup.go 774: Using autodetected IPv4 address on interface eth0: 195.251.3.255/24
2021-06-20 17:10:34.862 [INFO][53] Felix/summary.go 100: Summarising 11 dataplane reconciliation loops over 1m3.5s: avg=4ms longest=13ms ()
kubectl logs -n kube-system calico-node-wccnb
...
...
2021-06-20 17:10:59.818 [INFO][55] Felix/summary.go 100: Summarising 8 dataplane reconciliation loops over 1m3.6s: avg=3ms longest=13ms (resync-filter-v4,resync-nat-v4,resync-raw-v4)
2021-06-20 17:11:05.994 [INFO][51] monitor-addresses/startup.go 774: Using autodetected IPv4 address on interface br-9a88318dda68: 172.21.0.1/16
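Note that the worker's calico-node autodetected its IPv4 address on br-9a88318dda68 (172.21.0.1), which looks like a Docker bridge, while the master detected eth0. When Calico binds to the wrong interface, BGP peering between the nodes cannot establish and the pods stay 0/1. A commonly suggested remedy (an assumption here, not a confirmed fix for this cluster) is to pin Calico's IP autodetection to the interface that actually carries node-to-node traffic:

```shell
# Hypothetical fix: restrict Calico's IPv4 autodetection to interfaces
# matching eth.* (adjust the regex to the real NIC names on the VMs).
kubectl set env daemonset/calico-node -n kube-system \
  IP_AUTODETECTION_METHOD=interface=eth.*

# The DaemonSet rolls the pods; watch them come back, ideally 1/1 Ready.
kubectl get pods -n kube-system -l k8s-app=calico-node -w
```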

As you can see, both calico nodes are running 0/1. Why?

Any idea how to fix this?

Thanks

Solution

No working solution for this problem has been found yet.
