Kubernetes cluster master node shows NotReady, coredns is Pending and weave is in CrashLoopBackOff

Problem description

I have installed a Kubernetes cluster on CentOS 8, but the node status shows NotReady. In the kube-system namespace, the coredns pods are stuck in Pending and the weave-net pod is in CrashLoopBackOff. I have already reinstalled, and the taint command still does not work. How can I fix this?

# kubectl get nodes
NAME          STATUS     ROLES    AGE   VERSION
K8s-Master   NotReady   master   42m   v1.18.8

# kubectl get pods -o wide --all-namespaces
NAMESPACE     NAME                                  READY   STATUS             RESTARTS   AGE   IP                NODE          NOMINATED NODE   READINESS GATES
kube-system   coredns-66bff467f8-5vtjf              0/1      Pending            0          42m   <none>            <none>        <none>           <none>
kube-system   coredns-66bff467f8-pr6pt              0/1      Pending            0          42m   <none>            <none>        <none>           <none>
kube-system   etcd-K8s-Master                       1/1      Running            0          42m   90.91.92.93   K8s-Master        <none>           <none>
kube-system   kube-apiserver-K8s-Master             1/1      Running            0          42m   90.91.92.93   K8s-Master        <none>           <none>
kube-system   kube-controller-manager-K8s-Master    1/1      Running            0          42m   90.91.92.93   K8s-Master        <none>           <none>
kube-system   kube-proxy-pw2bk                      1/1      Running            0          42m   90.91.92.93   K8s-Master        <none>           <none>
kube-system   kube-scheduler-K8s-Master             1/1      Running            0          42m   90.91.92.93   K8s-Master        <none>           <none>
kube-system   weave-net-k4mdf                       1/2      CrashLoopBackOff   12         41m   90.91.92.93   K8s-Master        <none>           <none>

# kubectl describe pod coredns-66bff467f8-pr6pt --namespace=kube-system
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  70s (x33 over 43m)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.

# kubectl describe node | grep -i taint
Taints:             node.kubernetes.io/not-ready:NoExecute

# kubectl taint nodes --all node.kubernetes.io/not-ready:NoExecute
error: node K8s-Master already has node.kubernetes.io/not-ready taint(s) with same effect(s) and --overwrite is false

# kubectl describe pod weave-net-k4mdf --namespace=kube-system
Events:
  Type     Reason     Age                   From                  Message
  ----     ------     ----                  ----                  -------
  Normal   Scheduled  43m                   default-scheduler    Successfully assigned kube-system/weave-net-k4mdf to K8s-Master
  Normal   Pulling    43m                   kubelet, K8s-Master  Pulling image "docker.io/weaveworks/weave-kube:2.7.0"
  Normal   Pulled     43m                   kubelet, K8s-Master  Successfully pulled image "docker.io/weaveworks/weave-kube:2.7.0"
  Normal   Pulling    43m                   kubelet, K8s-Master  Pulling image "docker.io/weaveworks/weave-npc:2.7.0"
  Normal   Pulled     42m                   kubelet, K8s-Master  Successfully pulled image "docker.io/weaveworks/weave-npc:2.7.0"
  Normal   Started    42m                   kubelet, K8s-Master  Started container weave-npc
  Normal   Created    42m                   kubelet, K8s-Master  Created container weave-npc
  Normal   Started    42m (x4 over 43m)     kubelet, K8s-Master  Started container weave
  Normal   Created    42m (x4 over 43m)     kubelet, K8s-Master  Created container weave
  Normal   Pulled     42m (x3 over 42m)     kubelet, K8s-Master  Container image "docker.io/weaveworks/weave-kube:2.7.0" already present on machine
  Warning  BackOff    3m1s (x191 over 42m)  kubelet, K8s-Master  Back-off restarting failed container
  Normal   Pulled     33s (x4 over 118s)    kubelet, K8s-Master  Container image "docker.io/weaveworks/weave-kube:2.7.0" already present on machine
  Normal   Created    33s (x4 over 118s)    kubelet, K8s-Master  Created container weave
  Normal   Started    33s (x4 over 118s)    kubelet, K8s-Master  Started container weave
  Warning  BackOff    5s (x10 over 117s)    kubelet, K8s-Master  Back-off restarting failed container

# kubectl logs weave-net-k4mdf -c weave --namespace=kube-system
ipset v7.2: Set cannot be destroyed: it is in use by a kernel component

Solution

ipset v7.2: Set cannot be destroyed: it is in use by a kernel component

The error above is caused by a race condition: weave's launch script creates a test ipset (weave-kube-test) and destroys it immediately afterwards, and the destroy can run while the kernel still holds a reference to the set. The NotReady node, the not-ready taint, and the Pending coredns pods are all downstream symptoms of weave failing to start. Note that the kubectl taint command above fails because without a trailing `-` it tries to *add* the taint, which already exists; and even removing it by hand would not help, since the node-lifecycle controller re-adds it as long as the node is NotReady and removes it automatically once the node becomes Ready.

As referenced in this issue, you can edit the weave DaemonSet YAML (for example with `kubectl -n kube-system edit daemonset weave-net`) and override the weave container's command with the workaround below.

              command:
                - /bin/sh
                - -c
                - sed '/ipset destroy weave-kube-test$/ i sleep 1' /home/weave/launch.sh | /bin/sh
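To see what this command override actually does, here is a minimal sketch. The two `ipset` lines stand in for the relevant part of weave's `/home/weave/launch.sh` (the real script is much longer and is not run here); the sed expression is the one from the workaround:

```shell
# Hypothetical stand-in for the relevant lines of /home/weave/launch.sh,
# used only to demonstrate the sed expression; the real script is longer.
cat > /tmp/launch-snippet.sh <<'EOF'
ipset create weave-kube-test hash:ip
ipset destroy weave-kube-test
EOF

# The workaround inserts "sleep 1" before the line ending in
# "ipset destroy weave-kube-test", giving the kernel a moment to release
# its reference to the set before the destroy runs. The workaround then
# pipes the modified script into /bin/sh instead of executing it directly.
sed '/ipset destroy weave-kube-test$/ i sleep 1' /tmp/launch-snippet.sh
# → ipset create weave-kube-test hash:ip
# → sleep 1
# → ipset destroy weave-kube-test
```

Because the container's command pipes the edited script into `/bin/sh`, the on-disk `launch.sh` inside the image is never modified; the delay only exists at runtime.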

So the weave DaemonSet would look like:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: weave-net
  annotations:
    cloud.weave.works/launcher-info: |-
      {
        "original-request": {
          "url": "/k8s/v1.13/net.yaml","date": "Fri Aug 14 2020 07:36:34 GMT+0000 (UTC)"
        },"email-address": "[email protected]"
      }
  labels:
    name: weave-net
  namespace: kube-system
spec:
  minReadySeconds: 5
  selector:
    matchLabels:
      name: weave-net
  template:
    metadata:
      labels:
        name: weave-net
    spec:
      containers:
        - name: weave
          command:
            - /bin/sh
            - -c
            - sed '/ipset destroy weave-kube-test$/ i sleep 1' /home/weave/launch.sh | /bin/sh
...
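To apply the fix, save the modified manifest and roll it out, then verify recovery. The filename below is an assumption; use your own copy of the manifest (or skip the apply step entirely if you edited the DaemonSet in place with `kubectl edit`):

```shell
# Apply the patched DaemonSet (filename is hypothetical).
kubectl apply -f weave-net.yaml

# Wait for the weave-net pod to restart with the new command.
kubectl -n kube-system rollout status daemonset/weave-net

# Verify: weave-net should reach 2/2 Running, then the node turns Ready
# and the Pending coredns pods get scheduled.
kubectl -n kube-system get pods -o wide
kubectl get nodes
```

Once weave is running, the kubelet reports the node Ready and the not-ready taint is cleared automatically; no manual taint command is needed.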