PVC stuck Pending: waiting for a volume to be created, either by external provisioner "rook-ceph.rbd.csi.ceph.com" or manually

Problem description

I built a ceph cluster with Rook, but my PVC is stuck in the Pending state. When I run kubectl describe pvc, I see this event from the persistentvolume-controller:

waiting for a volume to be created, either by external provisioner "rook-ceph.rbd.csi.ceph.com" or manually created by system administrator
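For reference, this event means the PVC is waiting on the CSI provisioner named in its StorageClass. A claim that routes to this provisioner looks roughly like the following sketch; the names and parameters (rook-ceph-block, replicapool, the secret names) are taken from the default Rook examples and are assumptions, not values from this cluster:

```yaml
# Hypothetical sketch: a StorageClass wired to the RBD provisioner named in
# the event, and a PVC that binds through it. Values follow the Rook defaults
# and may differ in the actual cluster.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  pool: replicapool
  imageFormat: "2"
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: rook-ceph-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

If the StorageClass, its secrets, or the pool it points at are missing or unhealthy, the provisioner never answers and the PVC stays Pending with exactly this event.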

All of my pods are Running:

NAME                                                     READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-ntqk6                                   3/3     Running     0          14d
csi-cephfsplugin-pqxdw                                   3/3     Running     6          14d
csi-cephfsplugin-provisioner-c68f789b8-dt4jf             6/6     Running     49         14d
csi-cephfsplugin-provisioner-c68f789b8-rn42r             6/6     Running     73         14d
csi-rbdplugin-6pgf4                                      3/3     Running     0          14d
csi-rbdplugin-l8fkm                                      3/3     Running     6          14d
csi-rbdplugin-provisioner-6c75466c49-tzqcr               6/6     Running     106        14d
csi-rbdplugin-provisioner-6c75466c49-x8675               6/6     Running     17         14d
rook-ceph-crashcollector-compute08.dc-56b86f7c4c-9mh2j   1/1     Running     2          12d
rook-ceph-crashcollector-compute09.dc-6998676d86-wpsrs   1/1     Running     0          12d
rook-ceph-crashcollector-compute10.dc-684599bcd8-7hzlc   1/1     Running     0          12d
rook-ceph-mgr-a-69fd54cccf-tjkxh                         1/1     Running     200        12d
rook-ceph-mon-at-8568b88589-2bm5h                        1/1     Running     0          4d3h
rook-ceph-mon-av-7b4444c8f4-2mlpc                        1/1     Running     0          4d1h
rook-ceph-mon-aw-7df9f76fcd-zzmkw                        1/1     Running     0          4d1h
rook-ceph-operator-7647888f87-zjgsj                      1/1     Running     1          15d
rook-ceph-osd-0-6db4d57455-p4cz9                         1/1     Running     2          12d
rook-ceph-osd-1-649d74dc6c-5r9dj                         1/1     Running     0          12d
rook-ceph-osd-2-7c57d4498c-dh6nk                         1/1     Running     0          12d
rook-ceph-osd-prepare-compute08.dc-gxt8p                 0/1     Completed   0          3h9m
rook-ceph-osd-prepare-compute09.dc-wj2fp                 0/1     Completed   0          3h9m
rook-ceph-osd-prepare-compute10.dc-22kth                 0/1     Completed   0          3h9m
rook-ceph-tools-6b4889fdfd-d6xdg                         1/1     Running     0          12d

Here is the output of kubectl logs -n rook-ceph csi-cephfsplugin-provisioner-c68f789b8-dt4jf csi-provisioner:

I0120 11:57:13.283362       1 csi-provisioner.go:121] Version: v2.0.0
I0120 11:57:13.283493       1 csi-provisioner.go:135] Building kube configs for running in cluster...
I0120 11:57:13.294506       1 connection.go:153] Connecting to unix:///csi/csi-provisioner.sock
I0120 11:57:13.294984       1 common.go:111] Probing CSI driver for readiness
W0120 11:57:13.296379       1 metrics.go:142] metrics endpoint will not be started because `metrics-address` was not specified.
I0120 11:57:13.299629       1 leaderelection.go:243] attempting to acquire leader lease  rook-ceph/rook-ceph-cephfs-csi-ceph-com...

And here is ceph status from the toolbox container:

  cluster:
    id:     0b71fd4c-9731-4fea-81a7-1b5194e14204
    health: HEALTH_ERR
            Module 'dashboard' has failed: [('x509 certificate routines', 'X509_check_private_key', 'key values mismatch')]
            Degraded data redundancy: 2/6 objects degraded (33.333%), 1 pg degraded, 1 pg undersized
            1 pgs not deep-scrubbed in time
            1 pgs not scrubbed in time
  services:
    mon: 3 daemons, quorum at,av,aw (age 4d)
    mgr: a (active, since 4d)
    osd: 3 osds: 3 up (since 12d), 3 in (since 12d)
  data:
    pools:   1 pools, 1 pgs
    objects: 2 objects, 0 B
    usage:   3.3 GiB used, 3.2 TiB / 3.2 TiB avail
    pgs:     2/6 objects degraded (33.333%)
             1 active+undersized+degraded

I suspect this is because the cluster health is HEALTH_ERR, but I don't know how to fix it. I built the ceph cluster on raw partitions: one partition on one node, and two partitions on another node.
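The active+undersized+degraded PG is consistent with that layout: with the default pool replica size of 3 and the default CRUSH failure domain of host, Ceph cannot place three replicas when the OSDs sit on only two hosts. A hedged troubleshooting sketch, run inside the rook-ceph-tools pod (the pool name replicapool is an assumption; confirm it first):

```shell
ceph osd tree                 # how OSDs are distributed across hosts
ceph osd pool ls detail       # each pool's size (replica count) and min_size
ceph pg dump_stuck undersized # which PGs cannot reach their target size
# If the pool wants 3 replicas but OSDs live on only 2 hosts, either add an
# OSD on a third host, or (accepting lower redundancy) shrink the pool:
#   ceph osd pool set replicapool size 2
# The dashboard certificate error is a separate issue; regenerating the
# self-signed cert and restarting the module usually clears it:
#   ceph dashboard create-self-signed-cert
#   ceph mgr module disable dashboard && ceph mgr module enable dashboard
```

These are standard Ceph CLI commands, but whether shrinking the pool is acceptable depends on how much redundancy you need; adding a third OSD host is the safer fix.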

I noticed that a few pods had restarted several times, so I looked at their logs. In the csi-rbdplugin-provisioner pods, the csi-resizer, csi-attacher and csi-snapshotter containers all show the same error:

E0122 08:08:37.891106       1 leaderelection.go:321] error retrieving resource lock rook-ceph/external-resizer-rook-ceph-rbd-csi-ceph-com: Get "https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-resizer-rook-ceph-rbd-csi-ceph-com": dial tcp 10.96.0.1:443: I/O timeout
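This i/o timeout means the container cannot reach the kube-apiserver at its Service ClusterIP (10.96.0.1 is the usual default), which would also explain the high restart counts and why leader election stalls. That usually points at kube-proxy or the CNI on that node rather than at Rook. A hedged reachability check, assuming a default cluster layout:

```shell
kubectl get svc kubernetes -n default   # confirm the ClusterIP really is 10.96.0.1
# Probe the apiserver Service from inside the affected namespace; a 5s timeout
# here (rather than an HTTP response of any kind) confirms the network problem:
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- \
  curl -k -m 5 https://10.96.0.1:443/healthz
# Check the node-level networking components for crashes or restarts:
kubectl get pods -n kube-system -o wide | grep -E 'kube-proxy|flannel|calico|cilium'
```

If the probe times out only from pods on one node, compare that node's kube-proxy/CNI pod against the healthy nodes.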

as well as a repeated error in csi-snapshotter:

E0122 08:08:48.420082       1 reflector.go:127] github.com/kubernetes-csi/external-snapshotter/client/v3/informers/externalversions/factory.go:117: Failed to watch *v1beta1.VolumeSnapshotClass: Failed to list *v1beta1.VolumeSnapshotClass: the server could not find the requested resource (get volumesnapshotclasses.snapshot.storage.k8s.io)
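This particular error means the VolumeSnapshotClass CRD (at version v1beta1, which this csi-snapshotter expects) is not installed in the cluster; it breaks snapshots but is not what blocks plain PVC provisioning. A hedged check and fix sketch:

```shell
# If this prints nothing, the external-snapshotter CRDs were never installed:
kubectl get crd | grep snapshot.storage.k8s.io
# The CRDs ship with the kubernetes-csi/external-snapshotter project and must
# match the API version the sidecar expects (v1beta1 here). Installing them is
# roughly (branch/URL are assumptions; check the external-snapshotter releases):
#   kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/release-4.0/client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
```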

As for the mgr pod, it keeps logging:

debug 2021-01-29T00:47:22.155+0000 7f10fdb48700  0 log_channel(cluster) log [DBG] : pgmap v28775: 1 pgs: 1 active+undersized+degraded; 0 B data, 337 MiB used, 3.2 TiB / 3.2 TiB avail; 2/6 objects degraded (33.333%)

It also seems odd that the mon pods are named at, av and aw rather than a, b and c. It looks like the mon pods have been deleted and recreated several times, but I don't know why.

Any suggestions are appreciated.
