Minikube error says a bound PVC cannot be found

Problem description

When I apply the resource file, the pod reports that persistent volume claim "pm-pv-claim" is not found. However, the PVC exists and is bound. Why does the pod say the PVC cannot be found?

Many forum posts suggest the problem is related to how Minikube handles storage, but the solution proposed there, declaring the PV explicitly, did not work for me. Other posts describe PVCs and PVs stuck in an unbound state, which does not apply here.

Details:

cmdline ~> kubectl describe pod pimgr-5df8f8569d-z8xmz
Name:         pimgr-5df8f8569d-z8xmz
Namespace:    default
Priority:     0
Node:         m01/192.168.99.10
Start Time:   Sun, 30 Aug 2020 22:00:24 +0100
Labels:       app=pimgr
              pod-template-hash=5df8f8569d
              tier=frontend
Annotations:  <none>
Status:       Running
IP:           172.17.0.6
IPs:
  IP:           172.17.0.6
Controlled By:  replicaset/pimgr-5df8f8569d
Containers:
  pimgr:
    Container ID:   docker://5ca322e180283135d5b160a54ab0a46404381951d2b1ffa6c034df7c85f638
    Image:          pimgr/django:0.1
    Image ID:       docker://sha256:23abfa4de71126724c1fff6bda67ff8c093f903086718541be61f0be4097b3
    Port:           8010/TCP
    Host Port:      0/TCP
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sun, 30 Aug 2020 22:01:08 +0100
      Finished:     Sun, 30 Aug 2020 22:01:08 +0100
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sun, 30 Aug 2020 22:00:38 +0100
      Finished:     Sun, 30 Aug 2020 22:00:38 +0100
    Ready:          False
    Restart Count:  3
    Environment:    <none>
    Mounts:
      /code from pimgr-persistent-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-llll4 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  pimgr-persistent-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pm-pv-claim
    ReadOnly:   false
  default-token-llll4:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-llll4
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age               From               Message
  ----     ------            ----              ----               -------
  Warning  FailedScheduling  <unknown>         default-scheduler  persistentvolumeclaim "pm-pv-claim" not found
  Warning  FailedScheduling  <unknown>         default-scheduler  error while running "VolumeBinding" filter plugin for pod "pimgr-5df8f8569d-z8xmz": pod has unbound immediate PersistentVolumeClaims
  Warning  FailedScheduling  <unknown>         default-scheduler  error while running "VolumeBinding" filter plugin for pod "pimgr-5df8f8569d-z8xmz": pod has unbound immediate PersistentVolumeClaims
  Normal   Scheduled         <unknown>         default-scheduler  Successfully assigned default/pimgr-5df8f8569d-z8xmz to m01
  Normal   Pulled            1s (x4 over 45s)  kubelet, m01       Container image "pimgr/django:0.1" already present on machine
  Normal   Created           1s (x4 over 45s)  kubelet, m01       Created container pimgr
  Normal   Started           1s (x4 over 45s)  kubelet, m01       Started container pimgr
  Warning  BackOff           0s (x5 over 43s)  kubelet, m01       Back-off restarting failed container

resource.yaml

apiVersion: v1
kind: Service
metadata:
  name: pimgr
  labels:
    app: pimgr
spec:
  ports:
    - port: 8010
  selector:
    app: pimgr
    tier: frontend
  type: LoadBalancer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pm-pv-claim
  labels:
    app: pimgr
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 5Gi
  hostPath:
    path: /data/pv0001/
  storageClassName: standard
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: pimgr
  labels:
    app: pimgr
spec:
  selector:
    matchLabels:
      app: pimgr
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: pimgr
        tier: frontend
    spec:
      containers:
      - image: pimgr/django:0.1
        name: pimgr
        ports:
        - containerPort: 8010
          name: pimgr
        volumeMounts:
        - name: pimgr-persistent-storage
          mountPath: /code
      volumes:
      - name: pimgr-persistent-storage
        persistentVolumeClaim:
          claimName: pm-pv-claim
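For reference, the "explicit binding" suggested in other posts usually means pointing the claim at the volume by name. A minimal sketch, assuming the PV is the `pv0001` defined above; the `volumeName` and `storageClassName` fields on the claim are my additions, not part of the original manifest:

```yaml
# Hypothetical variant of the PVC above: pre-bind it to pv0001 explicitly.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pm-pv-claim
  labels:
    app: pimgr
spec:
  storageClassName: standard   # match the PV's class instead of relying on the default
  volumeName: pv0001           # bind directly to the PV defined above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
```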
cmdline ~> kubectl get pods
NAME                          READY   STATUS             RESTARTS   AGE
pimgr-5df8f8569d-z8xmz        0/1     CrashLoopBackOff   2          36s
cmdline> kubectl get pvc
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pm-pv-claim         Bound    pv0001                                     5Gi        RWO            standard       20m
cmdline> minikube version
minikube version: v1.8.2
commit: eb13446e786c9ef70cb0a9f85a633194e62396a1
cmdline ~> kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                       STORAGECLASS   REASON   AGE
pv0001                                     5Gi        RWO            Retain           Bound    default/pm-pv-claim         standard                13h
cmdline > minikube addons list
|-----------------------------|----------|--------------|
|         ADDON NAME          | PROFILE  |    STATUS    |
|-----------------------------|----------|--------------|
| dashboard                   | minikube | disabled     |
| default-storageclass        | minikube | enabled ✅   |
| efk                         | minikube | disabled     |
| freshpod                    | minikube | disabled     |
| gvisor                      | minikube | disabled     |
| helm-tiller                 | minikube | disabled     |
| ingress                     | minikube | disabled     |
| ingress-dns                 | minikube | disabled     |
| istio                       | minikube | disabled     |
| istio-provisioner           | minikube | disabled     |
| logviewer                   | minikube | disabled     |
| metrics-server              | minikube | disabled     |
| nvidia-driver-installer     | minikube | disabled     |
| nvidia-gpu-device-plugin    | minikube | disabled     |
| registry                    | minikube | disabled     |
| registry-creds              | minikube | disabled     |
| storage-provisioner         | minikube | enabled ✅   |
| storage-provisioner-gluster | minikube | disabled     |
|-----------------------------|----------|--------------|
cmdline > kubectl get storageclass
NAME                 PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
standard (default)   k8s.io/minikube-hostpath   Delete          Immediate           false                  91d

Update

It seems the binding did in fact succeed: I changed spec/containers to:

      containers:
      - image: pimgr/django:0.1
        name: pimgr
        command: ["ls"]
        args: ["-l","/code1"]
        ports:
        - containerPort: 8010
          name: pimgr
        volumeMounts:
        - name: pimgr-persistent-storage
          mountPath: /code1

and then added a file inside the minikube VM with:

$ sudo touch /data/pv0001/hi
$ ls /data/pv0001/
hi

After applying the resources I see:

cmdline ~> kubectl get pods
NAME                          READY   STATUS             RESTARTS   AGE
pimgr-55696d6dc4-gz74c        0/1     Completed          0          4s
cmdline ~> kubectl logs pimgr-55696d6dc4-gz74c
total 0
-rw-r--r-- 1 root root 0 Aug 31 15:06 hi
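As an aside, the `ls` container exits as soon as it prints, which is why the pod cycles through Completed into CrashLoopBackOff even though the mount works. A debug variant that keeps the container alive for inspection could look like this; the `sleep` command is my substitution, not part of the original manifest:

```yaml
      containers:
      - image: pimgr/django:0.1
        name: pimgr
        command: ["sleep"]
        args: ["3600"]        # keep the container running so `kubectl exec` can inspect /code1
        volumeMounts:
        - name: pimgr-persistent-storage
          mountPath: /code1
```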

But the "not found" error is still reported:

cmdline ~> kubectl describe pod pimgr-55696d6dc4-gz74c
Name:         pimgr-55696d6dc4-gz74c
Namespace:    default
Priority:     0
Node:         m01/192.168.99.103
Start Time:   Mon, 31 Aug 2020 16:11:38 +0100
Labels:       app=pimgr
              pod-template-hash=55696d6dc4
              tier=frontend
Annotations:  <none>
Status:       Running
IP:           172.17.0.6
IPs:
  IP:           172.17.0.6
Controlled By:  replicaset/pimgr-55696d6dc4
Containers:
  pimgr:
    Container ID:  docker://4f4ceb807eb4d97e69a6af2da853561ca6c82006de545c68b7d2013da2aacc21
    Image:         pimgr/django:0.1
    Image ID:      docker://sha256:23abfa4de71126724c1fff6bda67ff8c093f903086718541be61f0be4097b3ee
    Port:          8010/TCP
    Host Port:     0/TCP
    Command:
      ls
    Args:
      -l
      /code1
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 31 Aug 2020 16:11:52 +0100
      Finished:     Mon, 31 Aug 2020 16:11:52 +0100
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /code1 from pimgr-persistent-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-llll4 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  pimgr-persistent-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pm-pv-claim
    ReadOnly:   false
  default-token-llll4:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-llll4
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  34s (x2 over 34s)  default-scheduler  persistentvolumeclaim "pm-pv-claim" not found
  Normal   Scheduled         32s                default-scheduler  Successfully assigned default/pimgr-55696d6dc4-gz74c to m01
  Normal   Pulled            18s (x3 over 31s)  kubelet, m01       Container image "pimgr/django:0.1" already present on machine
  Normal   Created           18s (x3 over 31s)  kubelet, m01       Created container pimgr
  Normal   Started           18s (x3 over 31s)  kubelet, m01       Started container pimgr
  Warning  BackOff           6s (x4 over 29s)   kubelet, m01       Back-off restarting failed container

So it does appear to be bound correctly, but my original question remains: why does the scheduler report the PVC as not found? Is this a red herring, or a sign of another problem?
