How to define a shared persistent volume between two different deployments in k8s?

Problem description

I am trying to define a shared persistent volume in k8s between two different deployments, but I have run into some problems:

Each deployment has 2 pods, and I want to configure a shared volume between them. That means if I create a txt file in deployment1/pod1 and then look for it in deployment1/pod2, I can't see the file.

The second problem is that I also can't see the file from the other deployment (deployment2). What currently happens is that each pod creates its own separate volume instead of all of them sharing the same one.
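For reference, this is roughly how I check it (the pod names below are placeholders for the actual generated names):

$ kubectl exec -n test app1-pod-a -- sh -c 'echo hello > /etc/test/configs/test.txt'
$ kubectl exec -n test app1-pod-b -- ls /etc/test/configs   # test.txt is missing here
$ kubectl exec -n test app2-pod-a -- ls /etc/test/configs   # and missing here too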

Ultimately, my goal is to create a single volume that is shared across the pods and the deployments. It is important to note that I am running on GKE.

Here is my current configuration

Deployment 1:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1
  namespace: test
spec:
  selector:
    matchLabels:
      app: app1
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
        - name: server
          image: app1
          ports:
            - name: grpc
              containerPort: 11111
          resources:
            requests:
              cpu: 300m
            limits:
              cpu: 500m
          volumeMounts:
            - name: test
              mountPath: /etc/test/configs
      volumes:
        - name: test
          persistentVolumeClaim:
            claimName: my-claim

Deployment 2:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app2
  namespace: test
spec:
  selector:
    matchLabels:
      app: app2
  template:
    metadata:
      labels:
        app: app2
    spec:
      containers:
        - name: server
          image: app2
          ports:
            - name: http
              containerPort: 22222
          resources:
            requests:
              cpu: 300m
            limits:
              cpu: 500m
          volumeMounts:
            - name: test
              mountPath: /etc/test/configs
      volumes:
        - name: test
          persistentVolumeClaim:
            claimName: my-claim

PersistentVolume, PersistentVolumeClaim and StorageClass:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-pv
  namespace: test
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: fast
  local:
    path: /etc/test/configs
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: cloud.google.com/gke-nodepool
          operator: In
          values:
          - default-pool
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim
  namespace: test
  annotations:
    volume.beta.kubernetes.io/storage-class: fast
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: fast
  resources:
    requests:
      storage: 5Gi
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  fstype: ext4
  replication-type: regional-pd

And the output of describe for the PVC and PV:

$ kubectl describe pvc -n test
Name:          my-claim
Namespace:     test
StorageClass:  fast
Status:        Bound
Volume:        test-pv
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-class: fast
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      5Gi
Access Modes:  RWX
VolumeMode:    Filesystem
Mounted By:    <none>
Events:        <none>

$ kubectl describe pv -n test
Name:              test-pv
Labels:            <none>
Annotations:       pv.kubernetes.io/bound-by-controller: yes
Finalizers:        [kubernetes.io/pv-protection]
StorageClass:      fast
Status:            Bound
Claim:             test/my-claim
Reclaim Policy:    Retain
Access Modes:      RWX
VolumeMode:        Filesystem
Capacity:          5Gi
Node Affinity:
  Required Terms:
    Term 0:        cloud.google.com/gke-nodepool in [default-pool]
Message:
Source:
    Type:  LocalVolume (a persistent volume backed by local storage on a node)
    Path:  /etc/test/configs
Events:    <none>

Solution

The GCE-PD CSI storage driver does not support ReadWriteMany. You need to use ReadOnlyMany instead. For ReadWriteMany, you would need to use a shared-filesystem mount such as GFS.
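For completeness, here is a minimal sketch of what a ReadWriteMany volume backed by a shared filesystem could look like. It assumes an NFS export; the server address and path are placeholders, not part of the original setup (on GKE, Filestore is a managed way to get such an NFS export):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-nfs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  nfs:
    server: nfs-server.example.internal   # placeholder NFS server address
    path: /exports                        # placeholder export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-nfs-pvc
  namespace: test
spec:
  storageClassName: ""
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi

Both deployments would then mount it via claimName: shared-nfs-pvc, and a file written by any pod on any node becomes visible to all the others.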

From the docs, using a persistent disk with multiple readers:

Create the PersistentVolume and PersistentVolumeClaim:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-readonly-pv
spec:
  storageClassName: ""
  capacity:
    storage: 10Gi
  accessModes:
    - ReadOnlyMany
  claimRef:
    namespace: default
    name: my-readonly-pvc
  gcePersistentDisk:
    pdName: my-test-disk
    fsType: ext4
    readOnly: true
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-readonly-pvc
spec:
  # Specify "" as the storageClassName so it matches the PersistentVolume's StorageClass.
  # A nil storageClassName value uses the default StorageClass. For details, see
  # https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class-1
  storageClassName: ""
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 10Gi

Use the PersistentVolumeClaim in a Pod:

apiVersion: v1
kind: Pod
metadata:
  name: pod-pvc
spec:
  containers:
  - image: k8s.gcr.io/busybox
    name: busybox
    command:
      - "sleep"
      - "3600"
    volumeMounts:
    - mountPath: /test-mnt
      name: my-volume
      readOnly: true
  volumes:
  - name: my-volume
    persistentVolumeClaim:
      claimName: my-readonly-pvc
      readOnly: true

Now you can have multiple pods on different nodes that all mount this PersistentVolumeClaim in read-only mode. You cannot, however, attach a persistent disk in write mode on multiple nodes at the same time.
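Note that gcePersistentDisk refers to a pre-existing disk: my-test-disk must already exist (and already contain the data, since the pods mount it read-only) before the PV can attach. Creating it could look like the following; the zone is an assumption, use your cluster's zone:

$ gcloud compute disks create my-test-disk --size=10GB --zone=us-central1-a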
