Problem description
K8s newbie here.
StatefulSets let you create pods with a) predefined names and b) a fixed order. In my case I don't need the ordering (b), and it is causing me trouble. (a) is useful to me, because I need to keep state if a container dies.
For example, I have [ pod-0, pod-1, pod-2 ] and just want pod-0 to die. Here is what happens:
This is what I expect:
1. [ pod-0:Running, pod-1:Running, pod-2:Running ]
2. My app needs to scale to 2 replicas by killing pod-0, so "k delete pod/pod-0" and "Replicas: 2"
3. [ pod-0:Terminating, pod-2:Running ]
4. [ pod-1:Running, pod-2:Running ] <- I want to keep this state!
This is what I do NOT want, but cannot stop K8s from doing:
5. [ pod-0:Starting, pod-2:Running ] (K8s shifts the pipe!!!)
6. [ pod-0:Running, pod-2:Terminating ] (K8s shifts the pipe!!!)
7. [ pod-0:Running, pod-1:Running ] (K8s shifts the pipe!!!)
How can I get the desired behavior (keep a set of non-ordinally-named pods) with K8s?
I have seen a promising "Advanced StatefulSet" from OpenKruise (openkruise.io) that can do this, but the project does not yet seem mature enough for production use. At least, it did not work for me on minikube (minikube 1.16.0, docker 19.03.13, OpenKruise 0.7.0).
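For reference, OpenKruise's Advanced StatefulSet is a CRD that replaces the built-in controller. The sketch below is hypothetical and untested: it assumes the `apps.kruise.io` API group and the `reserveOrdinals` field (which lets specific ordinals be skipped, so that with 2 replicas pod-0 can stay gone while pod-1 and pod-2 survive) — verify both the apiVersion and the field name against the OpenKruise version you actually install:

```yaml
# Hypothetical sketch, NOT verified against a running cluster: assumes
# OpenKruise's Advanced StatefulSet CRD and its reserveOrdinals field.
apiVersion: apps.kruise.io/v1beta1
kind: StatefulSet
metadata:
  name: contextcf
spec:
  replicas: 2
  reserveOrdinals:
    - 0          # skip ordinal 0, so two replicas means pod-1 and pod-2
  serviceName: contextcf
  selector:
    matchLabels:
      name: contextcf
  template:
    metadata:
      labels:
        name: contextcf
    spec:
      containers:
        - name: contextcf
          image: (my-registry)/contextcf:1.0.0
```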
Someone asked for my deployment file, so here it is:
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: contextcf
  labels:
    name: contextcf
spec:
  serviceName: contextcf
  selector:
    matchLabels:
      name: contextcf
  replicas: 3
  template:
    metadata:
      labels:
        name: contextcf
    spec:
      containers:
        - name: contextcf
          image: (my-registry)/contextcf:1.0.0
          ports:
            - name: web
              containerPort: 80
      # Volume sections removed, no issues there. The application is as simple as this.
Solution
Could you attach your YAML file?

> I have [ pod-0, pod-1, pod-2 ] and just want pod-0 to die, but this is what happens

I cannot reproduce this with the simplest StatefulSet:
$ cat sts.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
Newly created pods controlled by the StatefulSet:
$ k -n test2 get pods -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP             NODE       NOMINATED NODE   READINESS GATES
web-0   1/1     Running   0          19s   10.8.252.144   k8s-vm04   <none>           <none>
web-1   1/1     Running   0          12s   10.8.252.76    k8s-vm03   <none>           <none>
web-2   1/1     Running   0          6s    10.8.253.8     k8s-vm02   <none>           <none>
Try to delete web-0:
$ k -n test2 delete pod web-0
pod "web-0" deleted
web-0 is Terminating:
$ k -n test2 get pods -o wide
NAME    READY   STATUS        RESTARTS   AGE   IP             NODE       NOMINATED NODE   READINESS GATES
web-0   0/1     Terminating   0          47s   10.8.252.144   k8s-vm04   <none>           <none>
web-1   1/1     Running       0          40s   10.8.252.76    k8s-vm03   <none>           <none>
web-2   1/1     Running       0          34s   10.8.253.8     k8s-vm02   <none>           <none>
web-0 is being created again:
$ k -n test2 get pods -o wide
NAME    READY   STATUS              RESTARTS   AGE   IP             NODE       NOMINATED NODE   READINESS GATES
web-0   0/1     ContainerCreating   0          1s    <none>         k8s-vm04   <none>           <none>
web-1   1/1     Running             0          45s   10.8.252.76    k8s-vm03   <none>           <none>
web-2   1/1     Running             0          39s   10.8.253.8     k8s-vm02   <none>           <none>
All pods are Running again:
NAME    READY   STATUS    RESTARTS   AGE     IP             NODE       NOMINATED NODE   READINESS GATES
web-0   1/1     Running   0          1m21s   10.8.252.145   k8s-vm04   <none>           <none>
web-1   1/1     Running   0          2m59s   10.8.252.76    k8s-vm03   <none>           <none>
web-2   1/1     Running   0          2m5s    10.8.253.8     k8s-vm02   <none>           <none>
The other pods kept running and were never put into a Terminating state.
If you are talking about scaling the StatefulSet, then statefulset.spec.podManagementPolicy may help you:
$ k explain statefulset.spec.podManagementPolicy
KIND:     StatefulSet
VERSION:  apps/v1

FIELD:    podManagementPolicy <string>

DESCRIPTION:
     podManagementPolicy controls how pods are created during initial scale up,
     when replacing pods on nodes, or when scaling down. The default policy is
     `OrderedReady`, where pods are created in increasing order (pod-0, then
     pod-1, etc) and the controller will wait until each pod is ready before
     continuing. When scaling down, the pods are removed in the opposite order.
     The alternative policy is `Parallel` which will create pods in parallel to
     match the desired scale without waiting, and on scale down will delete all
     pods at once.
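As a sketch, the field sits at the top level of the StatefulSet spec; here it is applied to the contextcf manifest from the question. Note the caveat: `Parallel` only removes the ordered-and-ready waiting behavior — pods are still named by ordinal, and scaling down still deletes the highest ordinals, so this does not by itself give non-ordinal pod retention:

```yaml
# Sketch: setting podManagementPolicy on the manifest from the question.
# `Parallel` changes creation/deletion ordering, not pod naming.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: contextcf
spec:
  serviceName: contextcf
  podManagementPolicy: Parallel   # default is OrderedReady
  replicas: 3
  selector:
    matchLabels:
      name: contextcf
  template:
    metadata:
      labels:
        name: contextcf
    spec:
      containers:
        - name: contextcf
          image: (my-registry)/contextcf:1.0.0
          ports:
            - name: web
              containerPort: 80
```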