Liveness Probe / Readiness Probe not called at the expected intervals

Problem description

On GKE, I am trying to use readiness/liveness probes together with Monitoring alerting: https://cloud.google.com/monitoring/alerts/using-alerting-ui

As a test, I created a pod with a readiness probe and a liveness probe. As I expected, every probe check failed.

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness
    args:
    - /server
    readinessProbe:
      httpGet:
        path: /healthz
        port: 8080
        httpHeaders:
        - name: X-Custom-Header
          value: Awesome
      initialDelaySeconds: 0
      periodSeconds: 10      
      timeoutSeconds: 10
      successThreshold: 1
      failureThreshold: 3
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
        httpHeaders:
        - name: X-Custom-Header
          value: Awesome
      initialDelaySeconds: 20
      periodSeconds: 60
      timeoutSeconds: 30      
      successThreshold: 1
      failureThreshold: 3 
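
For reference, a minimal way to create this pod and inspect its probes (assuming the manifest is saved as liveness-http.yaml):

$ kubectl apply -f liveness-http.yaml
$ kubectl describe pod liveness-http    # shows the configured probes and the recent probe events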

Checking the GCP logs, at first the error logs appeared at the intervals set by periodSeconds.

Readiness probe: every 10 seconds

2021-02-21 13:26:30.000 JST Readiness probe failed: HTTP probe failed with statuscode: 500

2021-02-21 13:26:40.000 JST Readiness probe failed: HTTP probe failed with statuscode: 500

Liveness probe: every 1 minute

2021-02-21 13:25:40.000 JST Liveness probe failed: HTTP probe failed with statuscode: 500

2021-02-21 13:26:40.000 JST Liveness probe failed: HTTP probe failed with statuscode: 500

However, after running this pod for a few minutes:

  • the liveness probe check was no longer called
  • the readiness probe check was still called, but the interval grew longer (the maximum interval looks to be about 10 minutes)
$ kubectl get event
LAST SEEN   TYPE      REASON      OBJECT              MESSAGE
30m         Normal    Pulling     pod/liveness-http   Pulling image "k8s.gcr.io/liveness"
25m         Warning   Unhealthy   pod/liveness-http   Readiness probe failed: HTTP probe failed with statuscode: 500
20m         Warning   BackOff     pod/liveness-http   Back-off restarting failed container
20m         Normal    Scheduled   pod/liveness-http   Successfully assigned default/liveness-http to gke-cluster-default-pool-8bc9c75c-rfgc
17m         Normal    Pulling     pod/liveness-http   Pulling image "k8s.gcr.io/liveness"
17m         Normal    Pulled      pod/liveness-http   Successfully pulled image "k8s.gcr.io/liveness"
17m         Normal    Created     pod/liveness-http   Created container liveness
20m         Normal    Started     pod/liveness-http   Started container liveness
4m59s       Warning   Unhealthy   pod/liveness-http   Readiness probe failed: HTTP probe failed with statuscode: 500
17m         Warning   Unhealthy   pod/liveness-http   Liveness probe failed: HTTP probe failed with statuscode: 500
17m         Normal    Killing     pod/liveness-http   Container liveness failed liveness probe, will be restarted

In my plan, I will create alert policies with conditions like:

  • a liveness probe error occurs 3 times within 3 minutes

However, if the probe checks are not called as I expect, these policies do not work; the alert incident resolves itself even though the pod is not running properly.


Why did the liveness probe stop running, and why did the interval of the readiness probe change?

Note: if there is another good alert policy for checking whether pods are alive, I do not mind this behavior. I would appreciate any suggestion for the alert policy best suited to checking Pods.

Solution

Background

In the Configure Liveness, Readiness and Startup Probes documentation you can find the following information:

The kubelet uses liveness probes to know when to restart a container. For example, liveness probes could catch a deadlock, where an application is running but unable to make progress. Restarting a container in such a state can help to make the application more available despite bugs.

The kubelet uses readiness probes to know when a container is ready to start accepting traffic. A Pod is considered ready when all of its containers are ready. One use of this signal is to control which Pods are used as backends for Services. When a Pod is not ready, it is removed from Service load balancers.
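
In practice the two probes have different consequences, which you can check on the pod itself; for example, using standard field paths from the Pod status:

# Readiness result: the container's ready flag (false while the readiness probe fails)
$ kubectl get pod liveness-http -o jsonpath='{.status.containerStatuses[0].ready}'

# Liveness result: each round of liveness failures ends in a restart, so this counter grows
$ kubectl get pod liveness-http -o jsonpath='{.status.containerStatuses[0].restartCount}'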

Since the GKE master is managed by Google, you will not find the kubelet logs via the CLI (you can try using Stackdriver). I tested this on a kubeadm cluster with the kubelet verbosity level set to 8.
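
For reference, a sketch of how the verbosity can be raised on a kubeadm node and the probe entries followed; the file path is the kubeadm default and may differ on your distribution:

# Add -v=8 to the flags sourced by the kubelet's systemd unit (kubeadm default path)
sudo sed -i 's/KUBELET_KUBEADM_ARGS="/KUBELET_KUBEADM_ARGS="-v=8 /' /var/lib/kubelet/kubeadm-flags.env

# Restart the kubelet and follow the probe-related log lines
sudo systemctl daemon-reload
sudo systemctl restart kubelet
sudo journalctl -u kubelet -f | grep -E 'Liveness probe|Readiness probe'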

When you use $ kubectl get events you only get events from the last hour (this can be changed in the Kubernetes settings on kubeadm, but I don't think it can be changed on GKE, since the master is managed by Google).
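
As an aside, on a self-managed control plane that retention comes from the API server's --event-ttl flag (default 1h); a sketch of raising it on kubeadm, assuming the default static-pod manifest location:

# /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
  containers:
  - command:
    - kube-apiserver
    - --event-ttl=3h        # keep events for 3 hours instead of the default 1 hour
    # ...existing flags left unchanged...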

$ kubectl get events
LAST SEEN   TYPE      REASON      OBJECT              MESSAGE
37m         Normal    Starting    node/kubeadm        Starting kubelet.
...
33m         Normal    Scheduled   pod/liveness-http   Successfully assigned default/liveness-http to kubeadm
33m         Normal    Pulling     pod/liveness-http   Pulling image "k8s.gcr.io/liveness"
33m         Normal    Pulled      pod/liveness-http   Successfully pulled image "k8s.gcr.io/liveness" in 893.953679ms
33m         Normal    Created     pod/liveness-http   Created container liveness
33m         Normal    Started     pod/liveness-http   Started container liveness
3m12s       Warning   Unhealthy   pod/liveness-http   Readiness probe failed: HTTP probe failed with statuscode: 500
30m         Warning   Unhealthy   pod/liveness-http   Liveness probe failed: HTTP probe failed with statuscode: 500
8m17s       Warning   BackOff     pod/liveness-http   Back-off restarting failed container

The same command executed again after ~1 hour:

$ kubectl get events
LAST SEEN   TYPE      REASON      OBJECT              MESSAGE
33s         Normal    Pulling     pod/liveness-http   Pulling image "k8s.gcr.io/liveness"
5m40s       Warning   Unhealthy   pod/liveness-http   Readiness probe failed: HTTP probe failed with statuscode: 500
15m         Warning   BackOff     pod/liveness-http   Back-off restarting failed container

Tests

Readiness Probe checks were executed every 10 seconds for more than one hour:

Mar 09 14:48:34 kubeadm kubelet[3855]: I0309 14:48:34.222085    3855 prober.go:117] Readiness probe for "liveness-http_default(8c87a08e-34aa-4bb1-be9b-fdca39a4562a):liveness" failed (failure): HTTP probe failed with statuscode: 500
Mar 09 14:48:44 kubeadm kubelet[3855]: I0309 14:48:44.221782    3855 prober.go:117] Readiness probe for "liveness-http_default(8c87a08e-34aa-4bb1-be9b-fdca39a4562a):liveness" failed (failure): HTTP probe failed with statuscode: 500
Mar 09 14:48:54 kubeadm kubelet[3855]: I0309 14:48:54.221828    3855 prober.go:117] Readiness probe for "liveness-http_default(8c87a08e-34aa-4bb1-be9b-fdca39a4562a):liveness" failed (failure): HTTP probe failed with statuscode: 500
...
Mar 09 15:01:34 kubeadm kubelet[3855]: I0309 15:01:34.222491    3855 prober.go:117] Readiness probe for "liveness-http_default(8c87a08e-34aa-4bb1-be9b-fdca39a4562a):liveness" failed (failure): HTTP probe failed with statuscode: 500
Mar 09 15:01:44 kubeadm kubelet[3855]: I0309 15:01:44.221877    3855 prober.go:117] Readiness probe for "liveness-http_default(8c87a08e-34aa-4bb1-be9b-fdca39a4562a):liveness" failed (failure): HTTP probe failed with statuscode: 500
Mar 09 15:01:54 kubeadm kubelet[3855]: I0309 15:01:54.221976    3855 prober.go:117] Readiness probe for "liveness-http_default(8c87a08e-34aa-4bb1-be9b-fdca39a4562a):liveness" failed (failure): HTTP probe failed with statuscode: 500
...
Mar 09 15:10:14 kubeadm kubelet[3855]: I0309 15:10:14.222163    3855 prober.go:117] Readiness probe for "liveness-http_default(8c87a08e-34aa-4bb1-be9b-fdca39a4562a):liveness" failed (failure): HTTP probe failed with statuscode: 500
Mar 09 15:10:24 kubeadm kubelet[3855]: I0309 15:10:24.221744    3855 prober.go:117] Readiness probe for "liveness-http_default(8c87a08e-34aa-4bb1-be9b-fdca39a4562a):liveness" failed (failure): HTTP probe failed with statuscode: 500
Mar 09 15:10:34 kubeadm kubelet[3855]: I0309 15:10:34.223877    3855 prober.go:117] Readiness probe for "liveness-http_default(8c87a08e-34aa-4bb1-be9b-fdca39a4562a):liveness" failed (failure): HTTP probe failed with statuscode: 500
...
Mar 09 16:04:14 kubeadm kubelet[3855]: I0309 16:04:14.222853    3855 prober.go:117] Readiness probe for "liveness-http_default(8c87a08e-34aa-4bb1-be9b-fdca39a4562a):liveness" failed (failure): HTTP probe failed with statuscode: 500
Mar 09 16:04:24 kubeadm kubelet[3855]: I0309 16:04:24.222531    3855 prober.go:117] Readiness probe for "liveness-http_default(8c87a08e-34aa-4bb1-be9b-fdca39a4562a):liveness" failed (failure): HTTP probe failed with statuscode: 500

In addition, there are Liveness probe entries:

Mar 09 16:12:58 kubeadm kubelet[3855]: I0309 16:12:58.462878    3855 prober.go:117] Liveness probe for "liveness-http_default(8c87a08e-34aa-4bb1-be9b-fdca39a4562a):liveness" failed (failure): HTTP probe failed with statuscode: 500
Mar 09 16:13:58 kubeadm kubelet[3855]: I0309 16:13:58.462906    3855 prober.go:117] Liveness probe for "liveness-http_default(8c87a08e-34aa-4bb1-be9b-fdca39a4562a):liveness" failed (failure): HTTP probe failed with statuscode: 500
Mar 09 16:14:58 kubeadm kubelet[3855]: I0309 16:14:58.465470    3855 kuberuntime_manager.go:656] Container "liveness" ({"docker" "95567f85708ffac8b34b6c6f2bdb49d8eb57e7704b7b416083c7f296dd40cd0b"}) of pod liveness-http_default(8c87a08e-34aa-4bb1-be9b-fdca39a4562a): Container liveness failed liveness probe, will be restarted
Mar 09 16:14:58 kubeadm kubelet[3855]: I0309 16:14:58.465587    3855 kuberuntime_manager.go:712] Killing unwanted container "liveness"(id={"docker" "95567f85708ffac8b34b6c6f2bdb49d8eb57e7704b7b416083c7f296dd40cd0b"}) for pod "liveness-http_default(8c87a08e-34aa-4bb1-be9b-fdca39a4562a)"

Total time of the test:

$ kubectl get po -w
NAME            READY   STATUS             RESTARTS   AGE
liveness-http   0/1     Running            21         99m
liveness-http   0/1     CrashLoopBackOff   21         101m
liveness-http   0/1     Running            22         106m
liveness-http   1/1     Running            22         106m
liveness-http   0/1     Running            22         106m
liveness-http   0/1     Running            23         109m
liveness-http   1/1     Running            23         109m
liveness-http   0/1     Running            23         109m
liveness-http   0/1     CrashLoopBackOff   23         112m
liveness-http   0/1     Running            24         117m
liveness-http   1/1     Running            24         117m
liveness-http   0/1     Running            24         117m

Conclusion

The liveness probe check is no longer called

The Liveness check is created when Kubernetes creates the pod, and it is recreated every time the Pod is restarted. In your configuration you set initialDelaySeconds: 20, so after the pod is created Kubernetes waits 20 seconds and then calls the liveness probe 3 times (as you set failureThreshold: 3). After 3 failures, Kubernetes restarts the pod according to its RestartPolicy. With your settings that is roughly 20 seconds of initial delay plus three 60-second probe periods, so the container is killed a few minutes after each start, and the growing CrashLoopBackOff delay spreads later restarts further apart. You can also find this in the kubelet logs:

Mar 09 16:14:58 kubeadm kubelet[3855]: I0309 16:14:58.465470    3855 kuberuntime_manager.go:656] Container "liveness" ({"docker" "95567f85708ffac8b34b6c6f2bdb49d8eb57e7704b7b416083c7f296dd40cd0b"}) of pod liveness-http_default(8c87a08e-34aa-4bb1-be9b-fdca39a4562a): Container liveness failed liveness probe, will be restarted

Why will it be restarted? The answer can be found in Container probes:

livenessProbe: Indicates whether the container is running. If the liveness probe fails, the kubelet kills the container, and the container is subjected to its restart policy.

The default restart policy in GKE is Always, so your Pod will be restarted over and over again.
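
A quick way to confirm the effective policy on this pod (the manifest above does not set restartPolicy, so the default applies):

$ kubectl get pod liveness-http -o jsonpath='{.spec.restartPolicy}'
Always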

The readiness probe check was called, but the interval grew longer (the maximum interval looks to be about 10 minutes)

I think you came to that conclusion based on $ kubectl get events and $ kubectl describe po. In both cases the default events are removed after 1 hour. In my Tests part you can see that the Readiness probe entries run from 14:48:34 until 16:04:24, so Kubernetes called the Readiness probe every 10 seconds the whole time.

Why did the Liveness probe stop running, and why did the interval of the Readiness probe change?

As I showed you in the Tests part, the Readiness probe did not change; what was misleading in this case was relying on $ kubectl get events. As for the Liveness probe, it is still being called, but only 3 times after each time the pod is created/restarted. I have also included the output of $ kubectl get po -w. After the pod is recreated, you can find those liveness probes in the kubelet logs.
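
One way to make the event view less misleading is to look at the raw timestamps and counts instead of the aggregated LAST SEEN column, for example:

$ kubectl get events --field-selector involvedObject.name=liveness-http \
    -o custom-columns=LAST:.lastTimestamp,COUNT:.count,REASON:.reason,MESSAGE:.message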

In my plan, I will create alert policies with conditions like:

  • a liveness probe error occurs 3 times within 3 minutes

If the liveness probe fails 3 times, with your current setup the pod will be restarted, so you can use each restart to drive an alert:

Metric: kubernetes.io/container/restart_count
Resource type: k8s_container
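
A sketch of what such a policy could look like as a policy file for gcloud; the display names, the 3-minute alignment window, the file name restart-alert.json and the omitted notification channel are assumptions to adapt to your project:

{
  "displayName": "liveness-http restarts (sketch)",
  "combiner": "OR",
  "conditions": [
    {
      "displayName": "restart_count increased in the last 3 minutes",
      "conditionThreshold": {
        "filter": "resource.type = \"k8s_container\" AND metric.type = \"kubernetes.io/container/restart_count\" AND resource.label.pod_name = \"liveness-http\"",
        "aggregations": [
          { "alignmentPeriod": "180s", "perSeriesAligner": "ALIGN_DELTA" }
        ],
        "comparison": "COMPARISON_GT",
        "thresholdValue": 0,
        "duration": "0s"
      }
    }
  ]
}

$ gcloud alpha monitoring policies create --policy-from-file=restart-alert.json

Since every 3 consecutive liveness failures produce exactly one restart with your failureThreshold: 3, alerting on a restart delta greater than 0 over a 3-minute window corresponds closely to "3 liveness errors within 3 minutes".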

You can find some useful information about such Monitoring alerts in other Stack Overflow cases, such as: