Configuring pod resource requests for the k8s etcd pod

Problem description

When running k8s 1.18 with the default "on cluster" etcd pod deployment, what is the way to assign resource (CPU/memory) requests, or otherwise influence the pod spec of the etcd container?

The default configuration provides no resource requests or limits:

  Namespace                   Name                                               CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                               ------------  ----------  ---------------  -------------  ---
  kube-system                 etcd-172-25-87-82-hybrid.com                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         77m

I am aware that extra args can be passed to etcd via the kubeadm extraArgs config, but these do not cover the etcd pod's resources:

etcd:
  local:
    extraArgs:
      heartbeat-interval: "1000"
      election-timeout: "5000"

The question extends to other resources in the kube-system namespace as well, e.g. coredns.

Solution

After the cluster is initialized, you can find the generated /etc/kubernetes/manifests/etcd.yaml. Try editing it: the kubelet should pick up the changes and restart the etcd instance.

root@kube-1:~# cat /etc/kubernetes/manifests/etcd.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/etcd.advertise-client-urls: https://10.154.0.33:2379
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --advertise-client-urls=https://10.154.0.33:2379
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt
    - --client-cert-auth=true
    - --data-dir=/var/lib/etcd
    - --initial-advertise-peer-urls=https://10.154.0.33:2380
    - --initial-cluster=kube-1=https://10.154.0.33:2380
    - --key-file=/etc/kubernetes/pki/etcd/server.key
    - --listen-client-urls=https://127.0.0.1:2379,https://10.154.0.33:2379
    - --listen-metrics-urls=http://127.0.0.1:2381
    - --listen-peer-urls=https://10.154.0.33:2380
    - --name=kube-1
    - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
    - --peer-client-cert-auth=true
    - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    - --snapshot-count=10000
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    image: k8s.gcr.io/etcd:3.4.13-0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /health
        port: 2381
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: etcd
    resources: {}
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 127.0.0.1
        path: /health
        port: 2381
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /var/lib/etcd
      name: etcd-data
    - mountPath: /etc/kubernetes/pki/etcd
      name: etcd-certs
  hostNetwork: true
  priorityClassName: system-node-critical
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki/etcd
      type: DirectoryOrCreate
    name: etcd-certs
  - hostPath:
      path: /var/lib/etcd
      type: DirectoryOrCreate
    name: etcd-data
status: {}
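For example, to add resource requests you would replace the empty `resources: {}` in the container spec above with a populated block. The values below are illustrative, not tuned recommendations:

```yaml
    resources:
      requests:
        cpu: 100m
        memory: 512Mi
      limits:
        memory: 2Gi
```

Because etcd.yaml is a static pod manifest, saving the file is enough: the kubelet watches the manifests directory and recreates the pod; there is no Deployment to roll out.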
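coredns, by contrast, runs as a regular Deployment rather than a static pod, so the analogous change is made on the Deployment object (for example via `kubectl -n kube-system edit deployment coredns`). A sketch of the relevant part of the container spec, again with illustrative values:

```yaml
      containers:
      - name: coredns
        resources:
          requests:
            cpu: 100m
            memory: 70Mi
          limits:
            memory: 170Mi
```

Editing the Deployment triggers a normal rolling update of the coredns pods.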