Kubernetes HPA does not scale down after the load decreases

Problem

The Kubernetes HPA works correctly when the load on my Pods increases, but after the load decreases, the Deployment's scale does not change. This is my HPA file:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: baseinformationmanagement
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: baseinformationmanagement
  minReplicas: 1
  maxReplicas: 3
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80

My Kubernetes version:

> kubectl version
Client Version: version.Info{Major:"1",Minor:"16",GitVersion:"v1.16.1",GitCommit:"d647ddbd755faf07169599a625faf302ffc34458",GitTreeState:"clean",BuildDate:"2019-10-02T17:01:15Z",GoVersion:"go1.12.10",Compiler:"gc",Platform:"linux/amd64"}
Server Version: version.Info{Major:"1",Minor:"17",GitVersion:"v1.17.2",GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89",BuildDate:"2020-01-18T23:22:30Z",GoVersion:"go1.13.5",Platform:"linux/amd64"}

This is the output of `kubectl describe` for the HPA:

> kubectl describe hpa baseinformationmanagement
Name:                                                     baseinformationmanagement
Namespace:                                                default
Labels:                                                   <none>
Annotations:                                              kubectl.kubernetes.io/last-applied-configuration:
                                                            {"apiVersion":"autoscaling/v2beta2","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{},"name":"baseinformationmanagement","name...
CreationTimestamp:                                        Sun,27 Sep 2020 06:09:07 +0000
Reference:                                                Deployment/baseinformationmanagement
Metrics:                                                  ( current / target )
  resource memory on pods  (as a percentage of request):  49% (1337899008) / 70%
  resource cpu on pods  (as a percentage of request):     2% (13m) / 50%
Min replicas:                                             1
Max replicas:                                             3
Deployment pods:                                          2 current / 2 desired
Conditions:
  Type            Status  Reason              Message
  ----            ------  ------              -------
  AbleToScale     True    ReadyForNewScale    recommended size matches current size
  ScalingActive   True    ValidMetricFound    the HPA was able to successfully calculate a replica count from memory resource utilization (percentage of request)
  ScalingLimited  False   DesiredWithinRange  the desired count is within the acceptable range
Events:           <none>

Solution

Your HPA specifies both memory and CPU targets. The Horizontal Pod Autoscaler documentation states:

If multiple metrics are specified in a HorizontalPodAutoscaler, this calculation is done for each metric, and then the largest of the desired replica counts is chosen.

The actual replica target is a function of the current replica count and the current and target utilization (same link):

desiredReplicas = ceil[currentReplicas * ( currentMetricValue / desiredMetricValue )]

For the memory metric in particular: currentReplicas is 2, currentMetricValue is 49, and desiredMetricValue is 80. So the desired replica count is:

desiredReplicas = ceil[       2        * (         49        /         80         )]
desiredReplicas = ceil[       2        *                  0.6125                   ]
desiredReplicas = ceil[                          1.225                             ]
desiredReplicas = 2

That will leave you with (at least) 2 replicas even if your service is completely idle, unless the service releases memory back to the operating system; that generally depends on the language runtime and is somewhat outside your control.
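The per-metric formula and the "take the maximum" rule can be sketched together in a few lines of Python, using the numbers from the HPA status above (the function name is mine, not part of any Kubernetes API):

```python
import math

def desired_replicas(current_replicas, current_value, target_value):
    """HPA formula: ceil[currentReplicas * (currentMetricValue / desiredMetricValue)]."""
    return math.ceil(current_replicas * (current_value / target_value))

# Values from the status above: 2 current replicas,
# memory at 49% against an 80% target, CPU at 2% against an 80% target.
memory_replicas = desired_replicas(2, 49, 80)  # ceil(1.225) = 2
cpu_replicas = desired_replicas(2, 2, 80)      # ceil(0.05)  = 1

# With multiple metrics, the HPA chooses the largest recommendation,
# so the memory metric keeps the Deployment pinned at 2 replicas.
print(max(memory_replicas, cpu_replicas))  # prints 2
```

This is why the Deployment never scales down: CPU alone would recommend 1 replica, but memory always recommends 2, and the maximum wins.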

Removing the memory target and autoscaling on CPU alone will probably match your expectations better.
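A CPU-only version of the manifest might look like the sketch below; it is your original file with only the memory entry removed, all names unchanged:

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: baseinformationmanagement
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: baseinformationmanagement
  minReplicas: 1
  maxReplicas: 3
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
```

With only the CPU metric, an idle service (2% utilization) would recommend ceil[2 * (2 / 80)] = 1 replica, so the HPA can scale down to minReplicas.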
