Time-based scaling with Kubernetes CronJobs: how to stop a deployment from overwriting minReplicas

Problem description

I have a HorizontalPodAutoscaler that scales my pods based on CPU. Here minReplicas is set to 5:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp-web
  minReplicas: 5 
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50

I then added CronJobs to scale my HorizontalPodAutoscaler up and down based on the time of day:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: production
  name: cron-runner
rules:
- apiGroups: ["autoscaling"]
  resources: ["horizontalpodautoscalers"]
  verbs: ["patch","get"]

---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cron-runner
  namespace: production
subjects:
- kind: ServiceAccount
  name: sa-cron-runner
  namespace: production
roleRef:
  kind: Role
  name: cron-runner
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-cron-runner
  namespace: production
---

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: django-scale-up-job
  namespace: production
spec:
  schedule: "56 11 * * 1-6"
  successfulJobsHistoryLimit: 0 # Remove after successful completion
  failedJobsHistoryLimit: 1 # Retain failed jobs so that we see them
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: sa-cron-runner
          containers:
          - name: django-scale-up-job
            image: bitnami/kubectl:latest
            command:
            - /bin/sh
            - -c
            - kubectl patch hpa myapp-web --patch '{"spec":{"minReplicas":8}}'
          restartPolicy: OnFailure
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: django-scale-down-job
  namespace: production
spec:
  schedule: "30 20 * * 1-6"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 0 # Remove after successful completion
  failedJobsHistoryLimit: 1 # Retain failed jobs so that we see them
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: sa-cron-runner
          containers:
          - name: django-scale-down-job
            image: bitnami/kubectl:latest
            command:
            - /bin/sh
            - -c
            - kubectl patch hpa myapp-web --patch '{"spec":{"minReplicas":5}}'
          restartPolicy: OnFailure

This works really well, except that when I deploy, the deployment overwrites this minReplicas value with the minReplicas from the HorizontalPodAutoscaler spec (which in my case is set to 5).

I'm deploying my HPA using kubectl apply -f ~/autoscale.yaml

Is there a way to handle this? Do I need to create some kind of shared logic so that my deployment scripts can work out what the minReplicas value should be? Or is there a simpler way to handle this?

Solution

I think you could also consider the following two options:


Use helm to manage the life-cycle of your application with the lookup function:

The main idea behind this solution is to query the state of the specific cluster resource (here the HPA) before trying to create/recreate it with the helm install/upgrade commands.
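As a rough sketch of this idea (the chart value hpa.defaultMinReplicas is an assumption, not something from the question), a Helm template could use the lookup function to keep the live minReplicas whenever the HPA already exists in the cluster:

```yaml
# templates/hpa.yaml -- illustrative sketch only
{{- $existing := lookup "autoscaling/v2beta2" "HorizontalPodAutoscaler" "production" "myapp-web" }}
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp-web
  {{- if $existing }}
  # Keep whatever value the CronJobs last patched in
  minReplicas: {{ $existing.spec.minReplicas }}
  {{- else }}
  # First install: fall back to the chart default
  minReplicas: {{ .Values.hpa.defaultMinReplicas | default 5 }}
  {{- end }}
  maxReplicas: 10
```

Note that lookup only returns live data during an actual helm install/upgrade; during helm template or --dry-run it returns an empty result, so the template would fall back to the default branch there.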

What I mean is: check the current minReplicas value each time before upgrading the application stack.
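If you'd rather keep that check outside the chart, a minimal sketch (the chart path and value key hpa.minReplicas are assumptions) could read the live value with kubectl and feed it back into the upgrade:

```shell
# Read the live minReplicas; fall back to 5 if the HPA doesn't exist yet
current=$(kubectl get hpa myapp-web -n production \
  -o jsonpath='{.spec.minReplicas}' 2>/dev/null || echo 5)

# Pass it back in so the upgrade doesn't clobber the CronJob's patch
helm upgrade myapp ./chart --set hpa.minReplicas="${current:-5}"
```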


Manage the HPA resource separately from the application manifest files

Here you can hand this task over to a dedicated HPA operator, which can coexist with your CronJobs that adjust minReplicas according to specific schedules.
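To illustrate what such an operator-based setup can look like, here is a sketch loosely based on the open-source kubernetes-cronhpa-controller; the exact apiVersion and field names depend on the operator you pick, and note that this particular controller scales the target workload directly on a schedule rather than patching minReplicas:

```yaml
# Illustrative only -- check your chosen operator's CRD for exact fields
apiVersion: autoscaling.alibabacloud.com/v1beta1
kind: CronHorizontalPodAutoscaler
metadata:
  name: myapp-web-cron
  namespace: production
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp-web
  jobs:
  # This controller uses a 6-field cron expression (seconds first)
  - name: scale-up
    schedule: "0 56 11 * * 1-6"
    targetSize: 8
  - name: scale-down
    schedule: "0 30 20 * * 1-6"
    targetSize: 5
```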