How can I put different AKS deployments in the same resource group/cluster?

Problem description

Current state: I have all of my services in a single cluster, under a single resource_group. My problem is that I have to push every service on each deployment, and deployments keep getting slower.

What I want to do: I want to split each service into its own directory so that it can be deployed separately. I already have a backend per service, so each one can keep its own remote state and nothing else gets touched on deploy. But can I keep all of the services in the same resource_group? If so, how do I achieve that? And if I have to create a resource group for each independently deployed service, can they at least still share the same cluster?
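To make the split concrete, the layout I have in mind looks roughly like this (directory names are only illustrative):

infra/                # main.tf below: resource group + AKS cluster, own remote state
services/
  client/             # client.tf below: one service, own remote state
  api/                # every further service follows the same pattern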

main.tf

provider "azurerm" {
  version = "2.23.0"
  features {}
}

resource "azurerm_resource_group" "main" {
  name     = "${var.resource_group_name}-${var.environment}"
  location = var.location

  timeouts {
    create = "20m"
    delete = "20m"
  }
}
resource "tls_private_key" "key" {
  algorithm = "RSA"
}

resource "azurerm_kubernetes_cluster" "main" {

  name                            = "${var.cluster_name}-${var.environment}"
  location                        = azurerm_resource_group.main.location
  resource_group_name             = azurerm_resource_group.main.name
  dns_prefix                      = "${var.dns_prefix}-${var.environment}"
  node_resource_group             = "${var.resource_group_name}-${var.environment}-worker"
  kubernetes_version = "1.18.6"

  linux_profile {
    admin_username = var.admin_username

    ssh_key {
      key_data = "${trimspace(tls_private_key.key.public_key_openssh)} ${var.admin_username}@azure.com"
    }
  }

  default_node_pool {
    name            = "default"
    node_count      = var.agent_count
    vm_size         = "Standard_B2s"
    os_disk_size_gb = 30
  }

  role_based_access_control {
    enabled = false
  }

  addon_profile {
    kube_dashboard {
      enabled = true
    }
  }

  network_profile {
    network_plugin    = "kubenet"
    load_balancer_sku = "Standard"
  }

  timeouts {
    create = "40m"
    delete = "40m"
  }

  service_principal {
    client_id     = var.client_id
    client_secret = var.client_secret
  }

  tags = {
    Environment = "Production"
  }
}

provider "kubernetes" {
  version          = "1.12.0"
  load_config_file = "false"

  host = azurerm_kubernetes_cluster.main.kube_config[0].host

  client_certificate = base64decode(
    azurerm_kubernetes_cluster.main.kube_config[0].client_certificate,)

  client_key = base64decode(azurerm_kubernetes_cluster.main.kube_config[0].client_key)
  cluster_ca_certificate = base64decode(
    azurerm_kubernetes_cluster.main.kube_config[0].cluster_ca_certificate,)
}

backend.tf (for main)

terraform {
  backend "azurerm" {}
}
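Since this backend block is empty, Terraform falls back to partial configuration: every setting, including the state key, has to be supplied at terraform init time. A minimal sketch of such an init (the key name main.tfstate is a placeholder):

terraform init \
    -backend-config="resource_group_name=$TF_BACKEND_RES_GROUP" \
    -backend-config="storage_account_name=$TF_BACKEND_STORAGE_ACC" \
    -backend-config="container_name=$TF_BACKEND_CONTAINER" \
    -backend-config="key=main.tfstate"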

client.tf (the service I want to deploy separately)

resource "kubernetes_deployment" "client" {
  Metadata {
    name = "client"

    labels = {
      serviceName = "client"
    }
  }

  timeouts {
    create = "20m"
    delete = "20m"
  }

  spec {

    progress_deadline_seconds = 600

    replicas = 1

    selector {
      match_labels = {
        serviceName = "client"
      }
    }

    template {
      Metadata {
        labels = {
          serviceName = "client"
        }
      }
      }
    }
  }
}

resource "kubernetes_service" "client" {
  Metadata {
    name = "client"
  }

  spec {
    selector = {
      serviceName = kubernetes_deployment.client.metadata[0].labels.serviceName
    }

    port {
      port        = 80
      target_port = 80
    }
  }
}
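Once split out, this configuration would also need its own kubernetes provider. A minimal sketch, assuming it reads the existing cluster through a data source instead of creating it (variable names reused from main.tf):

data "azurerm_kubernetes_cluster" "main" {
  name                = "${var.cluster_name}-${var.environment}"
  resource_group_name = "${var.resource_group_name}-${var.environment}"
}

provider "kubernetes" {
  version          = "1.12.0"
  load_config_file = false

  host                   = data.azurerm_kubernetes_cluster.main.kube_config[0].host
  client_certificate     = base64decode(data.azurerm_kubernetes_cluster.main.kube_config[0].client_certificate)
  client_key             = base64decode(data.azurerm_kubernetes_cluster.main.kube_config[0].client_key)
  cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.main.kube_config[0].cluster_ca_certificate)
}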

backend.tf (for the client)

terraform {
  backend "azurerm" {
    resource_group_name  = "test-storage"
    storage_account_name = "test"
    container_name       = "terraform"
    key                  = "test"
  }
}

deployment.sh

terraform -v
terraform init \
    -backend-config="resource_group_name=$TF_BACKEND_RES_GROUP" \
    -backend-config="storage_account_name=$TF_BACKEND_STORAGE_ACC" \
    -backend-config="container_name=$TF_BACKEND_CONTAINER"

terraform plan

terraform apply -target="azurerm_resource_group.main" -auto-approve \
    -var "environment=$ENVIRONMENT" \
    -var "tag_version=$TAG_VERSION"

PS: If needed, I can rebuild the test resource group from scratch. Don't worry about its current state.

PS2: The state files are saved in the right place; that part is fine.

Solution

If you want to deploy resources separately, take a look at this option of terraform apply:

  -target=resource       Resource to target. Operation will be limited to this
                         resource and its dependencies. This flag can be used
                         multiple times.

For example, to deploy only the resource group and its dependencies:

terraform apply -target="azurerm_resource_group.main"
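Since the flag can be used multiple times, you can also limit an apply to a single service's resources, e.g. only the client, using the resource addresses from client.tf above:

terraform apply \
    -target="kubernetes_deployment.client" \
    -target="kubernetes_service.client"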