Assign memory resources to Pods from Terraform

5/7/2020

I have a K8S cluster on GCP where I run Data Science workloads. Some of them are in status "Evicted" because:

The node was low on resource: memory. Container base was using 5417924Ki, which exceeds its request of 0.

As I understand it, the container never declared a memory request, so it was among the first Pods to be evicted when the node came under memory pressure.

I manage my infrastructure with Terraform and know how to manage cluster autoscaling, but even after reading the docs I have no idea how to manage this at the Pod level (my best guess is sketched after the cluster definition below). Here is my current cluster configuration:

resource "google_container_cluster" "k8s_cluster" {
  name        = "my-cluster-name"
  description = ""
  location    = var.default_region
  network     = var.network
  subnetwork  = var.subnetwork

  initial_node_count = 1
  remove_default_node_pool = true

  ip_allocation_policy {
    # VPC-native cluster using alias IP addresses
    cluster_secondary_range_name  = "gke-pods"
    services_secondary_range_name = "gke-services"
  }

  maintenance_policy {
    daily_maintenance_window {
      start_time = "03:00"
    }
  }

  master_authorized_networks_config {
    cidr_blocks {
      display_name = var.airflow.display_name
      cidr_block   = var.airflow.cidr_block
    }

    cidr_blocks {
      display_name = var.gitlab.display_name
      cidr_block   = var.gitlab.cidr_block
    }
  }

  network_policy {
    enabled = false
  }

  private_cluster_config {
    enable_private_endpoint = true
    enable_private_nodes    = true
    master_ipv4_cidr_block  = var.vpc_range_k8s_master
  }

  resource_labels = {
    zone = var.zone
    role = var.role
    env  = var.environment
  }

  # Disable basic auth and client certificate
  master_auth {
    username = ""
    password = ""

    client_certificate_config {
      issue_client_certificate = false
    }
  }

  cluster_autoscaling {
    enabled = true
    resource_limits {
      resource_type = "cpu"
      minimum       = 1
      maximum       = 4
    }
    resource_limits {
      resource_type = "memory"
      minimum       = 1
      maximum       = 2
    }
  }
}
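
From what I can tell, google_container_cluster itself has no Pod-level settings, so I suspect the requests and limits have to be declared on the workload instead, e.g. via the hashicorp/kubernetes provider. Below is a minimal sketch of what I imagine (v2 map syntax for requests/limits; the deployment name, image, and memory sizes are placeholders, not my real config). Is this the right direction?

resource "kubernetes_deployment" "base" {
  metadata {
    name = "base"
  }

  spec {
    replicas = 1

    selector {
      match_labels = {
        app = "base"
      }
    }

    template {
      metadata {
        labels = {
          app = "base"
        }
      }

      spec {
        container {
          name  = "base"
          image = "my-registry/base:latest" # placeholder image

          # The eviction message says the request is 0, so I assume an
          # explicit memory request (and limit) here is what is missing.
          resources {
            requests = {
              memory = "6Gi"
            }
            limits = {
              memory = "8Gi"
            }
          }
        }
      }
    }
  }
}

Alternatively, if namespace-wide defaults were enough, maybe a kubernetes_limit_range would do it (again a sketch; the sizes are guesses):

resource "kubernetes_limit_range" "mem_defaults" {
  metadata {
    name = "mem-defaults"
  }

  spec {
    limit {
      type = "Container"

      # Applied to containers that do not declare their own values.
      default_request = {
        memory = "6Gi"
      }
      default = {
        memory = "8Gi"
      }
    }
  }
}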
-- Ragnar
google-kubernetes-engine
kubernetes
terraform
terraform-provider-gcp

0 Answers