How can I configure an AWS EKS autoscaler with Terraform?

9/13/2019

I'm using the AWS EKS module (github.com/terraform-aws-modules/terraform-aws-eks). I'm following along with the tutorial at https://learn.hashicorp.com/terraform/aws/eks-intro

However, this does not seem to have autoscaling enabled... It seems to be missing the cluster-autoscaler deployment?

Is Terraform able to provision this functionality, or do I need to set it up by following a guide like https://eksworkshop.com/scaling/deploy_ca/ ?

-- Chris Stryczynski
amazon-eks
kubernetes
terraform

2 Answers

9/13/2019

You can deploy Kubernetes resources using Terraform. There is both a Kubernetes provider and a Helm provider.

# Used by the awsRegion value in the helm_release below
data "aws_region" "current_region" {}

data "aws_eks_cluster_auth" "authentication" {
  name = "${var.cluster_id}"
}

provider "kubernetes" {
  # Use the token generated by AWS iam authenticator to connect as the provider does not support exec auth
  # see: https://github.com/terraform-providers/terraform-provider-kubernetes/issues/161
  host = "${var.cluster_endpoint}"

  cluster_ca_certificate = "${base64decode(var.cluster_certificate_authority_data)}"
  token                  = "${data.aws_eks_cluster_auth.authentication.token}"
  load_config_file       = false
}

provider "helm" {
  install_tiller = true
  tiller_image   = "gcr.io/kubernetes-helm/tiller:v2.12.3"

  # The Helm provider needs its own connection details; it does not
  # reuse the kubernetes provider configuration above.
  kubernetes {
    host                   = "${var.cluster_endpoint}"
    cluster_ca_certificate = "${base64decode(var.cluster_certificate_authority_data)}"
    token                  = "${data.aws_eks_cluster_auth.authentication.token}"
    load_config_file       = false
  }
}

resource "helm_release" "cluster_autoscaler" {
  name       = "cluster-autoscaler"
  repository = "stable"
  chart      = "cluster-autoscaler"
  namespace  = "kube-system"
  version    = "0.12.2"

  set {
    name  = "autoDiscovery.enabled"
    value = "true"
  }

  set {
    name  = "autoDiscovery.clusterName"
    value = "${var.cluster_name}"
  }

  set {
    name  = "cloudProvider"
    value = "aws"
  }

  set {
    name  = "awsRegion"
    value = "${data.aws_region.current_region.name}"
  }

  set {
    name  = "rbac.create"
    value = "true"
  }

  set {
    name  = "sslCertPath"
    value = "/etc/ssl/certs/ca-bundle.crt"
  }
}
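
One thing the chart values above rely on but the snippet does not show: with autoDiscovery enabled, the cluster-autoscaler only finds worker groups whose Auto Scaling group carries the expected discovery tags. A sketch of what that looks like in Terraform (the resource name and the rest of the worker group configuration are assumptions; only the tag keys matter to the autoscaler):

```hcl
resource "aws_autoscaling_group" "workers" {
  # ... existing worker group settings (launch configuration, subnets, sizes) ...

  # Tag keys the autoscaler's auto-discovery searches for
  tag {
    key                 = "k8s.io/cluster-autoscaler/enabled"
    value               = "true"
    propagate_at_launch = true
  }

  tag {
    key                 = "k8s.io/cluster-autoscaler/${var.cluster_name}"
    value               = "owned"
    propagate_at_launch = true
  }
}
```

If you use the terraform-aws-modules/terraform-aws-eks module, its worker tag inputs can carry the same keys instead of a hand-written aws_autoscaling_group resource.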
-- Blokje5
Source: StackOverflow

9/22/2019

This answer is still not complete... But at least it gets me partially further...

1.

kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
helm install stable/cluster-autoscaler --name my-release \
  --set "autoscalingGroups[0].name=demo,autoscalingGroups[0].maxSize=10,autoscalingGroups[0].minSize=1" \
  --set rbac.create=true

And then manually fix the certificate path:

kubectl edit deployments my-release-aws-cluster-autoscaler 

replace the following:

path: /etc/ssl/certs/ca-bundle.crt

with

path: /etc/ssl/certs/ca-certificates.crt
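
If the chart version you install exposes the sslCertPath value (the Terraform answer above sets it the same way), the manual edit can be avoided by setting the path at upgrade time instead:

```shell
# Upgrade in place, keeping existing values; release name as used above.
helm upgrade my-release stable/cluster-autoscaler \
  --reuse-values \
  --set sslCertPath=/etc/ssl/certs/ca-certificates.crt
```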

2.

In the AWS console, give AdministratorAccess policy to the terraform-eks-demo-node role.
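
AdministratorAccess gets it working, but the autoscaler only needs a handful of Auto Scaling API calls. A narrower sketch in Terraform (role name taken from the step above; the policy name is an assumption):

```hcl
# Scoped alternative to AdministratorAccess for the cluster-autoscaler
resource "aws_iam_role_policy" "cluster_autoscaler" {
  name = "cluster-autoscaler"
  role = "terraform-eks-demo-node"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:DescribeAutoScalingGroups",
        "autoscaling:DescribeAutoScalingInstances",
        "autoscaling:DescribeLaunchConfigurations",
        "autoscaling:DescribeTags",
        "autoscaling:SetDesiredCapacity",
        "autoscaling:TerminateInstanceInAutoScalingGroup"
      ],
      "Resource": "*"
    }
  ]
}
EOF
}
```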

3.

Update the --nodes parameter (again via kubectl edit deployments my-release-aws-cluster-autoscaler):

        - --nodes=1:10:terraform-eks-demo20190922124246790200000007
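
The --nodes flag format is min:max:asg-name. Since the install command above already sets autoscalingGroups[0], the same fix can be applied without editing the deployment, by upgrading the release with the real ASG name:

```shell
# Substitutes the real ASG name for the "demo" placeholder used at install time.
helm upgrade my-release stable/cluster-autoscaler --reuse-values \
  --set "autoscalingGroups[0].name=terraform-eks-demo20190922124246790200000007" \
  --set "autoscalingGroups[0].minSize=1,autoscalingGroups[0].maxSize=10"
```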
-- Chris Stryczynski
Source: StackOverflow