Kubernetes Nginx ingress - failed to ensure load balancer: could not find any suitable subnets for creating the ELB

6/26/2021

I would like to deploy a minimal k8s cluster on AWS with Terraform and install an NGINX Ingress Controller with Helm.

The Terraform code:

provider "aws" {
  region = "us-east-1"
}

data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

variable "cluster_name" {
  default = "my-cluster"
}

variable "instance_type" {
  default = "t2.large"
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
  version                = "~> 1.11"
}

data "aws_availability_zones" "available" {
}


module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "3.0.0"

  name                 = "k8s-${var.cluster_name}-vpc"
  cidr                 = "172.16.0.0/16"
  azs                  = data.aws_availability_zones.available.names
  private_subnets      = ["172.16.1.0/24", "172.16.2.0/24", "172.16.3.0/24"]
  public_subnets       = ["172.16.4.0/24", "172.16.5.0/24", "172.16.6.0/24"]
  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true

  public_subnet_tags = {
    "kubernetes.io/cluster/${var.cluster_name}" = "shared"
    "kubernetes.io/role/elb"                    = "1"
  }

  private_subnet_tags = {
    "kubernetes.io/cluster/${var.cluster_name}" = "shared"
    "kubernetes.io/role/internal-elb"           = "1"
  }
}

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "12.2.0"

  cluster_name    = "eks-${var.cluster_name}"
  cluster_version = "1.18"
  subnets         = module.vpc.private_subnets

  vpc_id = module.vpc.vpc_id

  worker_groups = [
    {
      name                 = "worker-group-1"
      instance_type        = "t3.small"
      additional_userdata  = "echo foo bar"
      asg_desired_capacity = 2
    },
    {
      name                 = "worker-group-2"
      instance_type        = "t3.small"
      additional_userdata  = "echo foo bar"
      asg_desired_capacity = 1
    },
  ]

  write_kubeconfig   = true
  config_output_path = "./"

  workers_additional_policies = [aws_iam_policy.worker_policy.arn]
}

resource "aws_iam_policy" "worker_policy" {
  name        = "worker-policy-${var.cluster_name}"
  description = "Worker policy for the ALB Ingress"

  policy = file("iam-policy.json")
}

The installation completes successfully: helm install my-release nginx-stable/nginx-ingress

NAME: my-release
LAST DEPLOYED: Sat Jun 26 22:17:28 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The NGINX Ingress Controller has been installed.

The kubectl describe service my-release-nginx-ingress returns:

Error syncing load balancer: failed to ensure load balancer: could not find any suitable subnets for creating the ELB

The VPC is created and the public subnets seem to be correctly tagged. What is missing to make the Ingress aware of the public subnets?
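
For reference, the tags actually applied to the subnets can be checked with the AWS CLI by filtering on the tag key set in the VPC module above (with the default cluster_name of my-cluster):

aws ec2 describe-subnets \
  --filters "Name=tag-key,Values=kubernetes.io/cluster/my-cluster" \
  --query "Subnets[].SubnetId"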

-- val
kubernetes
nginx
nginx-ingress
subnet
vpc

1 Answer

6/27/2021

In the eks module you are prefixing the cluster name with eks-:

cluster_name    = "eks-${var.cluster_name}"

However, you do not use the prefix in your subnet tags:

"kubernetes.io/cluster/${var.cluster_name}" = "shared"

The AWS cloud provider discovers subnets for the ELB by the kubernetes.io/cluster/<cluster-name> tag, so the tag key must match the cluster's actual name (eks-my-cluster here). Drop the prefix from cluster_name and add it to the cluster name variable instead (assuming you want the prefix at all). Alternatively, you could add the prefix to your tags to fix the issue, but that approach makes it easier to introduce inconsistencies.
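
A minimal sketch of the first option, based on the configuration in the question (only the affected arguments are shown; everything else stays as posted):

variable "cluster_name" {
  # Carry the prefix in the variable itself so every reference uses the same name.
  default = "eks-my-cluster"
}

module "vpc" {
  # ... unchanged arguments ...

  # The tag keys now match the real cluster name, eks-my-cluster.
  public_subnet_tags = {
    "kubernetes.io/cluster/${var.cluster_name}" = "shared"
    "kubernetes.io/role/elb"                    = "1"
  }

  private_subnet_tags = {
    "kubernetes.io/cluster/${var.cluster_name}" = "shared"
    "kubernetes.io/role/internal-elb"           = "1"
  }
}

module "eks" {
  # ... unchanged arguments ...

  # No extra prefix here; the variable already contains it.
  cluster_name = var.cluster_name
}

After applying this, the cloud provider should be able to find the public subnets via the kubernetes.io/cluster/eks-my-cluster and kubernetes.io/role/elb tags and provision the ELB for the my-release-nginx-ingress service.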

-- Mathew Tinsley
Source: StackOverflow