Unable to view EKS cluster created from terraform resources

6/3/2020

We are creating an EKS cluster using the Terraform resources aws_eks_cluster and aws_eks_node_group. After applying these resources, when we query for the nodes using kubectl, we can see the nodes in the cluster with all the auto scaling settings. We can even fetch the kubeconfig file for this cluster on our bastion instance. However, we are not able to see this cluster in the AWS console under the EKS service. Is this expected behavior? Below is our code:

resource "aws_eks_cluster" "eks_cluster" {
  name            = "${var.eks_cluster_name}"
  role_arn        = "${var.iam_role_master}"
  vpc_config {
    security_group_ids = ["${var.sg-eks-master}"]
    subnet_ids = ["${var.subnet_private1}", "${var.subnet_private2}"]
    endpoint_private_access = true
    endpoint_public_access  = true
    public_access_cidrs     = ["<ip_range>"]
  }
}

resource "aws_eks_node_group" "example" {
  cluster_name    = "${var.eks_cluster_name}"
  node_group_name = "ng-${var.eks_cluster_name}"
  node_role_arn   = "${var.iam_role_node}"
  subnet_ids      = ["${var.subnet_private1}", "${var.subnet_private2}"]
  ami_type = "${var.image_id}"
  instance_types = "${var.instance_type}"
  
  scaling_config {
    desired_size = 1
    max_size     = 4
    min_size     = 2
  }
-- Meghana B Srinath
amazon-eks
kubernetes
networking
terraform
terraform-provider-aws

1 Answer

6/4/2020

Turns out, it was just a permissions issue. The user did not have view access to EKS clusters. The problem was that the console showed no warning or error message indicating that this was a role/permissions issue; we only figured it out when we checked the roles attached to the user.
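
For anyone hitting the same symptom, below is a minimal sketch of the kind of read-only IAM policy that makes clusters visible in the console. The resource and policy names are placeholders; eks:ListClusters and eks:DescribeCluster are the actions the console needs in order to list and open a cluster.

# Sketch only: grants the console identity permission to list and
# describe EKS clusters. Attach it to the user/role used in the console.
resource "aws_iam_policy" "eks_console_view" {
  name        = "eks-console-view"
  description = "Allow viewing EKS clusters in the AWS console"

  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "eks:ListClusters",
        "eks:DescribeCluster"
      ],
      "Resource": "*"
    }
  ]
}
POLICY
}

Attaching this with aws_iam_user_policy_attachment (or the role equivalent) should be enough for the clusters to appear in the EKS console.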

-- Meghana B Srinath
Source: StackOverflow