ALB Ingress Controller on AWS

1/10/2020

I'm trying to set up an ALB Ingress Controller on AWS EKS, exactly as the following tutorial describes: ingress_controller_alb, but I cannot get an ingress address.

Indeed, if I run the following command: kubectl get ingress/2048-ingress -n 2048-game, I still get no address after 10 minutes. Any ideas?

-- duns
amazon-eks
amazon-web-services
aws-eks
eks
kubernetes-ingress

3 Answers

1/16/2020

I was struggling with the same issue, but finally got it working after following @MaggieO's steps below. A couple of things to consider:

  1. Add public and private subnets to your EKS cluster. Make sure your public subnets are tagged with "kubernetes.io/role/elb":"1" so the controller can discover them (see the CLI sketch after this list). If creating a managed node group, select only private subnets for placement of your worker nodes.
  2. Make sure the IAM role for your worker nodes has the policies AmazonEKSWorkerNodePolicy, AmazonEC2ContainerRegistryReadOnly, AmazonEKS_CNI_Policy, and the custom policy defined here (also sketched below): https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.2/docs/examples/iam-policy.json
  3. Examine your ingress controller logs; they are helpful:

    kubectl logs -n kube-system [name of your ingress controller]
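
A minimal sketch of points 1 and 2 with the AWS CLI (subnet IDs, role name, and account ID are placeholders):

# Tag the public subnets so the controller can discover them for internet-facing ALBs
aws ec2 create-tags \
  --resources subnet-aaaaaaaa subnet-bbbbbbbb \
  --tags Key=kubernetes.io/role/elb,Value=1

# Create the custom policy from the JSON linked above and attach it to the worker node role
curl -o iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.2/docs/examples/iam-policy.json
aws iam create-policy \
  --policy-name ALBIngressControllerIAMPolicy \
  --policy-document file://iam-policy.json
aws iam attach-role-policy \
  --role-name <your-worker-node-role> \
  --policy-arn arn:aws:iam::<account-id>:policy/ALBIngressControllerIAMPolicy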

-- programmerj
Source: StackOverflow

1/13/2020

The problem may be the version of the ingress controller you are using: you are running the old 1.0.0 release, while the current one is 1.1.3.

I advise you to take a look at this documentation: ingress-controller-alb.

1. Download sample ALB ingress controller manifest

wget https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.3/docs/examples/alb-ingress-controller.yaml

2. Configure the ALB ingress controller manifest

At minimum, edit the following variables:

--cluster-name=devCluster: name of the cluster. AWS resources will be tagged with kubernetes.io/cluster/devCluster:owned

If ec2metadata is unavailable from the controller pod, edit the following variables:

--aws-vpc-id=vpc-xxxxxx: vpc ID of the cluster.
--aws-region=us-west-1: AWS region of the cluster.
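
These flags are set in the args of the controller container; a sketch of the relevant excerpt of alb-ingress-controller.yaml (values are examples, layout assumed from the upstream manifest):

spec:
  containers:
    - name: alb-ingress-controller
      args:
        - --ingress-class=alb
        - --cluster-name=devCluster
        # Only needed if ec2metadata is unavailable from the pod:
        # - --aws-vpc-id=vpc-xxxxxx
        # - --aws-region=us-west-1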

3. Deploy the RBAC roles manifest

kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.3/docs/examples/rbac-role.yaml

4. Deploy the ALB ingress controller manifest

kubectl apply -f alb-ingress-controller.yaml
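
Before checking the logs in step 5, you can quickly confirm the controller pod came up (deployment name assumed from the manifest):

kubectl get deployment -n kube-system alb-ingress-controller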

5. Verify the deployment was successful and the controller started

kubectl logs -n kube-system $(kubectl get po -n kube-system | egrep -o "alb-ingress[a-zA-Z0-9-]+")

You should see output similar to the following (the release line should match the version you deployed):

-------------------------------------------------------------------------------
AWS ALB Ingress controller
Release:    v1.1.3
Build:      git-7bc1850b
Repository: https://github.com/kubernetes-sigs/aws-alb-ingress-controller.git
-------------------------------------------------------------------------------

Then you can deploy the sample application.

Execute the following commands:

kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.3/docs/examples/2048/2048-namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.3/docs/examples/2048/2048-deployment.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.3/docs/examples/2048/2048-service.yaml
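
You can confirm the sample app is running before creating the Ingress (the namespace comes from the manifests above):

kubectl get pods,svc -n 2048-game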

Deploy an Ingress resource for the 2048 game:

kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.3/docs/examples/2048/2048-ingress.yaml

After a few seconds, verify that the Ingress resource is enabled:

kubectl get ingress/2048-ingress -n 2048-game
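
Once the controller has reconciled the Ingress, the ADDRESS column should contain the ALB's DNS name; expect output roughly like this (hostname hypothetical):

NAME           HOSTS   ADDRESS                                                                 PORTS   AGE
2048-ingress   *       example-2048game-2048ingr-6fa0-123456789.us-west-1.elb.amazonaws.com   80      1m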
-- MaggieO
Source: StackOverflow

1/27/2020

Thank you for your replies!

I think the problem is the cluster creation, which results in a cluster without any EC2 instances. I create it with eksctl create cluster -f cluster.yaml and the following config:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: test
  region: eu-central-1
  version: "1.14"
vpc:
  id: vpc-50b17738
  subnets:
    private:
      eu-central-1a: { id: subnet-aee763c6 }
      eu-central-1b: { id: subnet-bc2ee6c6 }
      eu-central-1c: { id: subnet-24734d6e }
nodeGroups:
  - name: ng-1-workers
    labels: { role: workers }
    instanceType: t3.medium
    desiredCapacity: 2
    volumeSize: 5
    privateNetworking: true

I have tried both self-managed and managed node groups, but I get the following timeout error:

...
nodegroup "ng-1-workers" has 0 node(s)
waiting for at least 2 node(s) to become ready in "ng-1-workers"
Error: timed out (after 25m0s) waiting for at least 2 nodes to join the cluster and become ready in "ng-1-workers"
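
If I understand @programmerj's first point correctly, the vpc section should also declare public subnets (placeholder IDs below), and the private subnets need a NAT gateway route so the worker nodes can reach the cluster endpoint and join:

vpc:
  id: vpc-50b17738
  subnets:
    public:
      # placeholder IDs; tag these subnets with kubernetes.io/role/elb = 1
      eu-central-1a: { id: subnet-aaaaaaaa }
      eu-central-1b: { id: subnet-bbbbbbbb }
      eu-central-1c: { id: subnet-cccccccc }
    private:
      eu-central-1a: { id: subnet-aee763c6 }
      eu-central-1b: { id: subnet-bc2ee6c6 }
      eu-central-1c: { id: subnet-24734d6e }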
-- duns
Source: StackOverflow