Trying to convert from AWS classic load balancer to application load balancer in Amazon EKS

8/18/2020

I have everything working using a classic load balancer. I would now like to update my Kubernetes environment to use an application load balancer instead. I have tried a few tutorials, but no luck so far: I keep getting 503 errors after I deploy.

I brought my cluster up with eksctl, then installed and ran the sample application from this tutorial: https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html. An ALB did come up and everything worked as it should with the sample application outlined in the tutorial. I then modified the YAML for my own environment to use an ALB and keep getting 503 errors. I am not sure what to try next. I suspect my issue might be that I run Nginx and my application in the same pod (which I would like to keep if possible).

Here is the YAML for my application that I updated to try to get the ALB working:

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
data:
  nginx.conf: |
    events {
    }
    http {
      include /etc/nginx/mime.types;
      server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name test-ggg.com www.test-ggg.com;
        if ($http_x_forwarded_proto = "http") {
          return 301 https://$server_name$request_uri;
        }
        root /var/www/html;
        index index.php index.html;
        location static {
          alias /var/www/html;
        }
        error_log  /var/log/nginx/error.log;
        access_log /var/log/nginx/access.log;
        location ~ \.php$ {
          fastcgi_split_path_info ^(.+\.php)(/.+)$;
          fastcgi_pass 127.0.0.1:9000;
          fastcgi_index index.php;
          include fastcgi_params;
          fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          fastcgi_param PATH_INFO $fastcgi_path_info;
        }
        location / {
          try_files $uri $uri/ /index.php?$query_string;
          gzip_static on;
        }
      }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment
  labels:
    name: deployment
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 2
  selector:
    matchLabels:
      name: templated-pod
  template:
    metadata:
      name: deployment-template
      labels:
        name: templated-pod
    spec:
      volumes:
        - name: app-files
          emptyDir: {}
        - name: nginx-config-volume
          configMap:
            name: nginx-config
      containers:
        - image:  xxxxxxx.dkr.ecr.us-east-2.amazonaws.com/test:4713
          name: app
          volumeMounts:
            - name: app-files
              mountPath: /var/www/html
          lifecycle:
            postStart:
              exec:
                command: ["/bin/sh", "-c", "cp -r /var/www/public/. /var/www/html"]
          resources:
            limits:
              cpu: 100m
            requests:
              cpu: 50m
        - image: nginx:alpine
          name: nginx
          volumeMounts:
            - name: app-files
              mountPath: /var/www/html
            - name: nginx-config-volume
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
          resources:
            limits:
              cpu: 100m
            requests:
              cpu: 50m
          ports:
          - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: "service-alb"
  namespace: default
  annotations:
    alb.ingress.kubernetes.io/target-group-attributes: slow_start.duration_seconds=45
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: '5'
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '2'
    alb.ingress.kubernetes.io/healthy-threshold-count: '2'
    alb.ingress.kubernetes.io/unhealthy-threshold-count: '3'
spec:
  ports:
    - port: 80
      targetPort: 80
      name: http
  type: NodePort
  selector:
    app: templated-pod
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-2:dddddddd:certificate/f61c2837-484c-ddddddddd-bab7c4d4452c
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
  labels:
    app: app-ingress
spec:
  rules:
  - host: test-ggg.com
    http:
      paths:
      - backend:
          serviceName: "service-alb"
          servicePort: 80
        path: /*
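
(Editor's note: one detail worth checking in this Ingress. With the AWS ALB ingress controller v1.x, an alb.ingress.kubernetes.io/actions.* annotation such as the ssl-redirect above only takes effect when a rule path references it by name, using the special servicePort value use-annotation. A sketch of what the paths section would look like with the redirect wired in, assuming controller v1.x; the service name service-alb is from the question:)

      paths:
      - backend:
          serviceName: ssl-redirect      # must match the actions.ssl-redirect annotation name
          servicePort: use-annotation    # special value: resolve the backend via the annotation
        path: /*
      - backend:
          serviceName: "service-alb"
          servicePort: 80
        path: /*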

Here is the YAML with the classic load balancer. Everything works when I use this:

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
data:
  nginx.conf: |
    events {
    }
    http {
      include /etc/nginx/mime.types;
      server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name test-ggg.com www.test-ggg.com;
        if ($http_x_forwarded_proto = "http") {
          return 301 https://$server_name$request_uri;
        }
        root /var/www/html;
        index index.php index.html;
        location static {
          alias /var/www/html;
        }
        error_log  /var/log/nginx/error.log;
        access_log /var/log/nginx/access.log;
        location ~ \.php$ {
          fastcgi_split_path_info ^(.+\.php)(/.+)$;
          fastcgi_pass 127.0.0.1:9000;
          fastcgi_index index.php;
          include fastcgi_params;
          fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          fastcgi_param PATH_INFO $fastcgi_path_info;
        }
        location / {
          try_files $uri $uri/ /index.php?$query_string;
          gzip_static on;
        }
      }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment
  labels:
    name: deployment
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 2
  selector:
    matchLabels:
      name: templated-pod
  template:
    metadata:
      name: deployment-template
      labels:
        name: templated-pod
    spec:
      volumes:
        - name: app-files
          emptyDir: {}
        - name: nginx-config-volume
          configMap:
            name: nginx-config
      containers:
        - image:  99ddddddddd.dkr.ecr.us-east-2.amazonaws.com/test:4713
          name: app
          volumeMounts:
            - name: app-files
              mountPath: /var/www/html
          lifecycle:
            postStart:
              exec:
                command: ["/bin/sh", "-c", "cp -r /var/www/public/. /var/www/html"]
          resources:
            limits:
              cpu: 100m
            requests:
              cpu: 50m
        - image: nginx:alpine
          name: nginx
          volumeMounts:
            - name: app-files
              mountPath: /var/www/html
            - name: nginx-config-volume
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
          resources:
            limits:
              cpu: 100m
            requests:
              cpu: 50m
          ports:
          - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: service-loadbalancer
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-2:dddddddd:certificate/f61c2837-484c-4fac-a26c-dddddddd4452c
spec:
  selector:
    name: templated-pod
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 80
  type: LoadBalancer
-- ErnieAndBert
amazon-eks
amazon-elb
aws-application-load-balancer
kubernetes

1 Answer

8/19/2020

After some tutorial help I learned more about services, selectors, and pod labels! (Great tutorial: https://www.youtube.com/watch?v=sGZx3OjMPQI)

My pod template was labeled "name: templated-pod", but I had the selector in the service looking for:

  selector:
    app: templated-pod

It could not make the connection!

I changed the selector to the following and it worked:

  selector:
    name: templated-pod

Hope this helps others!!
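
(Editor's note: for anyone hitting the same 503s, a Service routes traffic only to pods whose labels match its selector exactly, key and value. A minimal sketch of the corrected Service from the question; everything is unchanged except the selector key, which switches from app to name so it matches the pod template's label:)

apiVersion: v1
kind: Service
metadata:
  name: "service-alb"
  namespace: default
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      name: http
  selector:
    name: templated-pod   # must match the pod template label "name: templated-pod"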

-- ErnieAndBert
Source: StackOverflow