AWS ALB unhealthy target after rolling update of deployment

4/24/2019

I have an EKS cluster with the aws-alb-ingress-controller, which controls the setup of an AWS ALB pointing at the EKS cluster.

After a rolling update of one of the deployments, the application failed and the new Pod never started (it is stuck in status CrashLoopBackOff). The previous version of the Pod is still running, but the corresponding ALB target is still reported as unhealthy:

[Screenshot: the ALB target group reports the target as unhealthy]

This means all traffic is now redirected to the default backend, which is a different service. The Kubernetes service backing the deployment is of type NodePort:

Type:                     NodePort
IP:                       172.20.186.130
Port:                     http-service  80/TCP
TargetPort:               5000/TCP
NodePort:                 http-service  31692/TCP
Endpoints:                10.0.3.55:5000
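
For reference, a Service manifest matching the output above would look roughly like the following sketch (the selector is illustrative and not part of the original output; the NodePort is auto-assigned by Kubernetes):

apiVersion: v1
kind: Service
metadata:
  name: http-service
spec:
  type: NodePort
  selector:
    app: http-service   # illustrative; the real selector is not shown above
  ports:
    - name: http-service
      port: 80          # Port shown above
      targetPort: 5000  # TargetPort shown above
      protocol: TCP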

What is causing the endpoint to become unhealthy? I expected the ALB to simply keep routing traffic to the old version of the Pod, which is still running. Is there any way I can ensure that the endpoint remains healthy?

-- Blokje5
amazon-alb
aws-eks
kubernetes
kubernetes-ingress

1 Answer

4/25/2019

The problem was that, while the application was healthy from Kubernetes' point of view, the ALB performed its own health check. By default this health check expects a 200 response from the / path, but this specific application did not return a 200 response on that endpoint.
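
For context, the controller's health-check behaviour can be expressed as annotations. This is a sketch of the effective aws-alb-ingress-controller v1.x defaults as I understand them, not configuration taken from the original post:

# Health-check settings applied by aws-alb-ingress-controller v1.x
# when no annotations are set (assumed defaults)
alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
alb.ingress.kubernetes.io/healthcheck-path: /
alb.ingress.kubernetes.io/success-codes: "200"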

Since the ALB is managed by the alb-ingress-controller, I added an annotation to my Ingress to configure the correct path: alb.ingress.kubernetes.io/healthcheck-path: /health. Because we are working with Spring microservices, this endpoint exists on all our applications. A minimal sketch of the annotated Ingress follows below.
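
Only the healthcheck-path annotation comes from the fix above; the resource name, scheme, target type, and rules in this sketch are illustrative assumptions:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: http-service                                   # illustrative name
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing   # assumption
    alb.ingress.kubernetes.io/target-type: instance     # matches the NodePort service above
    alb.ingress.kubernetes.io/healthcheck-path: /health # the actual fix
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: http-service
              servicePort: 80

Once the controller reconciles the Ingress, the target group's health check path is updated, and the still-running Pod should pass the check and receive traffic again.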

-- Blokje5
Source: StackOverflow