Kubernetes defaults to HTTP health checks when using an AWS Network Load Balancer with externalTrafficPolicy set to Local

4/22/2020

I'm trying to set up an AWS NLB for a service running on a Kubernetes cluster, with TCP health checks for the backend service. Kubernetes always creates HTTP health checks for the service when externalTrafficPolicy is set to Local, and only creates TCP health checks when it is set to Cluster. These are the relevant fields on my Service (a sketch of the full manifest is below):

.metadata.annotations.service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
.metadata.annotations.service.beta.kubernetes.io/aws-load-balancer-internal: "true"
.metadata.annotations.service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
.spec.ports.protocol: TCP
.spec.externalTrafficPolicy: Cluster / Local
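
For context, those dotted paths come from a Service manifest roughly like this (the service name, selector, and port numbers here are placeholders, not my real values):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service                 # placeholder name
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
        service.beta.kubernetes.io/aws-load-balancer-internal: "true"
        service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
    spec:
      type: LoadBalancer
      externalTrafficPolicy: Local     # TCP health checks only appear if I switch this to Cluster
      selector:
        app: my-app                    # placeholder selector
      ports:
        - protocol: TCP
          port: 443                    # placeholder port
          targetPort: 8443             # placeholder target port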

Only when I switch externalTrafficPolicy from Local to Cluster do I get TCP health checks on my target groups; otherwise it sets up an HTTP check on /healthz, which fails for me.
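
From what I can see, the HTTP check is wired to the healthCheckNodePort that Kubernetes allocates on the Service whenever the policy is Local (the service name and port number below are just examples):

    # Visible via: kubectl get svc my-service -o yaml
    spec:
      type: LoadBalancer
      externalTrafficPolicy: Local
      healthCheckNodePort: 32456   # example value; the target group check is HTTP /healthz on this node port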

I'm trying to avoid the extra load balancing that kube-proxy does under the Cluster externalTrafficPolicy, since it can add a hop across node boundaries.

Is there something that I'm missing?

-- drub
amazon-web-services
aws-load-balancer
kubernetes

0 Answers