In some cases, we have Services that return no response at all when we try to access them. Chrome shows ERR_EMPTY_RESPONSE, for example, and occasionally we see other errors as well, such as a 408, which I'm fairly sure is returned by the ELB rather than by our application itself.
After a long, involved investigation, including SSHing into the nodes themselves and experimenting with load balancers, we are still unsure at which layer the problem actually exists: in Kubernetes itself, or in the backing services from Amazon EKS (the ELB or otherwise).
What else could cause behaviour like this?
Adding

```yaml
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
```

fixed this for me.
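For anyone unsure where these go: they are annotations on the Service object's metadata, and the SSL ports setting expects an ACM certificate annotation alongside it. A minimal sketch; the service name, selector, and certificate ARN here are placeholders, not from the original answer:

```yaml
kind: Service
apiVersion: v1
metadata:
  name: my-service                     # placeholder name
  annotations:
    # Terminate TLS at the ELB and forward plain HTTP to the pods
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:REDACTED"  # your ACM cert ARN
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
spec:
  type: LoadBalancer
  selector:
    app: my-app                        # placeholder selector
  ports:
    - name: https
      protocol: TCP
      port: 443
      targetPort: 80
```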
After much investigation, we were fighting a number of issues:

* Our application didn't always behave the way we were expecting. Always check that first.
* In our Kubernetes Service manifest, we had set `externalTrafficPolicy: Local`, which probably should work, but was causing us problems. (This was while using a Classic Load Balancer, via `service.beta.kubernetes.io/aws-load-balancer-type: "clb"`.) So if you have problems with a CLB, either remove the `externalTrafficPolicy` or explicitly set it to the default `"Cluster"` value.
So our manifest is now:

```yaml
kind: Service
apiVersion: v1
metadata:
  name: apollo-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "clb"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:REDACTED"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
spec:
  externalTrafficPolicy: Cluster
  selector:
    app: apollo
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
    - name: https
      protocol: TCP
      port: 443
      targetPort: 80
  type: LoadBalancer
```
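Note that both the port 80 and port 443 listeners target container port 80: with `aws-load-balancer-backend-protocol: "http"`, the ELB terminates TLS and forwards plain HTTP to the pods, so the application itself never has to serve HTTPS.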