Kubernetes load balancer stops serving traffic when using local traffic policy

9/12/2018

Currently I am having an issue with one of my services that is set to be a load balancer. I am trying to get source IP preservation as stated in the docs. However, when I set externalTrafficPolicy to Local, I lose all traffic to the service. Is there something I'm missing that is causing it to fail like this?

Load Balancer Service:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: loadbalancer
    role: loadbalancer-service
  name: lb-test
  namespace: default
spec:
  clusterIP: 10.3.249.57
  externalTrafficPolicy: Local
  ports:
  - name: example-service
    nodePort: 30581
    port: 8000
    protocol: TCP
    targetPort: 8000
  selector:
    app: loadbalancer-example
    role: example
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: *example.ip*
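
For reference, the selector can be checked against running pods with a quick kubectl sanity check (assuming kubectl is pointed at the affected cluster):

# Confirm the service has ready endpoints behind its selector
kubectl get endpoints lb-test

# Confirm which nodes the matching pods actually run on
kubectl get pods -l app=loadbalancer-example,role=example -o wide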
-- Pablo Marti Cordero
google-kubernetes-engine
kubernetes
load-balancing
networking
service

1 Answer

9/12/2018

Could be several things. A couple of suggestions:

  1. Your service is getting an external IP and may not know how to reply back based on the local IP address of the pod.
    • Try running a sniffer on your pod to see if you are getting packets from the external source (see the first sketch below).
    • Try checking the logs of your application.
  2. The health check in your load balancer is failing. Check the load balancer for your service in the GCP console (see the second sketch below).
    • Check that the instance port is listening (it probably is not, if the health check is failing).
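
A minimal sketch of both checks; <pod-name> below is a placeholder for one of the pods matched by your selector, and it assumes tcpdump is present in the container image (slim images often lack it):

# Sniff inbound traffic on the service's target port from inside the pod
kubectl exec -it <pod-name> -- tcpdump -ni any port 8000

# Tail the application logs for the same pod
kubectl logs -f <pod-name>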
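
On the health check point: with externalTrafficPolicy: Local, Kubernetes allocates a dedicated healthCheckNodePort for the service, and the GCP load balancer probes each node on that port. kube-proxy only reports healthy on nodes that are running a ready pod for the service, so nodes without a local pod are dropped from the load balancer. A sketch of how to inspect this (<node-ip> is a placeholder for one of your node addresses):

# Find the port the GCP health check probes
kubectl get svc lb-test -o jsonpath='{.spec.healthCheckNodePort}'

# Probe it on a node that should host a pod; kube-proxy replies with a
# JSON body whose localEndpoints count must be > 0 for the node to pass
curl http://<node-ip>:<healthCheckNodePort>/healthz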


Hope it helps.

-- Rico
Source: StackOverflow