How do I fix 'Failed to Connect' from an external load balancer deployed via a service in k8s?

9/25/2019

I've deployed a pod in AKS and I'm trying to connect to it via an external load balancer.

The troubleshooting steps I have taken so far are:

  • Verified with kubectl that the pod is deployed and running properly.
  • Logged into the pod with 'kubectl exec' and verified with netstat that port 80 is in the LISTENING state.
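
The two checks above can be run with kubectl like this, assuming the namespace from the manifest below and a placeholder pod name:

```shell
# Check that the pod is in the Running state and note its IP
kubectl -n qubo-gpu get pods -o wide

# From inside the pod, confirm something is listening on TCP port 80
kubectl -n qubo-gpu exec -it <pod-name> -- netstat -tln
```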

The .yaml file I used to deploy is:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: qubo
  namespace: qubo-gpu
spec:
  replicas: 1
  selector:
    matchLabels:
      app: qubo
  template:
    metadata:
      labels:
        app: qubo
    spec:
      containers:
        - name: qubo-ctr
          image: <Blanked out>
          resources:
            limits:
              nvidia.com/gpu: 1
          command: ["/app/xqx"]
          args: ["80"]
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: api
  namespace: qubo-gpu
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
  selector:
    app: qubo
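
One note on the Service: since it omits targetPort, that field defaults to the value of port (80 here), which happens to match the containerPort. Written out explicitly, the port mapping is:

```yaml
ports:
  - protocol: TCP
    port: 80        # port exposed by the Service / external load balancer
    targetPort: 80  # pod port traffic is forwarded to; defaults to `port` when omitted
```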
-- Rajesh
azure
azure-load-balancer
kubernetes

1 Answer

9/27/2019

It turned out to be a bug in my own code, in how I opened the socket. In the hope that this helps someone else, here is how I went about troubleshooting:

  1. Got IP for pod: kubectl get pods -o wide
  2. Created a new ubuntu pod in cluster: kubectl run -it --rm --restart=Never --image=ubuntu:18.04 ubuntu bash
  3. Downloaded curl to new pod: apt-get update && apt-get install -y curl
  4. Tried to curl to the pod IP (from step 1): curl -v -m5 http://<Pod IP>:80

Step 4 failed for me, even though I was able to run the Docker container on my local machine and connect to it. The issue was that my server bound the socket to localhost instead of 0.0.0.0, so it only accepted connections originating inside the container.
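The difference between the two bind addresses can be sketched with plain sockets (a minimal Python illustration, not the original server code):

```python
import socket

# A socket bound to 127.0.0.1 only accepts connections arriving on the
# loopback interface, so traffic from other pods or a load balancer is refused.
loopback_only = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
loopback_only.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port

# A socket bound to 0.0.0.0 listens on every interface, including the pod's
# cluster IP, which is what a containerized server almost always wants.
all_interfaces = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
all_interfaces.bind(("0.0.0.0", 0))

print(loopback_only.getsockname()[0])   # 127.0.0.1
print(all_interfaces.getsockname()[0])  # 0.0.0.0

loopback_only.close()
all_interfaces.close()
```

Inside a container, "my machine" and "the container" are the same network namespace, which is why the local Docker test succeeded while in-cluster traffic was refused.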

-- Rajesh
Source: StackOverflow