AWS EKS: Service(LoadBalancer) running but not responding to requests

2/27/2019

I am using AWS EKS. I have deployed my Django app with Gunicorn in the Kubernetes cluster. Here is my Deployment manifest:

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: api
  labels:
    app: api
    type: web
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: api
        type: web
    spec:
      containers:
      - name: vogofleet
        image: xx.xx.com/api:image2
        imagePullPolicy: Always
        env:
        - name: DATABASE_HOST
          value: "test-db-2.xx.xx.xx.xx.com"
        - name: DATABASE_PASSWORD
          value: "xxxyyyxxx"
        - name: DATABASE_USER
          value: "admin"
        - name: DATABASE_PORT
          value: "5432"
        - name: DATABASE_NAME
          value: "test"
        ports:
        - containerPort: 9000

I have applied this manifest and I can see my pod running in kubectl get pods.
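
For context, the container entrypoint is not part of the manifest above; a typical Gunicorn start command for a setup like this looks roughly like the line below (the WSGI module name is a placeholder, since the real entrypoint is baked into the image):

# Hypothetical entrypoint; "myproject.wsgi" is a placeholder. Gunicorn binds to
# 127.0.0.1:8000 by default, so the bind address and port have to match the
# containerPort (9000) declared above for traffic from outside the pod to reach it.
gunicorn myproject.wsgi:application --bind 0.0.0.0:9000 --workers 3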

Now, I am trying to expose it via a Service object. Here is my Service manifest:

# service
---
apiVersion: v1
kind: Service
metadata:
  name: api
  labels:
    app: api
spec:
  ports:
    - port: 9000
      protocol: TCP
      targetPort: 9000
  selector:
    app: api
    type: web
  type: LoadBalancer

The service is also up and running. It has given me an external IP to access the service, which is the address of the load balancer, and I can see that a new load balancer has been launched in the AWS console. But I am not able to access it from the browser; it says the address didn't return any data. The ELB health checks show the instances as OutOfService.
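
To confirm the Service selector is actually picking up the pod, I can check its endpoints; the commands below are a sketch of that check (run against whatever namespace the api resources were created in):

# If the selector matches the pod, "kubectl get endpoints api" lists the pod IP
# and port under ENDPOINTS; an empty ENDPOINTS column would mean no match.
kubectl get endpoints api
kubectl describe service api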

There are other pods running in the cluster. When I run printenv in one of those pods, here is the result:

root@consumer-9444cf7cd-4dr5z:/consumer# printenv | grep API
API_PORT_9000_TCP_ADDR=172.20.140.213
API_SERVICE_HOST=172.20.140.213
API_PORT_9000_TCP_PORT=9000
API_PORT=tcp://172.20.140.213:9000
API_PORT_9000_TCP=tcp://172.20.140.213:9000
API_PORT_9000_TCP_PROTO=tcp
API_SERVICE_PORT=9000

And I tried to check the connection to my api service from that pod:

root@consumer-9444cf7cd-4dr5z:/consumer# telnet $API_PORT_9000_TCP_ADDR $API_PORT_9000_TCP_PORT
Trying 172.20.140.213...
telnet: Unable to connect to remote host: Connection refused
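
Just to rule out the injected environment variables, the same check can be done against the service DNS name instead (assuming the api service lives in the default namespace and nc is available in that image):

# Run from inside the consumer pod; "default" is an assumption about the namespace.
nc -vz api.default.svc.cluster.local 9000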

But when I port-forward the pod to my localhost, I can access it there:

$ kubectl port-forward api-6d94dcb65d-br6px 9000

and check the connection:

$ nc -vz localhost 9000
found 0 associations
found 1 connections:
     1: flags=82<CONNECTED,PREFERRED>
  outif lo0
  src ::1 port 53299
  dst ::1 port 9000
  rank info not available
  TCP aux info available

Connection to localhost port 9000 [tcp/cslistener] succeeded!
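
One more check I can run against the pod itself is to see which address Gunicorn is actually listening on (assuming netstat or a similar tool is available inside the image):

# Shows the listening sockets inside the api pod, e.g. whether the process is
# bound to 0.0.0.0:9000 or only to 127.0.0.1:9000. Tool availability in the
# image is an assumption.
kubectl exec -it api-6d94dcb65d-br6px -- netstat -tlnp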

Why am I not able to access it from other containers or from the public internet? The security groups are correct.

-- Luv33preet
amazon-eks
amazon-web-services
kubernetes
kubernetes-service

0 Answers