Cannot access Kubernetes pod's exposed external IP on Google Cloud

10/6/2018

I have created a sample Node.js app along with the required files (deployment.yml, service.yml), but I am not able to reach the service on its external IP.
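For reference, the service.yml reduces to something like the following (a simplified sketch reconstructed from the `kubectl describe` output below, not the exact file):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: node-api
spec:
  type: LoadBalancer
  selector:
    app: node-api      # must match the pod labels from the deployment
  ports:
    - port: 8000       # port exposed by the load balancer
      targetPort: 8000 # port the container actually listens on
```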

#kubectl get services

    NAME         TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)          AGE
    kubernetes   ClusterIP      10.7.240.1    <none>           443/TCP          23h
    node-api     LoadBalancer   10.7.254.32   35.193.227.250   8000:30164/TCP   4m37s

#kubectl get pods

    NAME                        READY   STATUS    RESTARTS   AGE
    node-api-6b9c8b4479-nclgl   1/1     Running   0          5m55s

#kubectl describe svc node-api

    Name:                     node-api
    Namespace:                default
    Labels:                   <none>
    Annotations:              <none>
    Selector:                 app=node-api
    Type:                     LoadBalancer
    IP:                       10.7.254.32
    LoadBalancer Ingress:     35.193.227.250
    Port:                     <unset>  8000/TCP
    TargetPort:               8000/TCP
    NodePort:                 <unset>  30164/TCP
    Endpoints:                10.4.0.12:8000
    Session Affinity:         None
    External Traffic Policy:  Cluster
    Events:
      Type    Reason                Age    From                Message
      ----    ------                ----   ----                -------
      Normal  EnsuringLoadBalancer  6m19s  service-controller  Ensuring load balancer
      Normal  EnsuredLoadBalancer   5m25s  service-controller  Ensured load balancer

When I try to curl the external IP, the connection is refused:

    curl 35.193.227.250:8000
    curl: (7) Failed to connect to 35.193.227.250 port 8000: Connection refused

I have also exposed port 8000 in the Dockerfile. Let me know if I am missing anything.

-- Siddharth Chaurasia
google-cloud-platform
google-kubernetes-engine
kubernetes

1 Answer

12/6/2018

Looking at your description, everything seems fine. Here is what you can try:

  1. SSH to the GKE node where the pod is running. You can get the node name by running the same command with the "-o wide" flag:

    $ kubectl get pods -o wide

After SSHing in, try to curl the cluster IP as well as the service IP to see whether you get a response.

  2. Exec into the pod:

    $ kubectl exec -it node-api-6b9c8b4479-nclgl -- /bin/bash

Once inside, curl localhost to see whether you get a response:

    $ curl localhost:8000

If you do get responses from the above troubleshooting steps, the issue could lie in GKE itself, and you can file a defect report with Google.

If you do not get any response while trying the above steps, it is possible that you have misconfigured the cluster somewhere.

This seems to me like a good starting point for troubleshooting your use case.

-- Rahi
Source: StackOverflow