Unable to communicate locally with pods created on an AWS EC2 instance with Kubernetes

10/14/2019

I have created a simple nginx deployment on an Ubuntu EC2 instance and exposed it on a port through a service in the Kubernetes cluster, but I am unable to ping the pods even in the local environment. My pods are running fine and the services were also created successfully. I am sharing the output of some commands below.

kubectl get nodes

NAME               STATUS   ROLES    AGE     VERSION
ip-172-31-39-226   Ready    <none>   2d19h   v1.16.1
master-node        Ready    master   2d20h   v1.16.1

kubectl get po -o wide

NAME                                READY   STATUS    RESTARTS   AGE    IP              NODE               NOMINATED NODE   READINESS GATES
nginx-deployment-54f57cf6bf-dqt5v   1/1     Running   0          101m   192.168.39.17   ip-172-31-39-226   <none>           <none>
nginx-deployment-54f57cf6bf-gh4fz   1/1     Running   0          101m   192.168.39.16   ip-172-31-39-226   <none>           <none>
sample-nginx-857ffdb4f4-2rcvt       1/1     Running   0          20m    192.168.39.18   ip-172-31-39-226   <none>           <none>
sample-nginx-857ffdb4f4-tjh82       1/1     Running   0          20m    192.168.39.19   ip-172-31-39-226   <none>           <none>

kubectl get svc

NAME               TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes         ClusterIP      10.96.0.1       <none>        443/TCP        2d20h
nginx-deployment   NodePort       10.101.133.21   <none>        80:31165/TCP   50m
sample-nginx       LoadBalancer   10.100.77.31    <pending>     80:31854/TCP   19m

kubectl describe deployment nginx-deployment

Name:                   nginx-deployment
Namespace:              default
CreationTimestamp:      Mon, 14 Oct 2019 06:28:13 +0000
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 1
                        kubectl.kubernetes.io/last-applied-configuration:
                          {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"nginx-deployment","namespace":"default"},"spec":{"replica...
Selector:               app=nginx
Replicas:               2 desired | 2 updated | 2 total | 2 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx:1.7.9
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-deployment-54f57cf6bf (2/2 replicas created)
Events:          <none>
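
For reference, here is roughly what the manifest looks like, reconstructed from the describe and svc outputs above (the nodePort 31165 was assigned by the cluster, not set by hand):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-deployment
  namespace: default
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      nodePort: 31165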

Now I am unable to ping 192.168.39.17/16/18/19 from the master, and I am also not able to curl 172.31.39.226:31165/31854 from the master. Any help will be highly appreciated.

-- jeetdeveloper
amazon-web-services
cluster-computing
docker
kubernetes
nginx

1 Answer

10/14/2019

From the information you have provided, and from the discussion we had, the worker node has the Nginx pods running, and you have attached a NodePort service and a LoadBalancer service to them.

The only piece of information missing here is the machine from which you are trying to access the service.

So, I tried to reach the URL 52.201.242.84:31165. I think all you need to do is whitelist this port (or your source IP) for public access. This can be done via the security group attached to the worker node's EC2 instance.
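
As a rough sketch, assuming the AWS CLI is configured (the security group ID below is a placeholder; use the one attached to your worker node), the inbound rule could be added like this. Kubernetes assigns NodePorts from the 30000-32767 range by default, so you may prefer to open that whole range instead of a single port:

# open NodePort 31165 to the world; narrow the --cidr for tighter access
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 31165 \
    --cidr 0.0.0.0/0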

Now, the URL above is constructed from the public IP of the worker node followed by the NodePort of the attached service. So here is a simple formula you can use to build the exact address for reaching the pods:

Pod Access URL = <Public IP of Worker Node>:<NodePort>
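
For example, once the security group allows it, the nginx-deployment service from your outputs above should be reachable at the worker node's public IP on its NodePort:

curl http://52.201.242.84:31165
# expect the default nginx welcome page in the response
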
-- damitj07
Source: StackOverflow