I have a 3-node Kubernetes cluster on AWS (one master and two worker nodes) that I created with kubeadm (https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/).
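For reference, the cluster was brought up roughly like this, following the kubeadm guide. This is only a sketch: the pod CIDR (flannel's default), the flannel manifest file name, and the join token/hash are placeholders, not my exact values.

```bash
# On the master: initialize the control plane with flannel's default pod CIDR
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Install the flannel CNI plugin (manifest from the flannel repo)
kubectl apply -f kube-flannel.yml

# On each worker node: join using the token printed by `kubeadm init`
sudo kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```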
I have created some deployments from the master node, and I can see that pods for each deployment are scheduled on the two worker nodes. The issue is that I can't reach a pod IP from the master or from the other worker node; each pod IP is only reachable from the node where that pod is running.
I also have a Service of type NodePort, so when a request through the service (pod1:port) has to reach a pod on the other node (pod2), it hangs and times out.
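To make the symptom concrete, this is roughly how it shows up. The pod IP and port below are placeholders, not my actual values.

```bash
# Show pod IPs and the nodes they were scheduled on
kubectl get pods -o wide

# From the node where pod2 is running, its pod IP responds
curl http://<pod2-ip>:<container-port>

# From the master or the other worker node, the same request hangs
curl --max-time 10 http://<pod2-ip>:<container-port>   # times out
```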
Thanks.
It works either by disabling the firewall or by running the command below.
I found this bug during my search; it looks like it is related to Docker >= 1.13 and flannel.
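The command itself is not shown above, so this is an assumption on my part: given the Docker >= 1.13 change (its default iptables FORWARD chain policy became DROP, which blocks cross-node pod traffic under flannel), the likely fix is to reset the FORWARD policy on every node.

```bash
# Run on the master and on both worker nodes:
# allow forwarded traffic again so flannel's cross-node pod traffic passes
sudo iptables -P FORWARD ACCEPT
```

Note that this change does not survive a reboot or a Docker restart on its own, so it may need to be reapplied or persisted through your distribution's firewall tooling.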