Cannot connect to Kubernetes pod from master: i/o timeout

7/18/2018

I configured a Kubernetes cluster with one master and one node; the machines that run the master and the node are not on the same network. For networking I installed Calico and all the pods are running. To test the cluster I used the get-shell example, and when I run the following command from the master machine:

kubectl exec -it shell-demo -- /bin/bash

I received the error:

Error from server: error dialing backend: dial tcp 10.138.0.2:10250: i/o timeout

The IP 10.138.0.2 is on the eth0 interface of the node machine.
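
For what it's worth, a quick way to see whether the kubelet port (10250) on the node is reachable from the master at all is a plain TCP probe (assuming nc is installed on the master):

nc -vz 10.138.0.2 10250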

What configuration do I need to make to access the pod from the master?

EDIT

kubectl get all --all-namespaces -o wide output:

NAMESPACE     NAME                                 READY     STATUS    RESTARTS   AGE       IP            NODE
default       shell-demo                           1/1       Running   0          10s       192.168.4.2   node-1
kube-system   calico-node-7wlqw                    2/2       Running   0          49m       10.156.0.2    instance-1
kube-system   calico-node-lnk6d                    2/2       Running   0          35s       10.132.0.2    node-1
kube-system   coredns-78fcdf6894-cxgc2             1/1       Running   0          50m       192.168.0.5   instance-1
kube-system   coredns-78fcdf6894-gwwjp             1/1       Running   0          50m       192.168.0.4   instance-1
kube-system   etcd-instance-1                      1/1       Running   0          49m       10.156.0.2    instance-1
kube-system   kube-apiserver-instance-1            1/1       Running   0          49m       10.156.0.2    instance-1
kube-system   kube-controller-manager-instance-1   1/1       Running   0          49m       10.156.0.2    instance-1
kube-system   kube-proxy-b64b5                     1/1       Running   0          50m       10.156.0.2    instance-1
kube-system   kube-proxy-xxkn4                     1/1       Running   0          35s       10.132.0.2    node-1
kube-system   kube-scheduler-instance-1            1/1       Running   0          49m       10.156.0.2    instance-1

Thanks!

-- Dorin
kubernetes

2 Answers

12/6/2019

I had this issue too. I don't know if you're on Azure, but I am, and I solved this by deleting the tunnelfront pod and letting Kubernetes restart it:

kubectl -n kube-system delete po -l component=tunnel

This is a solution I got from here.
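
To confirm that Kubernetes has recreated the tunnel pod, the same label selector from the delete command can be reused:

kubectl -n kube-system get pods -l component=tunnel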

-- Lee Richardson
Source: StackOverflow

7/18/2018

Before checking the status on the master, please verify the things below.

Run the commands below to set SELinux to permissive, open the required ports in the firewall, and allow bridged traffic to pass through iptables:

setenforce 0                                        # set SELinux to permissive for the current session
firewall-cmd --permanent --add-port=6443/tcp        # Kubernetes API server
firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd server client API
firewall-cmd --permanent --add-port=10250/tcp       # kubelet API (the port from the error above)
firewall-cmd --permanent --add-port=10251/tcp       # kube-scheduler
firewall-cmd --permanent --add-port=10252/tcp       # kube-controller-manager
firewall-cmd --permanent --add-port=10255/tcp       # read-only kubelet API
firewall-cmd --reload                               # apply the firewall changes
modprobe br_netfilter                               # load the bridge netfilter module
echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables   # let iptables see bridged traffic

Run the above commands on both the master and the worker node.
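
Note that setenforce 0 and the echo into /proc only last until the next reboot. A rough sketch of making those two settings persistent (assuming a firewalld-based distribution such as CentOS/RHEL) looks like this:

sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config   # keep SELinux permissive across reboots
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system   # reload sysctl settings from all configuration files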

Then run the command below to check the node status.

kubectl get nodes
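
If both nodes report Ready, re-running the exec command from the question will show whether the kubelet port is now reachable from the master:

kubectl exec -it shell-demo -- /bin/bash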

-- sonu rawal
Source: StackOverflow