Cannot curl pod on a different node in Kubernetes

2/5/2019

I have a kubernetes cluster (using flannel):

kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:54:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.5", GitCommit:"51dd616cdd25d6ee22c83a858773b607328a18ec", GitTreeState:"clean", BuildDate:"2019-01-16T18:14:49Z", GoVersion:"go1.10.7", Compiler:"gc", Platform:"linux/amd64"}

kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}

Everything seems to be running okay:

$ kubectl get pods -n kube-system
NAME                            READY   STATUS    RESTARTS   AGE
coredns-576cbf47c7-q7ncm        1/1     Running   1          30m
coredns-576cbf47c7-tclp8        1/1     Running   1          30m
etcd-kube1                      1/1     Running   1          30m
kube-apiserver-kube1            1/1     Running   1          30m
kube-controller-manager-kube1   1/1     Running   1          30m
kube-flannel-ds-amd64-6vlkx     1/1     Running   1          30m
kube-flannel-ds-amd64-7twk8     1/1     Running   1          30m
kube-flannel-ds-amd64-rqzr7     1/1     Running   1          30m
kube-proxy-krfzk                1/1     Running   1          30m
kube-proxy-vrssw                1/1     Running   1          30m
kube-proxy-xlrgz                1/1     Running   1          30m
kube-scheduler-kube1            1/1     Running   1          30m

Now I've deployed two nginx pods (without a service). I've also created a busybox pod:

kubectl get pods -o wide
NAME                   READY   STATUS    RESTARTS   AGE   IP           NODE    NOMINATED NODE
busybox                1/1     Running   2          30m   10.244.2.5   kube2   <none>
nginx-d55b94fd-l7swz   1/1     Running   1          30m   10.244.2.4   kube2   <none>
nginx-d55b94fd-zg7sj   1/1     Running   1          30m   10.244.1.6   kube3   <none>
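
For reference, a setup like this can be reproduced with roughly the following (a sketch: the image names are assumptions, and the busybox image must bundle curl, e.g. radial/busyboxplus:curl, since stock busybox does not ship it):

# Hypothetical reproduction; on 1.12, `kubectl run` with --replicas still generates a Deployment
kubectl run nginx --image=nginx --replicas=2
# Pod with a curl-capable busybox image (an assumption)
kubectl run busybox --image=radial/busyboxplus:curl --restart=Never -- sleep 3600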

When I curl from inside the busybox pod to the nginx pod on the same node (kube2), it works:

kubectl exec busybox -- curl 10.244.2.4
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   612  100   612    0     0   357k      0 --:--:-- --:--:-- --:--:--  597k
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

But when I curl the nginx pod on the other node (kube3), it fails:

 kubectl exec busybox -- curl 10.244.1.6
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (7) Failed to connect to 10.244.1.6 port 80: No route to host

How can I debug this? What could be wrong? (All firewalls are turned off/disabled.)

Additional info:

$ sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1


$ sysctl net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-iptables = 1

Network info from inside the busybox pod:

kubectl exec -it busybox ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
3: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
    link/ether 0a:58:0a:f4:02:05 brd ff:ff:ff:ff:ff:ff
    inet 10.244.2.5/24 scope global eth0
       valid_lft forever preferred_lft forever

Iptables:

vagrant@kube1:~$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
KUBE-EXTERNAL-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes externally-visible service portals */
KUBE-FIREWALL  all  --  anywhere             anywhere

Chain FORWARD (policy DROP)
target     prot opt source               destination
KUBE-FORWARD  all  --  anywhere             anywhere             /* kubernetes forwarding rules */
DOCKER-USER  all  --  anywhere             anywhere
DOCKER-ISOLATION-STAGE-1  all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
DOCKER     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere
ACCEPT     all  --  kube1/16             anywhere
ACCEPT     all  --  anywhere             kube1/16

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
KUBE-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes service portals */
KUBE-FIREWALL  all  --  anywhere             anywhere

Chain DOCKER (1 references)
target     prot opt source               destination

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target     prot opt source               destination
DOCKER-ISOLATION-STAGE-2  all  --  anywhere             anywhere
RETURN     all  --  anywhere             anywhere

Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target     prot opt source               destination
DROP       all  --  anywhere             anywhere
RETURN     all  --  anywhere             anywhere

Chain DOCKER-USER (1 references)
target     prot opt source               destination
RETURN     all  --  anywhere             anywhere

Chain KUBE-EXTERNAL-SERVICES (1 references)
target     prot opt source               destination

Chain KUBE-FIREWALL (2 references)
target     prot opt source               destination
DROP       all  --  anywhere             anywhere             /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000

Chain KUBE-FORWARD (1 references)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere             /* kubernetes forwarding rules */ mark match 0x4000/0x4000
ACCEPT     all  --  kube1/16             anywhere             /* kubernetes forwarding conntrack pod source rule */ ctstate RELATED,ESTABLISHED
ACCEPT     all  --  anywhere             kube1/16             /* kubernetes forwarding conntrack pod destination rule */ ctstate RELATED,ESTABLISHED

Chain KUBE-SERVICES (1 references)
target     prot opt source               destination
-- DenCowboy
Tags: docker, flannel, kubernetes, pod

2 Answers

2/6/2019

Most likely the problem is with iptables.

Run the command below on every node and check again:

iptables -P FORWARD ACCEPT
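
Note that Docker 1.13+ sets the FORWARD chain policy to DROP when it starts, which matches the "Chain FORWARD (policy DROP)" visible in the question's dump, so a Docker restart can undo the command above. One way to persist it, as a sketch assuming a systemd host (the unit name forward-accept.service is hypothetical):

# Hypothetical persistence via a systemd oneshot unit
sudo tee /etc/systemd/system/forward-accept.service <<'EOF' >/dev/null
[Unit]
Description=Set iptables FORWARD policy to ACCEPT after Docker starts
After=docker.service

[Service]
Type=oneshot
ExecStart=/sbin/iptables -P FORWARD ACCEPT

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl enable forward-accept.service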

-- P Ekambaram
Source: StackOverflow

2/6/2019

How can I debug this? What can be wrong? (All firewalls are turned off/disabled)

This may be the problem if you have disabled iptables on your nodes. The overlay (flannel) sets up iptables rules to allow pod-to-pod traffic. You can check on your K8s nodes with something like this:

iptables-save  | grep 10.244.2.4
iptables-save  | grep 10.244.2.5
iptables-save  | grep 10.244.1.6
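
Since the dump in the question shows "Chain FORWARD (policy DROP)", it is also worth checking the chain policy and flannel's routes directly on each node, for example (a sketch):

sudo iptables -S FORWARD | head -n1   # first line is the policy, e.g. "-P FORWARD DROP"
ip route | grep 10.244                # flannel should install one route per node subnet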

You should see rules like these for port 80:

-A KUBE-SEP-XXXXXXXXXXXXXXXX -s 10.244.2.4/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-XXXXXXXXXXXXXXXX -p tcp -m tcp -j DNAT --to-destination 10.244.2.4:80
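
For context: KUBE-SEP chains like these are written by kube-proxy for Service endpoints. The first rule marks hairpin traffic (an endpoint pod reaching itself through its own Service) for masquerading, and the second DNATs the Service IP to the pod endpoint. If these rules are missing, or the FORWARD policy drops the forwarded packets, cross-node pod traffic fails exactly as shown in the question.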
-- Rico
Source: StackOverflow