Kubernetes on CentOS 7 in Vagrant cannot access ClusterIP

3/15/2017

Environment

Linux Version: Linux k8smaster 3.10.0-514.2.2.el7.x86_64 #1 SMP Tue Dec 6 23:06:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

Vagrant Version: 1.9.1

Kubernetes Version: 1.5.2

Flannel Version: 0.7.0

Kubernetes cluster

I used Vagrant to create three CentOS virtual machines and assigned each a private IP (eth0 is the default NAT network):

k8smaster: eth0:10.0.2.15   eth1:192.168.1.100
node01:    eth0:10.0.2.15   eth1:192.168.1.1
node02:    eth0:10.0.2.15   eth1:192.168.1.2
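
For reference, the layout can be checked on each VM like this (note that every VM shares the same eth0 address, and the default route goes out via eth0):

ip -4 addr show eth0    # 10.0.2.15 on every VM (the Vagrant NAT interface)
ip -4 addr show eth1    # the per-node private address (192.168.1.100 / .1 / .2)
route -n                # the default route points out of eth0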

Install Kubernetes with yum

On master: yum install kubernetes-master
On node:   yum install kubernetes-node
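
These packages only install the binaries and systemd units; the services then have to be enabled and started, roughly like this (a sketch assuming the stock unit names, with etcd and flannel coming from their own packages):

On master: yum install -y etcd
           for s in etcd kube-apiserver kube-controller-manager kube-scheduler; do systemctl enable $s; systemctl start $s; done
On node:   yum install -y flannel
           for s in flanneld docker kubelet kube-proxy; do systemctl enable $s; systemctl start $s; done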

Configure the cluster to make it available.
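
(For completeness: configuring it roughly means publishing the flannel network in etcd on the master and pointing the node services at the master. The sketch below assumes the default file locations of these packages and the address ranges that appear in the outputs further down.)

etcdctl set /atomic.io/network/config '{"Network":"10.1.0.0/16"}'   # on the master; the key path is the flannel package default
# on each node, point the services at the master, e.g.:
#   /etc/kubernetes/config  -> KUBE_MASTER="--master=http://192.168.1.100:8080"
#   /etc/sysconfig/flanneld -> FLANNEL_ETCD_ENDPOINTS="http://192.168.1.100:2379"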

[root@k8smaster ~]# kubectl get nodes
 NAME      STATUS    AGE
 node01    Ready     1d
 node02    Ready     1d

[root@k8smaster ~]# kubectl get pods -n kube-system
 NAME                                    READY     STATUS    RESTARTS   AGE
 heapster-1765662453-rzlhp               1/1       Running   0          20h
 kube-dns-4264603877-wd7nq               4/4       Running   4          23h
 kubernetes-dashboard-2405669852-7svw7   1/1       Running   0          19h
 monitoring-grafana-3730655072-4z3b8     1/1       Running   0          20h
 monitoring-influxdb-957705310-tvcqr     1/1       Running   0          20h

Now I will create a webapp to test the ClusterIP.
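
(The manifests are not shown here; an equivalent replication controller and service could be created like this. The tomcat image is only a guess; the ports match the listings below.)

kubectl run webapp --image=tomcat --replicas=2 --port=8080 --generator=run/v1
kubectl expose rc webapp --port=8081 --target-port=8080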

[root@k8smaster ~]# kubectl get pods -o wide
 NAME           READY     STATUS    RESTARTS   AGE       IP          NODE
 webapp-61skd   1/1       Running   0          19h       10.1.27.3   node02
 webapp-swqxg   1/1       Running   0          19h       10.1.21.5   node01
[root@k8smaster ~]# kubectl get svc
 NAME         CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
 kubernetes   10.254.0.1      <none>        443/TCP    1d
 webapp       10.254.247.62   <none>        8081/TCP   18h

Up to now, everything looks normal!

Now, when I visit the ClusterIP (10.254.247.62:8081) from node01, it is sometimes accessible and sometimes not! The same problem happens on node02.
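
A quick way to see the intermittent behaviour is to hit the service in a loop from a node (the 3-second timeout is arbitrary):

for i in $(seq 1 10); do curl -s -o /dev/null -m 3 -w "%{http_code}\n" http://10.254.247.62:8081/; done   # 200 = served, 000 = timed out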

Strange problem

I used Wireshark to capture packets on flannel0 and checked the container's access_log in the webapp pod.
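
For reference, the same capture can be taken with tcpdump on the node (after the service DNAT the destination port is 8080):

tcpdump -ni flannel0 'tcp port 8080'    # traffic towards the pod on the other node
tcpdump -ni docker0 'tcp port 8080'     # traffic towards the local pod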

When accessible:

The Wireshark capture: (nothing captured)

The access_log: 10.0.2.15 - - [15/Mar/2017:04:27:45 +0000] "GET / HTTP/1.1" 200 11250

When not accessible:

The Wireshark capture: 2081 3222.487736215 10.0.2.15 -> 10.1.27.3 TCP 60 57184 > http-alt [SYN] Seq=0 Win=29200 Len=0 MSS=1460 SACK_PERM=1 TSval=90394425 TSecr=0 WS=128

The access_log: (no new entries)
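
If conntrack-tools is installed, the NAT mapping of each attempt can also be listed; it shows which backend pod a hung connection was sent to:

conntrack -L -d 10.254.247.62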

Check iptables

[root@node02 ~]# iptables -t nat -S |grep webapp
 -A KUBE-SEP-GK5MEJWLZBFFJJ45 -s 10.1.21.5/32 -m comment --comment "default/webapp:" -j KUBE-MARK-MASQ
 -A KUBE-SEP-GK5MEJWLZBFFJJ45 -p tcp -m comment --comment "default/webapp:" -m tcp -j DNAT --to-destination 10.1.21.5:8080
 -A KUBE-SEP-V6PLSL5CQOUVXPSD -s 10.1.27.3/32 -m comment --comment "default/webapp:" -j KUBE-MARK-MASQ
 -A KUBE-SEP-V6PLSL5CQOUVXPSD -p tcp -m comment --comment "default/webapp:" -m tcp -j DNAT --to-destination 10.1.27.3:8080
 -A KUBE-SERVICES -d 10.254.247.62/32 -p tcp -m comment --comment "default/webapp: cluster IP" -m tcp --dport 8081 -j KUBE-SVC-BL7FHTIPVYJBLWZN
 -A KUBE-SVC-BL7FHTIPVYJBLWZN -m comment --comment "default/webapp:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-GK5MEJWLZBFFJJ45
 -A KUBE-SVC-BL7FHTIPVYJBLWZN -m comment --comment "default/webapp:" -j KUBE-SEP-V6PLSL5CQOUVXPSD

It seems normal!
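
The per-chain packet counters can also be watched to see whether the failures line up with one particular endpoint (the chain names are the ones listed above):

iptables -t nat -L KUBE-SVC-BL7FHTIPVYJBLWZN -n -v
iptables -t nat -L KUBE-SEP-GK5MEJWLZBFFJJ45 -n -v    # endpoint 10.1.21.5 on node01
iptables -t nat -L KUBE-SEP-V6PLSL5CQOUVXPSD -n -v    # endpoint 10.1.27.3 on node02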

I really do not know what is going on, and it is giving me a headache! Does anybody have a good solution for me?

-- Kevin Guo
kubernetes

2 Answers

3/15/2017

Uday, thanks for answering my question. I did as you said, but it does not work; it just turns 10.0.2.15 (my node's eth0 IP address) into 192.168.1.x (my node's eth1 IP address). You can see my route table:

[root@node01 kubernetes]# route -n
 Kernel IP routing table
 Destination     Gateway         Genmask         Flags Metric Ref    Use  Iface
 0.0.0.0         10.0.2.2        0.0.0.0         UG    100    0        0 eth0
 10.0.2.0        0.0.0.0         255.255.255.0   U     100    0        0 eth0
 10.1.0.0        0.0.0.0         255.255.0.0     U     0      0        0 flannel0
 10.1.21.0       0.0.0.0         255.255.255.0   U     0      0        0 docker0
 10.254.5.182    192.168.1.100   255.255.255.255 UGH   0      0        0 eth1
 169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 eth1
 192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth1

OK, when I add the following route, it works, but I do not know why. I have also tried adding a route out via flannel0, and that does not work!

route add -host <ClusterIP> dev docker0

Shouldn't all the packets have gone out through the flannel0 interface?
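
By the way, the interface and source address the kernel picks for a given destination can be checked with ip route get:

ip route get 10.254.247.62    # the ClusterIP itself (looked up before kube-proxy's DNAT)
ip route get 10.1.27.3        # the remote pod IP (looked up again after DNAT)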

-- Kevin Guo
Source: StackOverflow

3/15/2017

Kevin, the problem looks more like a routing issue. Try running the following command and see if it helps:

sudo route add <kubernetes-clusterip> gw <kube-master-ip>

on your minions.
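
With your webapp ClusterIP and the master's eth1 address that would be, for example (-host adds a single-address route):

sudo route add -host 10.254.247.62 gw 192.168.1.100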

-- Uday Kiran
Source: StackOverflow