Kubernetes ClusterIP service unable to route requests to containers on other nodes

1/8/2017

We have a 3-node Kubernetes cluster on physical machines running CentOS 7. One machine serves as both master and worker; the other two are workers only.

I have a service as defined below.

kubectl get service hostnames -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2017-01-08T21:26:54Z
  name: hostnames
  namespace: default
  resourceVersion: "1209904"
  selfLink: /api/v1/namespaces/default/services/hostnames
  uid: 2d6b6ffe-d5e9-11e6-b2d8-842b2b55e882
spec:
  clusterIP: 10.254.241.39
  ports:
  - port: 80
    protocol: TCP
    targetPort: 9376
  selector:
    app: hostnames
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

Invoking the service works only when the request happens to be routed to the container running on the same machine; requests routed to containers on the other nodes hang.

[root@server5 hostnames]# curl 10.254.241.39:80
^C
[root@server5 hostnames]# curl 10.254.241.39:80
hostnames-9ga5b
[root@server5 hostnames]# curl 10.254.241.39:80
hostnames-9ga5b
[root@server5 hostnames]# curl 10.254.241.39:80

The endpoints exist, and invoking each endpoint's IP address directly works.

[root@server5 hostnames]# curl 10.20.36.4:9376; curl 10.20.48.6:9376; curl 10.20.63.2:9376
hostnames-9ga5b
hostnames-ygxnk
hostnames-vcfql

The iptables rules created by kube-proxy are shown below.

[root@server5 hostnames]# iptables-save | grep hostnames
-A KUBE-SEP-3UQOVTFJM332BGMS -s 10.20.48.6/32 -m comment --comment "default/hostnames:" -j KUBE-MARK-MASQ
-A KUBE-SEP-3UQOVTFJM332BGMS -p tcp -m comment --comment "default/hostnames:" -m tcp -j DNAT --to-destination 10.20.48.6:9376
-A KUBE-SEP-6ZUKVGLXRG6BMNNI -s 10.20.63.2/32 -m comment --comment "default/hostnames:" -j KUBE-MARK-MASQ
-A KUBE-SEP-6ZUKVGLXRG6BMNNI -p tcp -m comment --comment "default/hostnames:" -m tcp -j DNAT --to-destination 10.20.63.2:9376
-A KUBE-SEP-UMK676VFQ5WVT4CI -s 10.20.36.4/32 -m comment --comment "default/hostnames:" -j KUBE-MARK-MASQ
-A KUBE-SEP-UMK676VFQ5WVT4CI -p tcp -m comment --comment "default/hostnames:" -m tcp -j DNAT --to-destination 10.20.36.4:9376
-A KUBE-SERVICES -d 10.254.241.39/32 -p tcp -m comment --comment "default/hostnames: cluster IP" -m tcp --dport 80 -j KUBE-SVC-NWV5X2332I4OT4T3
-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment "default/hostnames:" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-UMK676VFQ5WVT4CI
-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment "default/hostnames:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-3UQOVTFJM332BGMS
-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment "default/hostnames:" -j KUBE-SEP-6ZUKVGLXRG6BMNNI
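As far as I understand, the statistic-match probabilities in those rules are normal: kube-proxy tries the three KUBE-SEP chains in order, so the first rule takes roughly 1/3 of all traffic, the second takes half of the remaining 2/3, and the last rule takes everything left, which works out to an even split. A quick sketch to confirm the arithmetic:

```shell
#!/bin/sh
# Sketch: the sequential probabilities 1/3, 1/2, 1 in the KUBE-SVC chain
# give each of the three endpoints an equal overall share of traffic.
p1=$(awk 'BEGIN { printf "%.4f", 1/3 }')                        # first rule: 1/3 of all traffic
p2=$(awk 'BEGIN { printf "%.4f", (1 - 1/3) * 0.5 }')            # second rule: half of the remaining 2/3
p3=$(awk 'BEGIN { printf "%.4f", 1 - 1/3 - (1 - 1/3) * 0.5 }')  # last rule: everything left over
echo "$p1 $p2 $p3"   # 0.3333 0.3333 0.3333
```

So the load-balancing rules themselves look correct; the failure must be somewhere after the DNAT.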

The kube-proxy logs show no errors, even after increasing the logging level with -v=4.

We checked the behavior on the other 2 machines and it is identical: the same iptables rules, the service routing requests successfully only to the local container, and the endpoints reachable directly via their container IP addresses.

Is there a reason the Kubernetes service is unable to route requests to containers running on other physical machines? The firewall on all machines is disabled and stopped.
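Since the pod IPs are reachable directly but the ClusterIP is not, the DNAT'd traffic seems to be what gets lost. In case it helps, here is a sketch of the extra checks I can run on each node (these are standard kernel sysctls, not something specific to our setup; the expected values are my assumption for a working cluster):

```shell
# Checks to run on every node. The service VIP failing while the pod IPs
# work usually points at NAT/bridge settings rather than the overlay itself.

# Bridged packets must pass through iptables for the DNAT rules to apply
# to container traffic (expect 1):
sysctl net.bridge.bridge-nf-call-iptables

# IP forwarding must be on for cross-node service traffic (expect 1):
sysctl net.ipv4.ip_forward

# Strict reverse-path filtering can silently drop the DNAT'd replies;
# 0 (off) or 2 (loose) is safer here:
sysctl net.ipv4.conf.all.rp_filter
```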

Thanks.

-- Developer
kubernetes
networking
service

0 Answers