For pod-to-pod communication, what IP should be used? The service's ClusterIP or the endpoint

12/28/2018

I've deployed the Rancher Helm chart to my Kubernetes cluster and want to access the Rancher API/UI from another pod (i.e. a pod running an ingress-controller).

When I list the services and the endpoints, the IP addresses differ:

$ kubectl get ep | grep rancher
release-name-rancher                         10.200.23.13:80                    18h

and

$ kubectl get services | grep rancher 
release-name-rancher                         ClusterIP   10.100.200.253   <none>        80/TCP                       18h

Within the container of the client (i.e. the ingress controller), I see the service being represented by the service's ClusterIP:

$ env | grep RELEASE_NAME_RANCHER_SERVICE_HOST
RELEASE_NAME_RANCHER_SERVICE_HOST=10.100.200.253

Trying to reach the backend via the IP address in the environment variable does not work (curl 10.100.200.253 returns no response and blocks forever).

Trying to reach the backend via the endpoint address works:

$ curl 10.200.23.13
<a href="https://10.200.23.13/">Found</a>.

I'm quite confused why the endpoint IP address and the ClusterIP address differ and why is it not possible to connect to the ClusterIP address. Any hints to polish my understanding?

-- Oliver Wolf
kubernetes

1 Answer

12/28/2018

In Kubernetes, every Pod and Service gets its own IP address. The kubectl get services IP address is the Kubernetes-internal address of the Service; the kubectl get ep address is the address of the Pod behind it. The Service actually acts like a load balancer, and there can be multiple Pods attached to it. The Kubernetes Service documentation goes into a lot of detail about what exactly is happening here.
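To illustrate the Service-to-Pod relationship, here's a minimal sketch of a ClusterIP Service (not taken from the Rancher chart; the `app: rancher` label is an assumption for illustration). The Service forwards traffic arriving on its ClusterIP to whichever Pods match its selector, and those Pods' IPs are what show up as the Endpoints:

```yaml
# Hypothetical sketch, not the actual chart manifest.
apiVersion: v1
kind: Service
metadata:
  name: release-name-rancher
spec:
  type: ClusterIP           # gets a stable virtual IP (e.g. 10.100.200.253)
  selector:
    app: rancher            # assumed label; matching Pods become the Endpoints
  ports:
    - port: 80              # the port on the Service's ClusterIP
      targetPort: 80        # the port on the backing Pod (the Endpoint address)
```

If a matching Pod is replaced, its new IP appears in the Endpoints automatically while the ClusterIP stays the same.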

Kubernetes also provides an internal DNS service that can resolve Service names. You generally shouldn't use any of these IP addresses directly; instead, use the host name release-name-rancher.default.svc.cluster.local (or replace "default" if you're running in some other Kubernetes namespace).
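From another Pod in the cluster, using the DNS name might look like this (assuming the default namespace and the default cluster DNS suffix cluster.local; this obviously needs a live cluster to run):

```shell
# Both forms resolve to the Service's ClusterIP via the cluster DNS.
curl http://release-name-rancher.default.svc.cluster.local/

# From a Pod in the same namespace, the short Service name works too:
curl http://release-name-rancher/
```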

While the ..._SERVICE_HOST environment variable you reference is supported and documented, I'd avoid using it. Of particular note, if you helm install or kubectl apply a large set of resources at once and the Pod gets created before the Service, you'll be in a consistent state except that the Pod won't actually have this environment variable. In Helm land, where Services don't have fixed names, the environment variable name won't be fixed either. Prefer the DNS name.

-- David Maze
Source: StackOverflow