Service not available in Kubernetes

5/2/2017

I have a minikube cluster running locally (v0.17.1), with two deployments: one is a Redis instance and one is a custom app that is trying to connect to the Redis instance. My configuration is more or less copy/pasted from the official docs and the Kubernetes guestbook example.

Service definition and deployment:

apiVersion: v1
kind: Service
metadata:
  name: poller-redis
  labels:
    app: poller-redis
    tier: backend
    role: database
    target: poller
spec:
  selector:
    app: poller
    tier: backend
    role: service
  ports:
  - port: 6379
    targetPort: 6379

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: poller-redis
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: poller-redis
        tier: backend
        role: database
        target: poller
    spec:
      containers:
      - name: poller-redis
        image: gcr.io/jmen-1266/jmen-redis:a67b5f4bfd8ea8441ed66a8fcb6596f276017a1c
        ports:
        - containerPort: 6379
        env:
        - name: GET_HOSTS_FROM
          value: dns
      imagePullSecrets:
      - name: gcr-json-key

App deployment:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: poller
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: poller
        tier: backend
        role: service
    spec:
      containers:
      - name: poller
        image: gcr.io/jmen-1266/poller:a96a452292e894e46339309cc024cac67647cc25
        imagePullPolicy: Always
      imagePullSecrets:
      - name: gcr-json-key

Relevant (I hope) Kubernetes info:

$ kubectl get services
NAME           CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
kubernetes     10.0.0.1     <none>        443/TCP    24d
poller-redis   10.0.0.137   <none>        6379/TCP   20d

$ kubectl get deployments
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
poller         1         1         1            1           12d
poller-redis   1         1         1            1           4d

$ kubectl get endpoints
NAME           ENDPOINTS           AGE
kubernetes     10.0.2.15:8443      24d
poller-redis   172.17.0.7:6379     20d

Inside the poller pod (custom app), I get environment variables created for Redis:

# env | grep REDIS
POLLER_REDIS_SERVICE_HOST=10.0.0.137
POLLER_REDIS_SERVICE_PORT=6379
POLLER_REDIS_PORT=tcp://10.0.0.137:6379
POLLER_REDIS_PORT_6379_TCP_ADDR=10.0.0.137
POLLER_REDIS_PORT_6379_TCP_PORT=6379
POLLER_REDIS_PORT_6379_TCP_PROTO=tcp
POLLER_REDIS_PORT_6379_TCP=tcp://10.0.0.137:6379
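As an aside, those variable names follow the Docker-links-style convention Kubernetes uses: the service name is upper-cased and dashes become underscores. A quick sketch of the naming rule (illustrative only, not actual Kubernetes code):

```python
def service_env_prefix(service_name: str) -> str:
    # Kubernetes derives the env var prefix from the service name:
    # upper-case it and replace dashes with underscores.
    return service_name.upper().replace("-", "_")

print(service_env_prefix("poller-redis") + "_SERVICE_HOST")
# POLLER_REDIS_SERVICE_HOST
```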

However, if I try to connect to that port, I cannot. Doing something like:

nc -vz poller-redis 6379

fails.

What I have noticed is that I cannot access the Redis service via its ClusterIP but I can via the IP of the pod running Redis.

Any ideas, please?

-- cgf
kubernetes
minikube

3 Answers

5/11/2017

One kube-dns service running in kube-system is enough. Did you run nc -vz poller-redis 6379 from a pod in the same namespace as the Redis service? poller-redis is the short DNS name of the Redis service and only resolves within that namespace; from a different namespace you need the fully qualified name. Also, kube-dns is not available on the nodes themselves, so if you want to run nc or a Redis client directly on a node, use the clusterIP of the Redis service instead of the DNS name.
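To illustrate the namespace scoping: the short name only works from pods in the service's own namespace, while the fully qualified form works anywhere kube-dns is reachable. A sketch of how the names are built (assuming the default namespace and the default cluster.local domain):

```python
def service_dns_names(service: str, namespace: str = "default",
                      cluster_domain: str = "cluster.local") -> list:
    # From shortest to fully qualified; the short form only resolves
    # from pods in the same namespace as the service.
    return [
        service,
        "{}.{}".format(service, namespace),
        "{}.{}.svc.{}".format(service, namespace, cluster_domain),
    ]

print(service_dns_names("poller-redis")[-1])
# poller-redis.default.svc.cluster.local
```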

-- luke
Source: StackOverflow

5/3/2017

It could be related to kube-dns possibly not running.

From inside the poller pod, can you verify that poller-redis resolves?

Does the following work from inside the container?

nc -vz 10.0.0.137 6379

-- Ryan Gifford
Source: StackOverflow

5/27/2017

Figured this out in the end: it turns out I had misunderstood how service selectors work in Kubernetes.

I have posted that my service definition is:

apiVersion: v1
kind: Service
metadata:
  name: poller-redis
  labels:
    app: poller-redis
    tier: backend
    role: database
    target: poller
spec:
  selector:
    app: poller
    tier: backend
    role: service
  ports:
  - port: 6379
    targetPort: 6379

The problem is that spec.selector did not match the labels on the Redis pods: a Service's selector must match the labels in the target Deployment's pod template, not the Service's own metadata.labels (which are incidental here, although making them all the same is a common convention). My original selector (app: poller, tier: backend, role: service) actually matched the poller app pod instead, which is why the service's endpoint pointed at the wrong pod. Now my service definition looks like:

apiVersion: v1
kind: Service
metadata:
  name: poller-redis
  labels:
    app: poller-redis
    tier: backend
    role: database
    target: poller
spec:
  selector:
    app: poller-redis
    tier: backend
    role: database
    target: poller
  ports:
  - port: 6379
    targetPort: 6379
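The selection rule itself is just a subset match: a Service picks up every Pod whose labels contain all of the selector's key/value pairs. A small sketch of why the old selector missed the Redis pod (labels taken from the deployments above):

```python
def selects(selector: dict, pod_labels: dict) -> bool:
    # A Service targets a Pod when every selector key/value pair
    # is present in the Pod's labels (subset match).
    return all(pod_labels.get(k) == v for k, v in selector.items())

redis_pod = {"app": "poller-redis", "tier": "backend",
             "role": "database", "target": "poller"}

old_selector = {"app": "poller", "tier": "backend", "role": "service"}
new_selector = {"app": "poller-redis", "tier": "backend",
                "role": "database", "target": "poller"}

print(selects(old_selector, redis_pod))  # False - Redis pod not selected
print(selects(new_selector, redis_pod))  # True
```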

I also now connect via the service's DNS name (poller-redis) from my target pods, rather than trying to connect to localhost:6379.

-- cgf
Source: StackOverflow