Kubernetes DNS - Let a pod contact itself via its service's DNS name

3/24/2019

Pods in a Kubernetes cluster can be reached by sending network requests to the DNS name of a service that they are a member of. Requests have to be sent to [service].[namespace].svc.cluster.local and are load-balanced across all members of that service.
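For example, assuming the message-service service in the message namespace that is deployed below, the name can be resolved from any pod whose image ships nslookup (a hypothetical check, not taken from the original logs):

nslookup message-service.message.svc.cluster.local

The answer section of the lookup should show the service's cluster IP, which is the address curl connects to in the logs below.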

This works fine for letting one pod reach another, but it fails if a pod tries to reach itself via a service that it's a member of. The request always results in a timeout.

Is this a bug in Kubernetes (in my case minikube v0.35.0) or is some additional configuration required?


Here's some debug info:

Let's contact the service from some other pod. This works fine:

daemon@auth-796d88df99-twj2t:/opt/docker$ curl -v -X POST -H "Accept: application/json" --data '{}' http://message-service.message.svc.cluster.local:9000/message/get-messages
Note: Unnecessary use of -X or --request, POST is already inferred.
*   Trying 10.107.209.9...
* TCP_NODELAY set
* Connected to message-service.message.svc.cluster.local (10.107.209.9) port 9000 (#0)
> POST /message/get-messages HTTP/1.1
> Host: message-service.message.svc.cluster.local:9000
> User-Agent: curl/7.52.1
> Accept: application/json
> Content-Length: 2
> Content-Type: application/x-www-form-urlencoded
> 
* upload completely sent off: 2 out of 2 bytes
< HTTP/1.1 401 Unauthorized
< Referrer-Policy: origin-when-cross-origin, strict-origin-when-cross-origin
< X-Frame-Options: DENY
< X-XSS-Protection: 1; mode=block
< X-Content-Type-Options: nosniff
< Content-Security-Policy: default-src 'self'
< X-Permitted-Cross-Domain-Policies: master-only
< Date: Wed, 20 Mar 2019 04:36:51 GMT
< Content-Type: text/plain; charset=UTF-8
< Content-Length: 12
< 
* Curl_http_done: called premature == 0
* Connection #0 to host message-service.message.svc.cluster.local left intact
Unauthorized

Now we try to let the pod contact the service that it's a member of:

daemon@message-58466bbc45-lch9j:/opt/docker$ curl -v -X POST -H "Accept: application/json" --data '{}' http://message-service.message.svc.cluster.local:9000/message/get-messages
Note: Unnecessary use of -X or --request, POST is already inferred.
*   Trying 10.107.209.9...
* TCP_NODELAY set
* connect to 10.107.209.9 port 9000 failed: Connection timed out
* Failed to connect to message-service.message.svc.cluster.local port 9000: Connection timed out
* Closing connection 0
curl: (7) Failed to connect to message-service.message.svc.cluster.local port 9000: Connection timed out

If I've read the curl debug log correctly, the DNS name resolves to 10.107.209.9, which is the service's cluster IP. The service can be reached from any other pod via that IP, but the pod behind it cannot use it to reach itself.
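To confirm this, the service and its endpoints can be compared from the host (hypothetical kubectl commands, assuming kubectl is pointed at the minikube cluster):

kubectl -n message get service message-service
kubectl -n message get endpoints message-service

The first command should list 10.107.209.9 as the CLUSTER-IP, while the endpoints should show the pod IP 172.17.0.9 from the interface dump below.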

Here are the network interfaces of the pod that tries to reach itself:

daemon@message-58466bbc45-lch9j:/opt/docker$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/sit 0.0.0.0 brd 0.0.0.0
296: eth0@if297: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:00:09 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.9/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

Here is the kubernetes file deployed to minikube:

apiVersion: v1
kind: Namespace
metadata:
  name: message

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: message
  name: message
  namespace: message
spec:
  replicas: 1
  selector:
    matchLabels:
      app: message
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: message
    spec:
      containers:
        - name: message
          image: message-impl:0.1.0-SNAPSHOT
          imagePullPolicy: Never
          ports:
            - name: http
              containerPort: 9000
              protocol: TCP
          env:
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: KAFKA_KUBERNETES_NAMESPACE
              value: kafka
            - name: KAFKA_KUBERNETES_SERVICE
              value: kafka-svc
            - name: CASSANDRA_KUBERNETES_NAMESPACE
              value: cassandra
            - name: CASSANDRA_KUBERNETES_SERVICE
              value: cassandra
            - name: CASSANDRA_KEYSPACE
              value: service_message
---

# Service for discovery
apiVersion: v1
kind: Service
metadata:
  name: message-service
  namespace: message
spec:
  ports:
    - port: 9000
      protocol: TCP
  selector:
    app: message
---

# Expose this service to the api gateway
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: message
  namespace: message
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - host: api.fload.cf
      http:
        paths:
          - path: /message
            backend:
              serviceName: message-service
              servicePort: 9000
-- Aki
Tags: dns, kube-proxy, kubernetes

1 Answer

5/13/2019

This is a known minikube issue: hairpin mode is not enabled on the VM's bridge, so traffic that is DNAT'ed from the service's cluster IP back to the originating pod is dropped when it tries to re-enter through the same bridge port. The discussion around the issue contains the following workarounds:

1) Try:

minikube ssh
sudo ip link set docker0 promisc on 
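Promiscuous mode on docker0 works around the missing hairpin mode: a packet that leaves the pod's veth and is DNAT'ed back to the same pod is then accepted when it re-enters through the same bridge port. Afterwards the same request from inside the pod should succeed (same command as in the question, a hypothetical check):

curl -v -X POST -H "Accept: application/json" --data '{}' http://message-service.message.svc.cluster.local:9000/message/get-messages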

2) Use a headless service by setting clusterIP: None on the service:
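Applied to the Service manifest from the question, this is a minimal sketch (only the clusterIP: None line is added, everything else is unchanged). DNS then resolves the service name directly to the pod IPs instead of a virtual cluster IP, so kube-proxy and hairpin NAT are no longer involved:

apiVersion: v1
kind: Service
metadata:
  name: message-service
  namespace: message
spec:
  clusterIP: None
  ports:
    - port: 9000
      protocol: TCP
  selector:
    app: message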

-- BartoszKP
Source: StackOverflow