Kubernetes does not resolve Pod from a different namespace without FQDN

3/8/2020

Let's say we have two namespaces namespace-a and namespace-b.

A Pod pod-name runs in a Deployment and is exposed internally as a Service service-name via ClusterIP in namespace-a. The Kubernetes 1.17 cluster has a cluster domain cluster-domain, which is not the default cluster.local.

Another Pod batman on namespace-b attempts to resolve the IP address of pod-name.

  1. The following works from batman: ping/telnet pod-name.service-name.namespace-a.svc.cluster-domain
  2. The following does not work from batman: ping/telnet pod-name.service-name.namespace-a.svc

However, if batman is running on namespace-a:

  3. The following does work from batman: ping/telnet pod-name.service-name.namespace-a.svc
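For reference, the same three cases can be reproduced from a shell inside batman using nslookup instead of ping/telnet (all names are the ones defined above; this is a sketch, not cluster output):

```shell
# Run from a shell inside the batman Pod (namespace-b).
# 1. Fully qualified name - resolves:
nslookup pod-name.service-name.namespace-a.svc.cluster-domain
# 2. Without the cluster domain suffix - fails from namespace-b:
nslookup pod-name.service-name.namespace-a.svc
```

Running the same two lookups from a Pod in namespace-a reproduces case 3.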

Is this related to the DNS configuration? Is this how it is supposed to work? I could not find any material specifically about this issue.

-- Dyin
dns
kubernetes
pod
service

2 Answers

3/20/2020

As far as I'm aware, this is how it's supposed to work.

I can recommend reading Debugging DNS Resolution, where you can find the section Are DNS queries being received/processed?

You can verify if queries are being received by CoreDNS by adding the log plugin to the CoreDNS configuration (aka Corefile). The CoreDNS Corefile is held in a ConfigMap named coredns. To edit it, use the command …

kubectl -n kube-system edit configmap coredns

Then add log in the Corefile section per the example below.

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        log
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          upstream
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }

After saving the changes, it may take up to a minute or two for Kubernetes to propagate them to the CoreDNS Pods.

Next, make some queries and view the logs per the sections above in that document. If the CoreDNS Pods are receiving the queries, you should see them in the logs.
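One quick way to do that, assuming the default k8s-app=kube-dns label that CoreDNS Pods carry on most installs (Pod and Service names are the ones from the question):

```shell
# Trigger a lookup from the client Pod so it shows up in the logs:
kubectl -n namespace-b exec batman -- nslookup service-name.namespace-a.svc.cluster-domain

# Tail the CoreDNS logs; with the log plugin enabled, each query is printed:
kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20
```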

You can also check DNS for Services and Pods and Customizing DNS Service.

-- Crou
Source: StackOverflow

3/8/2020

If you want to use cluster.domain instead of the default cluster.local, you need to configure the cluster domain in the kubelet with the flag --cluster-domain=cluster.domain
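On clusters where the kubelet is driven by a config file rather than flags, the equivalent setting is clusterDomain; a minimal sketch of the relevant fragment:

```yaml
# KubeletConfiguration fragment - clusterDomain is the config-file
# equivalent of the --cluster-domain kubelet flag.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
clusterDomain: cluster.domain
```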

You also need to modify the CoreDNS Corefile in its ConfigMap to change the default domain:

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.domain in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }

To verify, check the /etc/resolv.conf file inside a Pod:

search default.svc.cluster.domain svc.cluster.domain cluster.domain google.internal c.gce_project_id.internal
nameserver 10.0.0.10
options ndots:5
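The search line above is what makes short names namespace-relative: with ndots:5, any name containing fewer than five dots is tried with each search suffix appended before being tried as-is. A minimal sketch of that expansion (plain string manipulation, no cluster required; the short name is the one from the question, the suffixes are the ones from the resolv.conf above):

```shell
# Simulate glibc search-list expansion for a short name.
# ndots:5 means a name with fewer than 5 dots is tried with each
# search suffix appended first, and only then as an absolute name.
name="pod-name.service-name.namespace-a.svc"   # 3 dots, so the search list applies
for suffix in default.svc.cluster.domain svc.cluster.domain cluster.domain; do
  echo "try: ${name}.${suffix}"
done
echo "try: ${name}"
```

Note that the search suffixes come from the Pod's own namespace, which is part of why the same short name can behave differently when queried from namespace-a and namespace-b.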
-- Arghya Sadhu
Source: StackOverflow