"nslookup: read: Connection refused" from inside of a pod in Kubernetes (K8S) cluster (DNS problem)

4/10/2020

Problem

I have a custom installation of a k8s cluster with 1 master and 1 node on AWS EC2, based on CentOS 7. It uses CoreDNS (the pods are running fine, with no errors in the logs). Inside a pod on the node, calling e.g. nslookup google.com outputs:

nslookup: write to '10.96.0.10': Connection refused
;; connection timed out; no servers could be reached
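10.96.0.10 is typically the ClusterIP of the kube-dns Service that fronts the CoreDNS pods, so a natural first sanity check (not shown in the original post) is to confirm the Service exists and has endpoints:

# Should list the kube-dns Service with ClusterIP 10.96.0.10
# and an Endpoints object pointing at the CoreDNS pod IPs
kubectl get svc,endpoints kube-dns -n kube-system

If the Endpoints list were empty, kube-proxy would have nothing to forward DNS traffic to, which by itself could explain a refused connection.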

For example, pinging from inside the pod with ping 8.8.8.8 works fine:

PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=50 time=1.330 ms
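Since raw IP connectivity from the pod works, a quick way to confirm the failure is specific to the cluster DNS Service (an extra check, not from the original post) is to query an external resolver directly from the pod:

# nslookup takes the server as an optional second argument; if this
# succeeds, outbound UDP/53 is fine and only 10.96.0.10 is unreachable
nslookup google.com 8.8.8.8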

/etc/resolv.conf inside the pod looks like this:

nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local ec2.internal
options ndots:5
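For reference, with ndots:5 a name like google.com (fewer than 5 dots) is tried against each search suffix before being sent upstream as-is, so the resolver attempts roughly this sequence, all of it via 10.96.0.10:

google.com.default.svc.cluster.local
google.com.svc.cluster.local
google.com.cluster.local
google.com.ec2.internal
google.com

This is why every lookup in a pod, even for external names, depends on the cluster DNS Service being reachable.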

The same lookup works fine from the node itself with nslookup google.com:

Server:         172.31.0.2
Address:        172.31.0.2#53

Non-authoritative answer:
Name:   google.com
Address: 172.217.15.110
Name:   google.com
Address: 2607:f8b0:4004:801::200e
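The node resolves via the VPC resolver (172.31.0.2) directly, so this only proves upstream DNS works. A useful intermediate step (again, not from the original post) is to query the cluster DNS IP from the node:

# Point nslookup at the kube-dns ClusterIP explicitly; from the node
# this traverses the kube-proxy rules and the overlay network
nslookup google.com 10.96.0.10

If this fails from the node too, the problem sits in kube-proxy or the pod network rather than in CoreDNS itself.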

The kubelet config, kubectl get configmap kubelet-config-1.17 -n kube-system -o yaml, returns:

data:
  kubelet: |
    apiVersion: kubelet.config.k8s.io/v1beta1
    authentication:
      anonymous:
        enabled: false
      webhook:
        cacheTTL: 0s
        enabled: true
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt
    authorization:
      mode: Webhook
      webhook:
        cacheAuthorizedTTL: 0s
        cacheUnauthorizedTTL: 0s
    clusterDNS:
    - 10.96.0.10
    clusterDomain: cluster.local
    cpuManagerReconcilePeriod: 0s
    evictionPressureTransitionPeriod: 0s
    fileCheckFrequency: 0s
    healthzBindAddress: 127.0.0.1
    healthzPort: 10248
    httpCheckFrequency: 0s
    imageMinimumGCAge: 0s
    kind: KubeletConfiguration
    nodeStatusReportFrequency: 0s
    nodeStatusUpdateFrequency: 0s
    rotateCertificates: true
    runtimeRequestTimeout: 0s
    staticPodPath: /etc/kubernetes/manifests
    streamingConnectionIdleTimeout: 0s
    syncFrequency: 0s
    volumeStatsAggPeriod: 0s
kind: ConfigMap
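The clusterDNS entry (10.96.0.10) is what the kubelet writes into each pod's /etc/resolv.conf, and it should match the ClusterIP of the kube-dns Service. A quick consistency check, assuming the standard kubeadm Service name:

# Prints the ClusterIP of the kube-dns Service; expected output: 10.96.0.10
kubectl get svc kube-dns -n kube-system -o jsonpath='{.spec.clusterIP}'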

Pods in the kube-system namespace, kubectl get pods -n kube-system, look like this:

coredns-6955765f44-qdbgx                                1/1     Running   6          11d
coredns-6955765f44-r4v7n                                1/1     Running   6          11d
etcd-ip-172-31-42-121.ec2.internal                      1/1     Running   7          11d
kube-apiserver-ip-172-31-42-121.ec2.internal            1/1     Running   7          11d
kube-controller-manager-ip-172-31-42-121.ec2.internal   1/1     Running   6          11d
kube-proxy-lrpd9                                        1/1     Running   6          11d
kube-proxy-z55cv                                        1/1     Running   6          11d
kube-scheduler-ip-172-31-42-121.ec2.internal            1/1     Running   6          11d
weave-net-bdn5n                                         2/2     Running   0          39h
weave-net-z7mks                                         2/2     Running   5          39h

UPDATE

Running ip route inside the pod returns:

default via 10.32.0.1 dev eth0 
10.32.0.0/12 dev eth0 scope link  src 10.32.0.16 

From the master:

default via 172.31.32.1 dev eth0 
10.32.0.0/12 dev weave proto kernel scope link src 10.32.0.1 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 
172.31.32.0/20 dev eth0 proto kernel scope link src 172.31.42.121 

From the node:

default via 172.31.32.1 dev eth0 
10.32.0.0/12 dev weave proto kernel scope link src 10.32.0.1 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 
172.31.32.0/20 dev eth0 proto kernel scope link src 172.31.46.62 
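One detail worth noticing here: both the master and the node report src 10.32.0.1 on the weave interface, i.e. both hosts claim the same address on the overlay. That usually means the Weave peers never formed a mesh and each host is allocating from the pod range independently. Assuming the pod names from the listing above, the peer connections can be inspected with Weave's own status command:

# Runs weave's local status inside the weave-net pod on the master;
# healthy output lists an established connection to the other host
kubectl exec -n kube-system weave-net-bdn5n -c weave -- /home/weave/weave --local status connections

If the connection shows as failed or retrying (for example because the peers were given wrong or unreachable IPs), pod-to-pod traffic across hosts, including DNS queries to the CoreDNS pods, never arrives.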

The CoreDNS configmap, kubectl -n kube-system get configmap coredns -o yaml, is:

apiVersion: v1
data:
  Corefile: |
    .:53 {
        log
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
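The forward . /etc/resolv.conf line means CoreDNS hands anything outside cluster.local to the node's resolver (172.31.0.2), which was already shown to work. So the remaining suspect is the path from the pod to CoreDNS. One way to bypass the Service and test the CoreDNS pods directly (the pod IP below is a placeholder):

# Find the CoreDNS pod IPs (kubeadm labels them k8s-app=kube-dns)
kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide

# From inside the failing pod, query a CoreDNS pod IP directly
nslookup google.com <coredns-pod-ip>

If the pods answer directly but the 10.96.0.10 Service IP does not, the fault lies with kube-proxy; if neither answers, it points at the overlay network itself.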

So why doesn't nslookup google.com work inside a pod?

-- Anton Abyzov
dns
kubernetes
nslookup
weave

1 Answer

4/13/2020

The installation of the k8s cluster was wrong: the Ansible inventory should contain the correct private IPs of the master and the nodes on the EC2 VMs.

dev-kubernetes-master ansible_host=34.233.207.xxx private_ip=172.31.37.xx
dev-kubernetes-slave ansible_host=52.6.10.xxx private_ip=172.31.42.xxx

I reinstalled the cluster with the correct private IPs specified (before, there was no private_ip at all) and the problem is gone.
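The private IP matters because it is the address the cluster components advertise to each other. Assuming the cluster is kubeadm-based (the kubelet-config-1.17 ConfigMap suggests it), the install scripts would pass it along the lines of this hypothetical example:

# Hypothetical: advertise the API server on the EC2 private address
kubeadm init --apiserver-advertise-address=172.31.37.xx

After the reinstall, DNS can be re-verified from a throwaway pod (busybox 1.28 is often used here because newer busybox images ship a quirky nslookup):

kubectl run -it --rm dnstest --image=busybox:1.28 --restart=Never -- nslookup google.com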

-- Anton Abyzov
Source: StackOverflow