kube-dns Failed to list *v1.Endpoints getsockopt: connection refused

4/23/2018

I have a Kubernetes cluster (v1.10) using flannel as the CNI provider (not sure if that's relevant, but it might be). When I try to apply kube-dns, the pod goes into CrashLoopBackOff, and the logs for the kubedns container show, repeatedly:

I0423 17:46:47.045712       1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
I0423 17:46:47.545729       1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
I0423 17:46:48.045723       1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
I0423 17:46:48.545749       1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
E0423 17:46:49.019286       1 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:147: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=0: dial tcp 10.96.0.1:443: getsockopt: connection refused
E0423 17:46:49.019325       1 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:150: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=0: dial tcp 10.96.0.1:443: getsockopt: connection refused
I0423 17:46:49.045731       1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
F0423 17:46:49.545707       1 dns.go:167] Timeout waiting for initialization
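
For what it's worth, those lines come from the kubedns container itself; the pod has three containers (kubedns, dnsmasq and sidecar in the stock 1.14.x manifest), so something like the following should reproduce them:

kubectl -n kube-system logs kube-dns-564f9d98-lt9js -c kubedns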

Nothing in my kube-dns manifest refers to port 443, and kube-apiserver is configured to listen on 6443. What is kube-dns trying to connect to at 10.96.0.1:443, and why is that connection being refused?
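
As far as I understand it, 10.96.0.1:443 would normally be the built-in kubernetes Service in the default namespace (the first IP of the service CIDR), which kube-proxy is supposed to forward to the apiserver's secure port. Assuming the cluster is in that standard shape, something like this should show what it actually maps to:

kubectl get svc kubernetes
kubectl get endpoints kubernetes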

I also don't know whether it has anything to do with the kube-dns pod having an IP of 10.88.0.3:

kubectl -n kube-system -o wide get pods
NAME                      READY     STATUS             RESTARTS   AGE       IP            NODE
kube-dns-564f9d98-lt9js   2/3       CrashLoopBackOff   13         18m       10.88.0.3     worker1
kube-flannel-ds-5bqm6     1/1       Running            0          35m       10.240.0.12   controller2
kube-flannel-ds-djmld     1/1       Running            0          35m       10.240.0.11   controller1
kube-flannel-ds-nbfhp     1/1       Running            0          35m       10.240.0.23   worker3
kube-flannel-ds-prxdr     1/1       Running            0          35m       10.240.0.22   worker2
kube-flannel-ds-x9cdq     1/1       Running            0          35m       10.240.0.21   worker1
kube-flannel-ds-zjbgb     1/1       Running            0          35m       10.240.0.13   controller3
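
In case the odd pod IP matters, this is roughly how I would check which CNI configuration the node actually applied to that pod (the path is the usual default CNI config directory, so this is an assumption about my setup):

ssh worker1
ls /etc/cni/net.d/
cat /etc/cni/net.d/*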

Again, where is this address coming from? It's not something I have configured, and it doesn't sit within either my service network or pod network CIDR range:

kubernetes_dns_domain: kubernetes.local
kubernetes_dns_ip: "{{ kubernetes_cluster_subnet }}.10"
kubernetes_cluster_subnet: 10.96.0
kubernetes_pod_network_cidr: 10.244.0.0/16
kubernetes_service_ip: "{{ kubernetes_cluster_subnet }}.1"
kubernetes_service_ip_range: "{{ kubernetes_cluster_subnet }}.0/24"
kubernetes_service_node_port_range: 30000-32767
kubernetes_secure_port: 6443
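
In case it helps to map those variable names onto the standard component flags, they are intended to correspond roughly to the following (paraphrasing my templates rather than pasting the rendered units):

# kube-apiserver
--secure-port=6443
--service-cluster-ip-range=10.96.0.0/24
--service-node-port-range=30000-32767
# kube-controller-manager / kube-proxy
--cluster-cidr=10.244.0.0/16
# kubelet
--cluster-dns=10.96.0.10
--cluster-domain=kubernetes.local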

I'm thoroughly confused and would be grateful for any explanation of what is going on.

kube_dns_version: 1.14.10
flannel_version: v0.10.0

-- amb85
kube-dns
kubernetes

0 Answers