CoreDNS in CrashLoopBackOff state with Calico network

7/27/2020

I have an Ubuntu 16.04 VM running in VirtualBox. I installed Kubernetes on it as a single node using kubeadm.

But the coredns pods are in CrashLoopBackOff state.

All other pods are running.

Single interface (enp0s3), bridged network.

Applied Calico using kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

Output of kubectl describe pod:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  41m                  default-scheduler  Successfully assigned kube-system/coredns-66bff467f8-dxzq7 to kube
  Normal   Pulled     39m (x5 over 41m)    kubelet, kube      Container image "k8s.gcr.io/coredns:1.6.7" already present on machine
  Normal   Created    39m (x5 over 41m)    kubelet, kube      Created container coredns
  Normal   Started    39m (x5 over 41m)    kubelet, kube      Started container coredns
  Warning  BackOff    87s (x194 over 41m)  kubelet, kube      Back-off restarting failed container
-- Nitish Goel
coredns
kubernetes
project-calico

2 Answers

2/14/2021

Commented out the line below in /etc/resolv.conf (host machine) and deleted the coredns pods in the kube-system namespace. The new pods came up in Running state :)

  • #nameserver 127.0.1.1
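A minimal sketch of the steps above. The sed pattern and the k8s-app=kube-dns pod label are assumptions (the label is the default one kubeadm puts on CoreDNS); the edit is demonstrated on a local copy of the file — run it against the real /etc/resolv.conf with sudo on the host.

```shell
# Demonstrate the edit on a local sample of resolv.conf
# (on the real host: sudo sed -i ... /etc/resolv.conf).
cat > resolv.conf.sample <<'EOF'
nameserver 127.0.1.1
search localdomain
EOF

# Comment out the loopback nameserver that makes CoreDNS
# forward queries back to itself.
sed -i 's/^nameserver 127\.0\.1\.1/#nameserver 127.0.1.1/' resolv.conf.sample
cat resolv.conf.sample

# Then delete the CoreDNS pods so they restart with the fixed config:
#   kubectl -n kube-system delete pod -l k8s-app=kube-dns
```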
-- Dheeraj Kumar
Source: StackOverflow

7/30/2020

I did a kubectl logs <coredns-pod>, found the error below, and looked at the link it mentions. As per the suggestion there, I added resolv.conf = /etc/resolv.conf at the end of /etc/kubernetes/kubelet/conf.yaml and recreated the pod.

kubectl logs coredns-66bff467f8-dxzq7 -n kube-system
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b
[FATAL] plugin/loop: Loop (127.0.0.1:34536 -> :53) detected for zone ".", see coredns.io/plugins/loop#troubleshooting. Query: "HINFO 8322382447049308542.5528484581440387393."
root@kube:/home/kube#
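For reference, the setting being changed corresponds to the kubelet's resolvConf option, which tells the kubelet which resolv.conf file to hand to pods (including CoreDNS). In a KubeletConfiguration file it looks like the hypothetical fragment below; the exact file layout on your node is an assumption, so adapt it to where your kubelet reads its config.

```yaml
# Hypothetical KubeletConfiguration fragment: resolvConf must point at a
# file with no loopback nameserver (e.g. 127.0.1.1), otherwise the CoreDNS
# loop plugin detects a forwarding loop and the container exits.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
resolvConf: /etc/resolv.conf
```

After changing it, restart the kubelet (sudo systemctl restart kubelet) and delete the CoreDNS pods so they are recreated with the new setting.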
-- Nitish Goel
Source: StackOverflow