I am experimenting with minikube for learning purposes, on a CentOS 7 Linux machine with Docker 18.06.0-ce installed.
I installed minikube using
minikube start --vm-driver=none
I deployed a few applications, only to discover that they couldn't talk to each other using their hostnames.
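For context, this is the kind of service-name lookup I expected to work out of the box (the "web" deployment/service and the "client" pod here are just hypothetical examples):
kubectl create deployment web --image=nginx              # hypothetical example app
kubectl expose deployment web --port=80                  # creates a Service named "web"
kubectl run client --image=busybox --restart=Never -- wget -qO- http://web
kubectl logs client                                      # with working DNS, prints the nginx welcome page
With broken DNS, the wget fails with a "bad address" error instead.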
I deleted minikube using
minikube delete
I re-installed minikube using
minikube start --vm-driver=none
I then followed the instructions under "Debugging DNS Resolution" (https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/), only to find out that the DNS system was not functional.
More precisely, I ran:
1.
kubectl create -f https://k8s.io/examples/admin/dns/busybox.yaml
2.
# kubectl exec -ti busybox -- nslookup kubernetes.default
Server: 10.96.0.10
Address 1: 10.96.0.10
nslookup: can't resolve 'kubernetes.default'
command terminated with exit code 1
3.
# kubectl exec busybox cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local contabo.host
options ndots:5
4.
# kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
NAME                        READY   STATUS    RESTARTS   AGE
coredns-c4cffd6dc-dqtbt     1/1     Running   1          4m
kube-dns-86f4d74b45-tr8vc   2/3     Running   5          4m
Surprisingly, both kube-dns and coredns are running. Should this be a concern?
I have looked everywhere for a solution without success; step 2 always returns an error. I simply cannot accept that something so simple has become such a huge trouble for me. Please assist.
I managed to resolve the problem by re-installing Minikube after deleting all state files under /etc and /var/lib, but forgot to update this issue.
This can now be closed.
After deleting /etc/kubernetes, /var/lib/kubelet, and /var/lib/kubeadm.yaml and restarting minikube, I can now successfully reproduce the DNS resolution debugging steps (https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/).
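For reference, the cleanup sequence looked roughly like this (run as root, since --vm-driver=none runs the cluster directly on the host; adapt the paths to your setup):
minikube delete
rm -rf /etc/kubernetes /var/lib/kubelet /var/lib/kubeadm.yaml
minikube start --vm-driver=none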
I bet some stale settings had persisted across minikube start/stop iterations, leading to inconsistent configuration.
It is also worth mentioning that DNS resolution was lost after restarting iptables.
I suspect this is related to iptables rules: some rule is being put in place by minikube, and as it gets lost as part of the iptables restart, the problem re-appears.
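One rough way to check this theory (assuming the iptables-services package provides the iptables service on CentOS 7) is to snapshot the kube-proxy rules before and after the restart and compare:
iptables-save | grep KUBE- > /tmp/kube-rules.before
systemctl restart iptables
iptables-save | grep KUBE- > /tmp/kube-rules.after
diff /tmp/kube-rules.before /tmp/kube-rules.after
If the diff is non-empty, the restart is indeed wiping rules Kubernetes depends on.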
Please note the output of the kube-dns pod below; it has only 2 of 3 containers running.
kube-dns-86f4d74b45-tr8vc 2/3 Running 5 4m
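To see which of the three containers is failing and why, describing the pod is a good start (pod name taken from the output above); the Events section at the bottom usually shows crash loops or failed probes:
kubectl describe pod --namespace=kube-system kube-dns-86f4d74b45-tr8vc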
The last time I encountered this was when Docker's default FORWARD policy was DROP. Changing it to ACCEPT using the command below fixed the problem for me.
iptables -P FORWARD ACCEPT
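Note that a policy set this way does not survive a reboot or an iptables service restart, which matches the behaviour described above. On CentOS 7, assuming the iptables-services package is installed, you could persist it with something like:
iptables -P FORWARD ACCEPT
service iptables save   # writes the current rules to /etc/sysconfig/iptables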
It might be other things too; please check the pod logs.
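For example, the DNS pod logs can be pulled with the same label selector used in step 4, or per container for the kube-dns pod (the container name here is an assumption; kube-dns pods typically run kubedns, dnsmasq, and sidecar containers):
kubectl logs --namespace=kube-system -l k8s-app=kube-dns
kubectl logs --namespace=kube-system kube-dns-86f4d74b45-tr8vc -c kubedns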
Mine is working with coredns enabled and kube-dns disabled.
C02W84XMHTD5:ucp iahmad$ minikube addons list
- addon-manager: enabled
- coredns: enabled
- dashboard: enabled
- default-storageclass: enabled
- efk: disabled
- freshpod: disabled
- heapster: disabled
- ingress: disabled
- kube-dns: disabled
- metrics-server: disabled
- nvidia-driver-installer: disabled
- nvidia-gpu-device-plugin: disabled
- registry: disabled
- registry-creds: disabled
- storage-provisioner: enabled
You may disable the kube-dns addon:
minikube addons disable kube-dns
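After disabling the addon, you can confirm that only the coredns pod remains and re-run the lookup from step 2 above:
kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
kubectl exec -ti busybox -- nslookup kubernetes.default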