Following the Kubernetes DNS troubleshooting instructions, service names resolve from pods on the master node, but not from a pod on the slave node. I have a 2-node kubeadm cluster set up on VirtualBox CentOS VMs with flannel.
From the master:
kubectl exec -ti etcd-master -n kube-system -- nslookup kubernetes.default
Server: 192.168.1.1
Address 1: 192.168.1.1
Name: kubernetes.default
Address 1: 92.242.140.21 unallocated.barefruit.co.uk
From the slave:
kubectl exec -ti busybox -- nslookup kubernetes.default
Server: 10.96.0.10
Address 1: 10.96.0.10
nslookup: can't resolve 'kubernetes.default'
command terminated with exit code 1
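In case it's useful, here's roughly what the official debugging steps suggest checking next: whether the cluster DNS pods, service, and endpoints are healthy. A sketch assuming kubeadm defaults, where cluster DNS runs in kube-system under the k8s-app=kube-dns label:

# DNS pods should be Running and Ready
kubectl get pods --namespace=kube-system -l k8s-app=kube-dns

# the kube-dns service should exist with ClusterIP 10.96.0.10
kubectl get svc --namespace=kube-system kube-dns

# endpoints should list the DNS pod IPs; an empty list means the DNS pods aren't ready or reachable
kubectl get endpoints kube-dns --namespace=kube-system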
This issue is mentioned in a comment by @P.J.Meisch, but there was no resolution since it wasn't the actual question.
The /etc/resolv.conf on each of the nodes (VMs) just has my host machine IP as the nameserver. Is this wrong?
# Generated by NetworkManager
search fios-router.home
nameserver 192.168.1.1
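For comparison, a pod using the default ClusterFirst DNS policy should get a resolv.conf pointing at the cluster DNS service (10.96.0.10 here) rather than at the node's nameserver. A quick check, with the expected output sketched assuming the default cluster.local domain:

kubectl exec -ti busybox -- cat /etc/resolv.conf

# expected (roughly):
# nameserver 10.96.0.10
# search default.svc.cluster.local svc.cluster.local cluster.local fios-router.home
# options ndots:5

Since the slave output above already shows Server: 10.96.0.10, the pod is asking the right nameserver; the question is why queries to it fail.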
Is flannel a bad choice for this setup?
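If flannel itself is suspect, a sanity check on its DaemonSet pods and on cross-node pod traffic might look like this (a sketch: app=flannel is the pod label from the upstream kube-flannel.yml manifest, and <dns-pod-ip> is a placeholder for one of the DNS pod IPs from the endpoints check above):

# flannel should have one Running pod per node
kubectl get pods --namespace=kube-system -l app=flannel -o wide

# from the busybox pod on the slave, try to reach a DNS pod on the master directly
kubectl exec -ti busybox -- ping -c 2 <dns-pod-ip>

If that ping fails, the problem is the overlay network between the nodes rather than DNS itself.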