I want to understand how /etc/resolv.conf is configured with the DNS server info for each pod of a ReplicaSet. I upgraded the cluster from 1.13 to 1.14, which somehow changed the IP of the kube-dns service, and the existing ReplicaSet is still injecting the old IP into the /etc/resolv.conf of its new pods, breaking service discovery for those particular pods.
Even without a cluster upgrade, if one simply redeploys kube-dns and its IP changes, how do existing ReplicaSets or StatefulSets behave when they scale up and add more pods? On my side, they currently inject the old info.
New deployments work fine.
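For reference, this is how I compare what a new pod actually received against the current DNS Service; POD_NAME is a placeholder for one of the affected pods, and kube-dns is the usual Service name in kube-system (adjust if your cluster differs):

kubectl exec POD_NAME -- cat /etc/resolv.conf            # nameserver line still shows the old IP
kubectl get svc kube-dns -n kube-system -o jsonpath='{.spec.clusterIP}'   # current DNS Service IP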
The pods will always inherit the specification from the owner object, in this case from the ReplicaSet.
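A quick way to confirm that ownership chain, assuming YOUR_POD is one of the affected pods:

kubectl get pod YOUR_POD -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'

This should print something like ReplicaSet/my-app-5bf87d5b4, the ReplicaSet whose template the pod was stamped from.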
Each ReplicaSet has a pod template that carries the configuration written into resolv.conf, and that template won't change unless a new Deployment version is rolled out (ReplicaSets are dependents of the Deployment object):
kubectl get rs YOUR_REPLICASET -o yaml | grep pod-template-hash -m 1   # shows the template revision hash this RS carries
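The pod-template-hash value identifies the template revision; after a new rollout the Deployment creates a fresh ReplicaSet with a different hash, which is how you can tell the stale ReplicaSet apart from the current one.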
In this case you can either roll out a new Deployment version or redeploy the kube-dns/CoreDNS YAML definition, keeping the original clusterIP:
spec:
  clusterIP: 10.11.0.12   # Old service address
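For the first option, a minimal sketch of forcing a rollout; YOUR_DEPLOYMENT is a placeholder, and note that kubectl rollout restart only exists in kubectl 1.15+, so on older clients you can bump a pod-template annotation to the same effect:

kubectl rollout restart deployment YOUR_DEPLOYMENT

# On kubectl < 1.15, patching the pod template triggers an equivalent rollout:
kubectl patch deployment YOUR_DEPLOYMENT -p \
  '{"spec":{"template":{"metadata":{"annotations":{"redeployed-at":"'"$(date +%s)"'"}}}}}'

Either way, the Deployment creates a new ReplicaSet whose pods pick up the current kube-dns clusterIP in their /etc/resolv.conf.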