I have two nodes, a master and a slave. They are working well except for one thing: pods on the master node are unable to resolve service names, while pods on the slave node work perfectly without issue.
I have deployed the dnstools Docker image on both nodes. Their /etc/resolv.conf files are exactly the same.
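For reference, a two-replica dnstools deployment can be created roughly like this (the infoblox/dnstools image and the exact flags are assumptions, not a record of what I originally ran; in v1.13, kubectl run still generates a Deployment):

# keep the containers alive so they can be exec'd into for DNS tests
kubectl run dnstools --image=infoblox/dnstools --replicas=2 --command -- sleep 86400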
root@pvgl50934100b:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
my-master Ready master 84d v1.13.2
pvgl50934100b Ready <none> 17h v1.13.2
root@pvgl50934100b:~# kubectl get pods -o wide | grep dnstools
dnstools-5c57c4d457-695hs 1/1 Running 16 16h 10.244.12.13 pvgl50934100b <none> <none>
dnstools-5c57c4d457-fvhts 1/1 Running 15 15h 10.244.0.125 my-master <none> <none>
root@pvgl50934100b:~# kubectl exec dnstools-5c57c4d457-695hs -- cat /etc/resolv.conf
nameserver 10.244.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
root@pvgl50934100b:~# kubectl exec dnstools-5c57c4d457-fvhts -- cat /etc/resolv.conf
nameserver 10.244.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
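A quick way to sanity-check the server side is to list the CoreDNS pods and inspect their logs (the k8s-app=kube-dns label and the pod names are taken from the Endpoints object below):

kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide
kubectl -n kube-system logs coredns-779bd65884-8m4j4
kubectl -n kube-system logs coredns-779bd65884-488s2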
My kube-dns Endpoints configuration:
- apiVersion: v1
  kind: Endpoints
  metadata:
    creationTimestamp: "2019-01-24T10:05:42Z"
    labels:
      addonmanager.kubernetes.io/mode: Reconcile
      k8s-app: kube-dns
      kubernetes.io/cluster-service: "true"
      kubernetes.io/name: CoreDNS
    name: kube-dns
    namespace: kube-system
    resourceVersion: "9725462"
    selfLink: /api/v1/namespaces/kube-system/endpoints/kube-dns
    uid: 9bc1c68d-1fbf-11e9-a68f-4ccc6a74038f
  subsets:
  - addresses:
    - ip: 10.244.0.124
      nodeName: my-master
      targetRef:
        kind: Pod
        name: coredns-779bd65884-8m4j4
        namespace: kube-system
        resourceVersion: "9725461"
        uid: f2fda560-5ab1-11e9-a68f-4ccc6a74038f
    - ip: 10.244.12.15
      nodeName: pvgl50934100b
      targetRef:
        kind: Pod
        name: coredns-779bd65884-488s2
        namespace: kube-system
        resourceVersion: "9725429"
        uid: eedbd034-5ab1-11e9-a68f-4ccc6a74038f
    ports:
    - name: dns
      port: 53
      protocol: UDP
    - name: dns-tcp
      port: 53
      protocol: TCP
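To confirm the service is actually backed by both pods, the same object can also be checked in summary form (this is just the standard kubectl view of the object above):

kubectl -n kube-system get endpoints kube-dns
kubectl -n kube-system describe endpoints kube-dns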
My service:
NAME       TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)                     AGE
kube-dns   NodePort   10.244.0.10   <none>        53:30765/UDP,53:30765/TCP   75d
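Because the service is exposed as a NodePort, it can also be queried from outside the pod network via any node's IP, which helps separate service routing from pod DNS configuration (<node-ip> is a placeholder, and the record name is just an example):

dig @<node-ip> -p 30765 kubernetes.default.svc.cluster.local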
The test results from dig:
root@pvgl50934100b:~# kubectl exec dnstools-5c57c4d457-695hs -- dig -t Mx kubernetes
; <<>> DiG 9.11.3 <<>> -t Mx kubernetes
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 12552
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1452
;; QUESTION SECTION:
;kubernetes. IN MX
;; AUTHORITY SECTION:
. 30 IN SOA a.root-servers.net. nstld.verisign-grs.com. 2019040903 1800 900 604800 86400
;; Query time: 137 msec
;; SERVER: 10.244.0.10#53(10.244.0.10)
;; WHEN: Wed Apr 10 02:02:17 UTC 2019
;; MSG SIZE rcvd: 114
root@pvgl50934100b:~# kubectl exec dnstools-5c57c4d457-fvhts -- dig -t Mx kubernetes
; <<>> DiG 9.11.3 <<>> -t Mx kubernetes
;; global options: +cmd
;; connection timed out; no servers could be reached
command terminated with exit code 9
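A useful next step is to bypass the service VIP and query each CoreDNS pod directly from the failing pod, using the pod IPs from the Endpoints object above; if one backend answers and the other times out, the problem is pod-to-pod networking between the nodes rather than DNS itself:

kubectl exec dnstools-5c57c4d457-fvhts -- dig @10.244.0.124 kubernetes.default.svc.cluster.local
kubectl exec dnstools-5c57c4d457-fvhts -- dig @10.244.12.15 kubernetes.default.svc.cluster.local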
I am not sure what other tools or places I can use to check the configuration. I expect the master node to get the same results as the slave.
Any suggestions or advice would be appreciated.
Update: I am not sure why, but after I re-initialized the master, everything works fine, as long as the two CoreDNS pods run on two different nodes. Both dnstools pods can now run dig and get correct responses.
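For anyone hitting the same thing: since this only works while the two CoreDNS replicas sit on different nodes, it may be worth enforcing that spread with pod anti-affinity on the CoreDNS deployment. A minimal sketch of the relevant snippet, assuming the standard k8s-app=kube-dns label (a suggestion, not necessarily what the stock manifests contain):

spec:
  template:
    spec:
      affinity:
        podAntiAffinity:
          # prefer scheduling the replicas on different nodes
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: k8s-app
                  operator: In
                  values:
                  - kube-dns
              topologyKey: kubernetes.io/hostname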