I tried to set up the Kubernetes DNS add-on using this Ansible repo: https://github.com/kubernetes/contrib/tree/master/ansible/roles/kubernetes-addons
After running the playbook, I couldn't find either the DNS pod or the service. After some reading (https://github.com/kubernetes/contrib/issues/886#issuecomment-216741889), it seems I needed to apply the rc.yml and the svc.yml manually, so that's what I did.
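For reference, applying the two manifests manually looked roughly like this (the file names are the ones from the linked issue; the kube-system namespace is an assumption based on where the addon normally lives):

```shell
# Apply the DNS replication controller and service manually
# (paths assume you are in the directory containing the two manifests)
kubectl create -f rc.yml --namespace=kube-system
kubectl create -f svc.yml --namespace=kube-system

# Verify that both objects were created
kubectl get rc,svc --namespace=kube-system
```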
Unfortunately, the DNS pod and service stay up only for a while and then suddenly terminate.
I captured some logs before the pod went down:
# kubectl logs kube-dns-v8-ujfqn --namespace=kube-system -c etcd
2016/11/21 13:05:04 etcd: listening for peers on http://localhost:2380
2016/11/21 13:05:04 etcd: listening for peers on http://localhost:7001
2016/11/21 13:05:04 etcd: listening for client requests on http://127.0.0.1:2379
2016/11/21 13:05:04 etcd: listening for client requests on http://127.0.0.1:4001
2016/11/21 13:05:04 etcdserver: datadir is valid for the 2.0.1 format
2016/11/21 13:05:04 etcdserver: name = default
2016/11/21 13:05:04 etcdserver: data dir = /var/etcd/data
2016/11/21 13:05:04 etcdserver: member dir = /var/etcd/data/member
2016/11/21 13:05:04 etcdserver: heartbeat = 100ms
2016/11/21 13:05:04 etcdserver: election = 1000ms
2016/11/21 13:05:04 etcdserver: snapshot count = 10000
2016/11/21 13:05:04 etcdserver: advertise client URLs = http://127.0.0.1:2379,http://127.0.0.1:4001
2016/11/21 13:05:04 etcdserver: initial advertise peer URLs = http://localhost:2380,http://localhost:7001
2016/11/21 13:05:04 etcdserver: initial cluster = default=http://localhost:2380,default=http://localhost:7001
2016/11/21 13:05:04 etcdserver: start member 6a5871dbdd12c17c in cluster f68652439e3f8f2a
2016/11/21 13:05:04 raft: 6a5871dbdd12c17c became follower at term 0
2016/11/21 13:05:04 raft: newRaft 6a5871dbdd12c17c [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2016/11/21 13:05:04 raft: 6a5871dbdd12c17c became follower at term 1
2016/11/21 13:05:04 etcdserver: added local member 6a5871dbdd12c17c [http://localhost:2380 http://localhost:7001] to cluster f68652439e3f8f2a
2016/11/21 13:05:06 raft: 6a5871dbdd12c17c is starting a new election at term 1
2016/11/21 13:05:06 raft: 6a5871dbdd12c17c became candidate at term 2
2016/11/21 13:05:06 raft: 6a5871dbdd12c17c received vote from 6a5871dbdd12c17c at term 2
2016/11/21 13:05:06 raft: 6a5871dbdd12c17c became leader at term 2
2016/11/21 13:05:06 raft.node: 6a5871dbdd12c17c elected leader 6a5871dbdd12c17c at term 2
2016/11/21 13:05:06 etcdserver: published {Name:default ClientURLs:[http://127.0.0.1:2379 http://127.0.0.1:4001]} to cluster f68652439e3f8f2a
# kubectl logs kube-dns-v8-ujfqn --namespace=kube-system -c skydns
2016/11/21 13:07:14 skydns: falling back to default configuration, could not read from etcd: 100: Key not found (/skydns/config) [10]
2016/11/21 13:07:14 skydns: ready for queries on cluster.local. for tcp://0.0.0.0:53 [rcache 0]
2016/11/21 13:07:14 skydns: ready for queries on cluster.local. for udp://0.0.0.0:53 [rcache 0]
# kubectl logs kube-dns-v8-ujfqn --namespace=kube-system -c healthz
2016/11/21 13:05:58 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null
2016/11/21 13:05:59 Client ip 12.16.64.1:45631 requesting /healthz probe servicing cmd nslookup kubernetes.default.svc.cluster.local localhost >/dev/null
2016/11/21 13:06:00 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null
2016/11/21 13:06:02 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null
2016/11/21 13:06:04 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null
2016/11/21 13:06:06 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null
2016/11/21 13:06:08 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null
2016/11/21 13:06:08 Client ip 12.16.64.1:45652 requesting /healthz probe servicing cmd nslookup kubernetes.default.svc.cluster.local localhost >/dev/null
2016/11/21 13:06:10 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null
2016/11/21 13:06:12 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null
2016/11/21 13:06:14 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null
2016/11/21 13:06:16 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null
2016/11/21 13:06:18 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null
2016/11/21 13:06:18 Client ip 12.16.64.1:45673 requesting /healthz probe servicing cmd nslookup kubernetes.default.svc.cluster.local localhost >/dev/null
2016/11/21 13:06:20 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null
2016/11/21 13:06:22 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null
2016/11/21 13:06:24 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null
2016/11/21 13:06:26 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null
2016/11/21 13:06:28 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null
2016/11/21 13:06:28 Client ip 12.16.64.1:45693 requesting /healthz probe servicing cmd nslookup kubernetes.default.svc.cluster.local localhost >/dev/null
I also found these kube2sky entries in the node's Docker journal:
Nov 23 10:09:26 ctc-cicd2 docker-current[25416]: I1123 07:09:26.213227 1 kube2sky.go:529] Using https://10.254.0.1:443 for kubernetes master
Nov 23 10:09:26 ctc-cicd2 docker-current[25416]: I1123 07:09:26.213279 1 kube2sky.go:530] Using kubernetes API <nil>
Nov 23 10:09:26 ctc-cicd2 docker-current[25416]: I1123 07:09:26.214181 1 kube2sky.go:598] Waiting for service: default/kubernetes
Nov 23 10:09:26 ctc-cicd2 docker-current[25416]: 2016/11/23 07:09:26 Worker running nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
Nov 23 10:09:26 ctc-cicd2 docker-current[25416]: I1123 07:09:26.508032 1 kube2sky.go:660] Successfully added DNS record for Kubernetes service.
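Since the container logs above look mostly healthy, it may also help to ask Kubernetes itself why the pod is being killed (for example, a failing liveness probe or an OOM kill). A sketch of the usual checks, using the pod name from the logs above:

```shell
# Show the pod's status, restart counts, and recent events
# (events usually name the reason: unhealthy liveness probe, OOMKilled, etc.)
kubectl describe pod kube-dns-v8-ujfqn --namespace=kube-system

# Cluster events for the kube-system namespace
kubectl get events --namespace=kube-system

# Logs of the previous (terminated) instance of a container, e.g. skydns
kubectl logs kube-dns-v8-ujfqn --namespace=kube-system -c skydns --previous
```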
What did I do wrong?
What version of Kubernetes and of the DNS containers are you using? I see the repo uses v11 of the addon. I had similar issues with v11, and I have now been running kube-dns v19 for over a month without any trouble.
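To check which container versions you are actually running, something like this should work (the pod and RC names are taken from the question; the exact output format may vary with your kubectl version):

```shell
# List the container images used by the DNS pod
kubectl get pod kube-dns-v8-ujfqn --namespace=kube-system \
  -o jsonpath='{.spec.containers[*].image}'

# Or inspect the replication controller the addon created
kubectl describe rc kube-dns-v8 --namespace=kube-system | grep -i image
```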