For some reason, public nameservers such as 4.2.2.1 and 208.67.220.220 are not a good option in China. When a pod tries to resolve a domain outside the cluster, the nodelocaldns DaemonSet complains about an i/o timeout while resolving the domain name:
[ERROR] plugin/errors: 2 checkpoint-api.hashicorp.com. A: read udp 192.168.1.15:35630->4.2.2.1:53: i/o timeout
[ERROR] plugin/errors: 2 checkpoint-api.hashicorp.com. AAAA: read udp 192.168.1.15:37137->4.2.2.2:53: i/o timeout
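For reference, the failure can be reproduced from inside the cluster with a throwaway pod (a minimal sketch; the pod name dns-test is arbitrary, and busybox:1.28 is pinned because nslookup in later busybox images is unreliable):

# Run a one-off lookup from inside the cluster, then clean the pod up
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 \
  -- nslookup checkpoint-api.hashicorp.com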
I modified the Corefile in the coredns ConfigMap to use another nameserver, 114.114.114.114, but it had no effect.
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: coredns
  namespace: kube-system
  selfLink: "/api/v1/namespaces/kube-system/configmaps/coredns"
  uid: 844355d4-7dd3-11e9-ab0b-0800274131a7
  resourceVersion: '919'
  creationTimestamp: '2019-05-24T03:25:02Z'
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: '{"apiVersion":"v1","data":{"Corefile":".:53 {\n errors\n health\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n upstream /etc/resolv.conf\n fallthrough in-addr.arpa ip6.arpa\n }\n prometheus :9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}\n"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"coredns","namespace":"kube-system"}}'
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream 114.114.114.114
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . 114.114.114.114
        cache 30
        loop
        reload
        loadbalance
    }
    consul:53 {
        errors
        cache 30
        forward . 10.233.5.74
    }
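Two follow-ups are worth checking after editing the ConfigMap. First, the Corefile above loads the reload plugin, so CoreDNS should pick up the change on its own after a short delay; if not, the pods can be recreated. Second, the errors in the log come from the nodelocaldns DaemonSet, which runs its own CoreDNS instance with its own ConfigMap, so the coredns ConfigMap may not be the one doing the forwarding. A sketch of both checks (the ConfigMap name nodelocaldns and the pod labels are assumptions that vary by installer):

# Recreate the CoreDNS pods if the reload plugin has not picked up the edit
kubectl -n kube-system delete pod -l k8s-app=kube-dns

# Inspect the node-local cache's own Corefile (name varies: nodelocaldns, node-local-dns, ...)
kubectl -n kube-system get configmap nodelocaldns -o yaml
kubectl -n kube-system delete pod -l k8s-app=nodelocaldns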
So which configuration have I missed?
You can find the information in the Kubernetes documentation on customizing the DNS service. More precisely:
To explicitly force all non-cluster DNS lookups to go through a specific nameserver at 172.16.0.1, point the proxy and upstream to the nameserver:
proxy . 172.16.0.1
upstream 172.16.0.1
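Applied to the Corefile from the question, with 114.114.114.114 as the upstream, the suggestion would look roughly like this (a sketch; note that newer CoreDNS releases have removed the proxy plugin in favor of forward, in which case forward . 114.114.114.114 plays the same role):

.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        # upstream resolver for external names referenced by in-cluster records
        upstream 114.114.114.114
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    # send all non-cluster lookups to this nameserver
    proxy . 114.114.114.114
    cache 30
    loop
    reload
    loadbalance
}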