I've just set up a fresh cluster with kubeadm and Kubernetes 1.21. All pods are marked Ready, but I can't access any of them. After digging into the problem, it appears that no DNS resolution is possible, and it looks like kube-proxy is not working.
This is the log of one of the kube-proxy pods:
I0712 05:50:46.511967 1 node.go:172] Successfully retrieved node IP: x.x.x.x
I0712 05:50:46.512039 1 server_others.go:140] Detected node IP x.x.x.x
W0712 05:50:46.512077 1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
I0712 05:50:46.545626 1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
I0712 05:50:46.545672 1 server_others.go:212] Using iptables Proxier.
I0712 05:50:46.545692 1 server_others.go:219] creating dualStackProxier for iptables.
W0712 05:50:46.545715 1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
I0712 05:50:46.546089 1 server.go:643] Version: v1.21.2
I0712 05:50:46.549861 1 conntrack.go:52] Setting nf_conntrack_max to 196608
I0712 05:50:46.550300 1 config.go:224] Starting endpoint slice config controller
I0712 05:50:46.550338 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0712 05:50:46.550332 1 config.go:315] Starting service config controller
I0712 05:50:46.550354 1 shared_informer.go:240] Waiting for caches to sync for service config
W0712 05:50:46.553020 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0712 05:50:46.555115 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
I0712 05:50:46.650614 1 shared_informer.go:247] Caches are synced for service config
I0712 05:50:46.650634 1 shared_informer.go:247] Caches are synced for endpoint slice config
W0712 05:57:14.556916 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0712 06:06:34.558550 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
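(The "Unknown proxy mode" warning above just means the mode field in kube-proxy's configuration is empty, so it falls back to iptables. Assuming the default kubeadm-managed setup, the effective config lives in the kube-proxy ConfigMap and can be inspected with:)
kubectl -n kube-system get cm kube-proxy -o yaml | grep -E 'mode:|clusterCIDR:'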
And these are my running pods:
kube-system pod/coredns-558bd4d5db-qpf5m 1/1 Running 1 8h
kube-system pod/coredns-558bd4d5db-r5jwz 1/1 Running 0 8h
kube-system pod/etcd-master2 1/1 Running 3 20h
kube-system pod/kube-apiserver-master2 1/1 Running 3 20h
kube-system pod/kube-controller-manager-master2 1/1 Running 3 8h
kube-system pod/kube-flannel-ds-b7xrm 1/1 Running 0 8h
kube-system pod/kube-flannel-ds-hcn7f 1/1 Running 0 8h
kube-system pod/kube-flannel-ds-rx8j6 1/1 Running 1 8h
kube-system pod/kube-flannel-ds-wc2jc 1/1 Running 0 8h
kube-system pod/kube-proxy-48wmr 1/1 Running 0 25m
kube-system pod/kube-proxy-4gw8t 1/1 Running 0 25m
kube-system pod/kube-proxy-h9djp 1/1 Running 0 25m
kube-system pod/kube-proxy-r4k9t 1/1 Running 0 24m
kube-system pod/kube-scheduler-master2 1/1 Running 3 20h
The command
kubectl run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox nslookup kubernetes.default
gives me:
Address 1: x.x.x.x
nslookup: can't resolve 'kubernetes.default'
pod "busybox" deleted
pod default/busybox terminated (Error)
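(To tell a CoreDNS failure apart from a service-routing failure, the same lookup can be pointed directly at a CoreDNS pod IP, bypassing the kube-dns service VIP. busybox:1.28 is pinned below because nslookup in newer busybox images is known to be unreliable, and <coredns-pod-ip> is a placeholder:)
kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide   # note a CoreDNS pod IP
kubectl run -it --rm --restart=Never dnstest --image=busybox:1.28 -- nslookup kubernetes.default <coredns-pod-ip>
If the direct query works while the one through the service IP fails, the problem sits between the pod and the service VIP, i.e. in kube-proxy, not in CoreDNS.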
My iptables rules:
Chain INPUT (policy ACCEPT)
target prot opt source destination
KUBE-NODEPORTS all -- anywhere anywhere /* kubernetes health check service ports */
KUBE-EXTERNAL-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes externally-visible service portals */
KUBE-FIREWALL all -- anywhere anywhere
Chain FORWARD (policy ACCEPT)
target prot opt source destination
KUBE-FORWARD all -- anywhere anywhere /* kubernetes forwarding rules */
KUBE-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes service portals */
KUBE-EXTERNAL-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes externally-visible service portals */
DOCKER-USER all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- 10.244.0.0/16 anywhere
ACCEPT all -- anywhere 10.244.0.0/16
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
KUBE-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes service portals */
KUBE-FIREWALL all -- anywhere anywhere
Chain DOCKER (1 references)
target prot opt source destination
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target prot opt source destination
DROP all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere
Chain KUBE-EXTERNAL-SERVICES (2 references)
target prot opt source destination
Chain KUBE-FIREWALL (2 references)
target prot opt source destination
DROP all -- anywhere anywhere /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000
DROP all -- !127.0.0.0/8 127.0.0.0/8 /* block incoming localnet connections */ ! ctstate RELATED,ESTABLISHED,DNAT
Chain KUBE-FORWARD (1 references)
target prot opt source destination
DROP all -- anywhere anywhere ctstate INVALID
ACCEPT all -- anywhere anywhere /* kubernetes forwarding rules */ mark match 0x4000/0x4000
ACCEPT all -- anywhere anywhere /* kubernetes forwarding conntrack pod source rule */ ctstate RELATED,ESTABLISHED
ACCEPT all -- anywhere anywhere /* kubernetes forwarding conntrack pod destination rule */ ctstate RELATED,ESTABLISHED
Chain KUBE-KUBELET-CANARY (0 references)
target prot opt source destination
Chain KUBE-NODEPORTS (1 references)
target prot opt source destination
Chain KUBE-PROXY-CANARY (0 references)
target prot opt source destination
Chain KUBE-SERVICES (2 references)
target prot opt source destination
Any idea?
# kubectl edit cm -n kube-system kubelet-config-1.21
apiVersion: v1
data:
  kubelet: |
    apiVersion: kubelet.config.k8s.io/v1beta1
    authentication:
      anonymous:
        enabled: false
      webhook:
        cacheTTL: 0s
        enabled: true
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt
    authorization:
      mode: Webhook
      webhook:
        cacheAuthorizedTTL: 0s
        cacheUnauthorizedTTL: 0s
    cgroupDriver: systemd
    clusterDNS:
    - 10.96.0.10
    clusterDomain: cluster.local
# kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 22h
kube-proxy is the network proxy service; the DNS provider is what is responsible for DNS resolution, and as I can see, you already have CoreDNS installed.
Check your kubelet configuration: it should point to the correct DNS service, and that service should be reachable from within your pods.
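For example, a quick cross-check (kubelet-config-1.21 is the ConfigMap name from your output; adjust for your version):
# The DNS address the kubelet writes into every pod's /etc/resolv.conf...
kubectl -n kube-system get cm kubelet-config-1.21 -o yaml | grep -A1 clusterDNS
# ...must match the ClusterIP of the kube-dns service:
kubectl -n kube-system get svc kube-dns -o jsonpath='{.spec.clusterIP}'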
Also, please check that the firewalld or iptables service is disabled on all nodes.
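Something along these lines, assuming systemd-based nodes:
# Run on every node; both should be inactive (or not installed at all):
systemctl status firewalld
systemctl status iptables    # the iptables service on RHEL/CentOS, not the binary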
Like this:
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "10.33.0.10"
kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.33.0.10 <none> 53/UDP,53/TCP,9153/TCP 35h
And then:
kubectl exec -ti net-diag-86589fd8f5-r28qq -- nslookup kubernetes.default
Server: 10.33.0.10
Address: 10.33.0.10#53
Name: kubernetes.default.svc.cluster.local
Address: 10.33.0.1
UPD.
I just noticed that you have Docker as the container runtime and Flannel as the network provider. My understanding is that the problem may be Docker interfering with your iptables rules; try setting all the Docker rules to permissive and see if that helps.
I'm not an expert in iptables configuration, but something like this may help:
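A quick, reversible test (as root on each node; this opens things wide up, so treat it as a diagnostic, not a permanent fix):
# Let everything through the chain Docker reserves for user-defined rules,
# so Docker's own rules cannot drop pod-to-pod traffic:
iptables -I DOCKER-USER -j ACCEPT
# Docker is also known to switch the FORWARD policy to DROP; put it back:
iptables -P FORWARD ACCEPT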
https://unrouted.io/2017/08/15/docker-firewall/
Also, since you are using Flannel, make sure you are using the correct --iface option. It can be critical for non-cloud installations:
https://github.com/flannel-io/flannel/blob/master/Documentation/configuration.md#key-command-line-options
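For reference, the option goes into the flannel container's args in the DaemonSet; eth1 below is only a placeholder for whatever interface your nodes use to reach each other:
kubectl -n kube-system edit ds kube-flannel-ds
...
      containers:
      - name: kube-flannel
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=eth1   # placeholder: your actual inter-node interface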