I am trying to enable DNS for my pods with a network policy in place. I am following https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/ to test it.
When DNS works:
nslookup kubernetes.default
Server: 100.64.0.10
Address: 100.64.0.10#53
Name: kubernetes.default.svc.cluster.local
Address: 100.64.0.1
With the network policy applied:
/ # nslookup kubernetes.default
;; connection timed out; no servers could be reached
I tried with:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: my-name
  namespace: my-namespace
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          name: kube-system
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
or
egress:
- to:
  - namespaceSelector: {}
    podSelector: {}
  ports:
  - protocol: UDP
    port: 53
  - protocol: TCP
    port: 53
or
egress:
- ports:
  - protocol: UDP
    port: 53
  - protocol: TCP
    port: 53
None of them works. The only thing I tried that works is the following:
egress:
- to:
  - namespaceSelector: {}
    podSelector: {}
But that opens egress to every pod in every namespace on every port.
I tried those combinations in my local k8s (minikube with Cilium). All of them work as expected there, but not in the production environment (AWS k8s 1.20 with Calico), where I always have the DNS issue. From tcpdump, I am sure DNS is using port 53 over UDP.
I have run out of ideas, please help!
The port is overwritten by the DNS Service to 8053: the Service listens on 53, but its targetPort on the DNS pods is 8053. tcpdump is running inside the client pod, so it does not see that the traffic is re-routed.
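For reference, here is a minimal sketch of an egress policy that should cover this setup. It assumes the DNS pods live in kube-system, that the namespace carries a name=kube-system label (older clusters do not add one automatically, so you may have to label it yourself), and that 8053 is the targetPort of the kube-dns Service in this particular cluster; the name and namespace in metadata are placeholders. You can confirm the actual target port with kubectl get svc kube-dns -n kube-system -o yaml.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress        # placeholder name
  namespace: my-namespace       # placeholder namespace
spec:
  podSelector: {}               # apply to all pods in the namespace
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          name: kube-system     # assumes kube-system has this label, e.g. via
                                # kubectl label namespace kube-system name=kube-system
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
    - protocol: UDP
      port: 8053                # port the DNS pods actually listen on in this cluster
    - protocol: TCP
      port: 8053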
I had the same issue and adding the following worked fine:
egress:
- ports:
  - port: 53
    protocol: UDP
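Note that a rule with only ports and no to clause allows DNS traffic to any destination on port 53, not just the cluster DNS; to scope it, combine the ports with a namespaceSelector as in the question. And if your kube-dns Service forwards to a different targetPort (such as 8053 above), add that port to the list as well.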