Kubernetes pods cannot reach external IP addresses, but can reach any other internal IP address inside the cluster, such as pod IPs and Service IPs

11/30/2021

From a pod, I can successfully ping the IPs of other pods in the cluster, and I can also ping the IP of the node that hosts the pod. However, pinging the IPs of the other nodes fails, and so does pinging any external IP, for example 8.8.8.8.

This is the script I used to initialize my Kubernetes cluster:

kubeadm init --apiserver-advertise-address=192.168.3.75 --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers --kubernetes-version v1.21.1 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.224.0.0/16
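
For reference, this is how the pod CIDR that kubeadm actually assigned to each node could be checked afterwards (just a generic sanity-check query, not part of my setup script):

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'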

This is a summary of my nodes:

kubectl get nodes -o wide

NAME        STATUS   ROLES                  AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE    KERNEL-VERSION          CONTAINER-RUNTIME
k8smaster   Ready    control-plane,master   18h   v1.21.1   192.168.3.75   <none>        Centos7     3.10.0-957.el7.x86_64   docker://20.10.6
k8snode1    Ready    <none>                 17h   v1.21.1   192.168.3.76   <none>        Centos7     3.10.0-957.el7.x86_64   docker://20.10.6
k8snode2    Ready    <none>                 17h   v1.21.1   192.168.3.77   <none>        Centos7     3.10.0-957.el7.x86_64   docker://20.10.6

This is a summary of my pods:

kubectl get po -o wide

NAME               READY   STATUS    RESTARTS   AGE    IP           NODE       NOMINATED NODE   READINESS GATES
kubia-curl-zpxhf   1/1     Running   0          175m   10.224.1.6   k8snode1   <none>           <none>
kubia-nfhgn        1/1     Running   0          17h    10.224.2.2   k8snode2   <none>           <none>
kubia-rxk6p        1/1     Running   0          17h    10.224.1.2   k8snode1   <none>           <none>
kubia-sgqqm        1/1     Running   0          17h    10.224.2.3   k8snode2   <none>           <none>

Now, I execute the following commands inside pod/kubia-curl-zpxhf:

kubectl exec kubia-curl-zpxhf -- ping 10.224.1.2

64 bytes from 10.224.1.2: seq=0 ttl=64 time=0.285 ms
64 bytes from 10.224.1.2: seq=1 ttl=64 time=0.159 ms
64 bytes from 10.224.1.2: seq=2 ttl=64 time=0.100 ms
64 bytes from 10.224.1.2: seq=3 ttl=64 time=0.102 ms
64 bytes from 10.224.1.2: seq=4 ttl=64 time=0.102 ms

...

kubectl exec kubia-curl-zpxhf -- ping 192.168.3.76

64 bytes from 192.168.3.76: seq=0 ttl=64 time=0.079 ms
64 bytes from 192.168.3.76: seq=1 ttl=64 time=0.108 ms
64 bytes from 192.168.3.76: seq=2 ttl=64 time=0.098 ms
64 bytes from 192.168.3.76: seq=3 ttl=64 time=0.078 ms
64 bytes from 192.168.3.76: seq=4 ttl=64 time=0.120 ms

First, I run the following command on 192.168.3.75 (ens33 is the network interface of 192.168.3.75):

tcpdump -i ens33 -nvvv icmp

Then, I execute:

kubectl exec kubia-curl-zpxhf -- ping 192.168.3.75

It fails:

PING 192.168.3.75 (192.168.3.75): 56 data bytes
^C
--- 192.168.3.75 ping statistics ---
14 packets transmitted, 0 packets received, 100% packet loss

The tcpdump output on 192.168.3.75 is as follows:

tcpdump: listening on ens33, link-type EN10MB (Ethernet), capture size 262144 bytes
01:30:10.372875 IP (tos 0x0, ttl 63, id 36303, offset 0, flags [DF], proto ICMP (1), length 84)
    10.224.1.6 > 192.168.3.75: ICMP echo request, id 182, seq 0, length 64
01:30:11.373482 IP (tos 0x0, ttl 63, id 37228, offset 0, flags [DF], proto ICMP (1), length 84)
    10.224.1.6 > 192.168.3.75: ICMP echo request, id 182, seq 1, length 64
01:30:12.373946 IP (tos 0x0, ttl 63, id 38204, offset 0, flags [DF], proto ICMP (1), length 84)
    10.224.1.6 > 192.168.3.75: ICMP echo request, id 182, seq 2, length 64
01:30:13.374539 IP (tos 0x0, ttl 63, id 38265, offset 0, flags [DF], proto ICMP (1), length 84)
    10.224.1.6 > 192.168.3.75: ICMP echo request, id 182, seq 3, length 64
01:30:14.375746 IP (tos 0x0, ttl 63, id 38515, offset 0, flags [DF], proto ICMP (1), length 84)
    10.224.1.6 > 192.168.3.75: ICMP echo request, id 182, seq 4, length 64
01:30:15.376032 IP (tos 0x0, ttl 63, id 38844, offset 0, flags [DF], proto ICMP (1), length 84)
    10.224.1.6 > 192.168.3.75: ICMP echo request, id 182, seq 5, length 64
01:30:16.376906 IP (tos 0x0, ttl 63, id 39092, offset 0, flags [DF], proto ICMP (1), length 84)
    10.224.1.6 > 192.168.3.75: ICMP echo request, id 182, seq 6, length 64
01:30:17.377851 IP (tos 0x0, ttl 63, id 39705, offset 0, flags [DF], proto ICMP (1), length 84)
    10.224.1.6 > 192.168.3.75: ICMP echo request, id 182, seq 7, length 64
01:30:18.378242 IP (tos 0x0, ttl 63, id 40451, offset 0, flags [DF], proto ICMP (1), length 84)
    10.224.1.6 > 192.168.3.75: ICMP echo request, id 182, seq 8, length 64
01:30:19.379295 IP (tos 0x0, ttl 63, id 40842, offset 0, flags [DF], proto ICMP (1), length 84)
    10.224.1.6 > 192.168.3.75: ICMP echo request, id 182, seq 9, length 64
01:30:20.380495 IP (tos 0x0, ttl 63, id 41260, offset 0, flags [DF], proto ICMP (1), length 84)
    10.224.1.6 > 192.168.3.75: ICMP echo request, id 182, seq 10, length 64
01:30:21.380822 IP (tos 0x0, ttl 63, id 41316, offset 0, flags [DF], proto ICMP (1), length 84)
    10.224.1.6 > 192.168.3.75: ICMP echo request, id 182, seq 11, length 64
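
The echo requests clearly arrive on ens33, but no reply ever comes back. To narrow this down further, one could also check on 192.168.3.75 which route a reply to the pod IP would take, and capture on the overlay interfaces as well (the interface names below are the ones from my routing tables shown further down):

ip route get 10.224.1.6
tcpdump -i flannel.1 -nvvv icmp
tcpdump -i cni0 -nvvv icmp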

Because of this problem, my Service does not work correctly either:

apiVersion: v1
kind: Service
metadata:
  name: external-service
spec:
  type: ExternalName
  externalName: www.google.com
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80

When I use this Service, name resolution goes through my CoreDNS, but resolving the external name ultimately depends on an upstream DNS server at an external IP. Since external IPs are unreachable, the Service does not work correctly.
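
For example, testing the ExternalName Service from the curl pod looks roughly like this (assuming nslookup and curl are available in the image):

kubectl exec kubia-curl-zpxhf -- nslookup external-service.default.svc.cluster.local
kubectl exec kubia-curl-zpxhf -- curl -v http://external-service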

This is my iptables configuration:

iptables-save

# Generated by iptables-save v1.4.21 on Mon Nov 29 01:48:41 2021
*mangle
:PREROUTING ACCEPT [227200:138557221]
:INPUT ACCEPT [220885:138133792]
:FORWARD ACCEPT [6330:424329]
:OUTPUT ACCEPT [190532:21074922]
:POSTROUTING ACCEPT [195943:21444255]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
COMMIT
# Completed on Mon Nov 29 01:48:41 2021
# Generated by iptables-save v1.4.21 on Mon Nov 29 01:48:41 2021
*filter
:INPUT ACCEPT [4649:2421326]
:FORWARD ACCEPT [35:2940]
:OUTPUT ACCEPT [4061:456426]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
:KUBE-EXTERNAL-SERVICES - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-FORWARD - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-SERVICES - [0:0]
-A INPUT -m comment --comment "kubernetes health check service ports" -j KUBE-NODEPORTS
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A FORWARD -s 10.244.0.0/16 -j ACCEPT
-A FORWARD -d 10.244.0.0/16 -j ACCEPT
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A KUBE-FORWARD -m conntrack --ctstate INVALID -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack pod destination rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
COMMIT
# Completed on Mon Nov 29 01:48:41 2021
# Generated by iptables-save v1.4.21 on Mon Nov 29 01:48:41 2021
*nat
:PREROUTING ACCEPT [11:1467]
:INPUT ACCEPT [7:1131]
:OUTPUT ACCEPT [13:972]
:POSTROUTING ACCEPT [17:1308]
:DOCKER - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-SEP-3PZTSR4ZZA3ZWJL4 - [0:0]
:KUBE-SEP-3R5LNPUQQTJJOKX6 - [0:0]
:KUBE-SEP-7NB5DTA3O7RQFKKH - [0:0]
:KUBE-SEP-DMWMUF2F73FTLKQA - [0:0]
:KUBE-SEP-FRUPCETQNK3COUCB - [0:0]
:KUBE-SEP-IHPO6RWTHWYZ7XS4 - [0:0]
:KUBE-SEP-M7DNPQN2M7NMNNSM - [0:0]
:KUBE-SEP-TMTK2FOJZXHCV6VR - [0:0]
:KUBE-SEP-X4LYBJEW7BXMHURF - [0:0]
:KUBE-SEP-XALYB7IDDCLM7EQB - [0:0]
:KUBE-SEP-XPJQPQK5H3SMMI42 - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-6GWMNMJGA2Q7LLKR - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-JD5MR3NA4I4DYORP - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:KUBE-SVC-XCCRWSFD2BRL7CRI - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 10.244.0.0/16 -d 10.244.0.0/16 -j RETURN
-A POSTROUTING -s 10.244.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
-A POSTROUTING ! -s 10.244.0.0/16 -d 10.224.1.0/24 -j RETURN
-A POSTROUTING ! -s 10.244.0.0/16 -d 10.244.0.0/16 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m mark ! --mark 0x4000/0x4000 -j RETURN
-A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE
-A KUBE-SEP-3PZTSR4ZZA3ZWJL4 -s 10.224.0.3/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-3PZTSR4ZZA3ZWJL4 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.224.0.3:53
-A KUBE-SEP-3R5LNPUQQTJJOKX6 -s 10.224.0.2/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-3R5LNPUQQTJJOKX6 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.224.0.2:53
-A KUBE-SEP-7NB5DTA3O7RQFKKH -s 10.224.2.2/32 -m comment --comment "default/kubia" -j KUBE-MARK-MASQ
-A KUBE-SEP-7NB5DTA3O7RQFKKH -p tcp -m comment --comment "default/kubia" -m tcp -j DNAT --to-destination 10.224.2.2:9011
-A KUBE-SEP-DMWMUF2F73FTLKQA -s 10.224.1.2/32 -m comment --comment "default/kubia" -j KUBE-MARK-MASQ
-A KUBE-SEP-DMWMUF2F73FTLKQA -p tcp -m comment --comment "default/kubia" -m tcp -j DNAT --to-destination 10.224.1.2:9011
-A KUBE-SEP-FRUPCETQNK3COUCB -s 10.224.0.3/32 -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-MARK-MASQ
-A KUBE-SEP-FRUPCETQNK3COUCB -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.224.0.3:9153
-A KUBE-SEP-IHPO6RWTHWYZ7XS4 -s 10.224.1.6/32 -m comment --comment "default/kubia-curl" -j KUBE-MARK-MASQ
-A KUBE-SEP-IHPO6RWTHWYZ7XS4 -p tcp -m comment --comment "default/kubia-curl" -m tcp -j DNAT --to-destination 10.224.1.6:80
-A KUBE-SEP-M7DNPQN2M7NMNNSM -s 10.224.0.3/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-M7DNPQN2M7NMNNSM -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.224.0.3:53
-A KUBE-SEP-TMTK2FOJZXHCV6VR -s 10.224.0.2/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-TMTK2FOJZXHCV6VR -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.224.0.2:53
-A KUBE-SEP-X4LYBJEW7BXMHURF -s 192.168.3.75/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-X4LYBJEW7BXMHURF -p tcp -m comment --comment "default/kubernetes:https" -m tcp -j DNAT --to-destination 192.168.3.75:6443
-A KUBE-SEP-XALYB7IDDCLM7EQB -s 10.224.2.3/32 -m comment --comment "default/kubia" -j KUBE-MARK-MASQ
-A KUBE-SEP-XALYB7IDDCLM7EQB -p tcp -m comment --comment "default/kubia" -m tcp -j DNAT --to-destination 10.224.2.3:9011
-A KUBE-SEP-XPJQPQK5H3SMMI42 -s 10.224.0.2/32 -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-MARK-MASQ
-A KUBE-SEP-XPJQPQK5H3SMMI42 -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.224.0.2:9153
-A KUBE-SERVICES ! -s 10.224.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES ! -s 10.224.0.0/16 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES ! -s 10.224.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES ! -s 10.224.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES ! -s 10.224.0.0/16 -d 10.97.153.166/32 -p tcp -m comment --comment "default/kubia cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.97.153.166/32 -p tcp -m comment --comment "default/kubia cluster IP" -m tcp --dport 80 -j KUBE-SVC-XCCRWSFD2BRL7CRI
-A KUBE-SERVICES ! -s 10.224.0.0/16 -d 10.105.118.157/32 -p tcp -m comment --comment "default/kubia-curl cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.105.118.157/32 -p tcp -m comment --comment "default/kubia-curl cluster IP" -m tcp --dport 80 -j KUBE-SVC-6GWMNMJGA2Q7LLKR
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-6GWMNMJGA2Q7LLKR -m comment --comment "default/kubia-curl" -j KUBE-SEP-IHPO6RWTHWYZ7XS4
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-3R5LNPUQQTJJOKX6
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-3PZTSR4ZZA3ZWJL4
-A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment "kube-system/kube-dns:metrics" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-XPJQPQK5H3SMMI42
-A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-SEP-FRUPCETQNK3COUCB
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-X4LYBJEW7BXMHURF
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-TMTK2FOJZXHCV6VR
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-M7DNPQN2M7NMNNSM
-A KUBE-SVC-XCCRWSFD2BRL7CRI -m comment --comment "default/kubia" -m statistic --mode random --probability 0.33333333349 -j KUBE-SEP-DMWMUF2F73FTLKQA
-A KUBE-SVC-XCCRWSFD2BRL7CRI -m comment --comment "default/kubia" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-7NB5DTA3O7RQFKKH
-A KUBE-SVC-XCCRWSFD2BRL7CRI -m comment --comment "default/kubia" -j KUBE-SEP-XALYB7IDDCLM7EQB
COMMIT
# Completed on Mon Nov 29 01:48:41 2021

Routing table on 192.168.3.75:

10.224.0.0/24 dev cni0 proto kernel scope link src 10.224.0.1
10.224.1.0/24 via 10.224.1.0 dev flannel.1 onlink
10.224.2.0/24 via 10.224.2.0 dev flannel.1 onlink
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.3.0/24 dev ens33 proto kernel scope link src 192.168.3.75 metric 100

Routing table on 192.168.3.76:

10.224.0.0/24 via 10.224.0.0 dev flannel.1 onlink
10.224.1.0/24 dev cni0 proto kernel scope link src 10.224.1.1
10.224.2.0/24 via 10.224.2.0 dev flannel.1 onlink
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.3.0/24 dev ens33 proto kernel scope link src 192.168.3.76 metric 100

Routing table on 192.168.3.77:

10.224.0.0/24 via 10.224.0.0 dev flannel.1 onlink
10.224.1.0/24 via 10.224.1.0 dev flannel.1 onlink
10.224.2.0/24 dev cni0 proto kernel scope link src 10.224.2.1
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.3.0/24 dev ens33 proto kernel scope link src 192.168.3.77 metric 100

My Environment

flannel.yaml: https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

...
net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
...
  • Etcd version: etcd:3.4.13-0
  • Kubernetes version (if used): v1.21.1
  • Operating System and version: CentOS 7, kernel 3.10.0-957.el7.x86_64
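
The flannel configuration actually deployed in the cluster, and the subnet each node leased from it, can be inspected with something like the following (the ConfigMap name and file path are the defaults from the kube-flannel.yml manifest linked above):

kubectl -n kube-system get configmap kube-flannel-cfg -o yaml
cat /run/flannel/subnet.env    # on each node
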
-- kamikyo
flannel
kubernetes

0 Answers