I've noticed that even failed pods reply to ICMP pings (pods in a not-Ready state). Is there a way to configure the CNI (or Kubernetes) so that failed pods don't generate ICMP replies?
#kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
multitool-1 1/1 Running 0 20m 172.17.0.3 minikube <none> <none>
multitool-2 0/1 ImagePullBackOff 0 20m 172.17.0.4 minikube <none> <none>
multitool-3 1/1 Running 0 3m9s 172.17.0.5 minikube <none> <none>
#kubectl exec multitool-3 -it bash
bash-5.0# ping 172.17.0.4
PING 172.17.0.4 (172.17.0.4) 56(84) bytes of data.
64 bytes from 172.17.0.4: icmp_seq=1 ttl=64 time=0.048 ms
64 bytes from 172.17.0.4: icmp_seq=2 ttl=64 time=0.107 ms
^C
--- 172.17.0.4 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1041ms
rtt min/avg/max/mdev = 0.048/0.077/0.107/0.029 ms
bash-5.0#
No, that's not how ICMP works. Echo replies are generated by the kernel's network stack, not by the container process. The kernel only checks that the network interface is operational, which it is regardless of how broken the container process might be, so as long as the pod's network namespace and interface exist, pings will be answered.
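If the underlying goal is to keep traffic away from failed pods, the Kubernetes-native mechanism is a Service: its endpoints only include pods that are Ready, so a pod like multitool-2 above would be excluded automatically once its readiness probe fails. A minimal sketch, assuming the pods carry a hypothetical `app: multitool` label:

```yaml
# Hypothetical Service for the multitool pods above.
# Only pods whose readiness probes pass are added to the
# Service's endpoints; not-Ready pods receive no traffic
# through the Service, even though they still answer ICMP.
apiVersion: v1
kind: Service
metadata:
  name: multitool
spec:
  selector:
    app: multitool   # assumed label; match it to your pod spec
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
```

Note that this doesn't (and can't) suppress ICMP: Kubernetes NetworkPolicy only filters TCP, UDP, and SCTP, so health checks should target the Service or the readiness status rather than ping.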