why could not ping service ip in kubernetes cluster when using calico

7/11/2020

I am using Calico as my Kubernetes CNI plugin, but when I ping a service IP from a Kubernetes pod, it fails. First I find the service IP:

    [root@localhost ~]# kubectl get svc -o wide
    NAME                                       TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                 AGE     SELECTOR
    prometheus-1594471894-kube-state-metrics   ClusterIP      10.20.39.193    <none>        8080/TCP                                3h16m   app.kubernetes.io/instance=prometheus-1594471894,app.kubernetes.io/name=kube-state-metrics

Then I ping this IP from a pod (already logged into the pod):

root@k8sslave1:/# ping 10.20.39.193
PING 10.20.39.193 (10.20.39.193) 56(84) bytes of data.

There is no response. Then I use traceroute to check the path:

root@k8sslave1:/# traceroute 10.20.39.193
traceroute to 10.20.39.193 (10.20.39.193), 64 hops max
  1   192.168.31.1  0.522ms  0.539ms  0.570ms 
  2   192.168.1.1  1.171ms  0.877ms  0.920ms 
  3   100.81.0.1  3.918ms  3.917ms  3.602ms 
  4   117.135.40.145  4.768ms  4.337ms  4.232ms 
  5   *  *  * 
  6   *  *  * 

The packet is routed to the internet, not forwarded to the Kubernetes service. Why does this happen, and what should I do to fix it? The pod can access the internet and can successfully ping other pods' IPs:

root@k8sslave1:/# ping 10.11.157.67
PING 10.11.157.67 (10.11.157.67) 56(84) bytes of data.
64 bytes from 10.11.157.67: icmp_seq=1 ttl=64 time=0.163 ms
64 bytes from 10.11.157.67: icmp_seq=2 ttl=64 time=0.048 ms
64 bytes from 10.11.157.67: icmp_seq=3 ttl=64 time=0.036 ms
64 bytes from 10.11.157.67: icmp_seq=4 ttl=64 time=0.102 ms
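
For reference, the route that the service IP actually matches inside the pod can be checked with ip route get (command shown only; the comment describes what a default-route match would mean):

    root@k8sslave1:/# ip route get 10.20.39.193
    # If the best match is the default route via 192.168.31.1, there is no specific
    # route for the service CIDR and the packet is sent toward the internet.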

This is my IP configuration from when I installed the Kubernetes cluster:

kubeadm init \
--apiserver-advertise-address 0.0.0.0 \
--apiserver-bind-port 6443 \
--cert-dir /etc/kubernetes/pki \
--control-plane-endpoint 192.168.31.29 \
--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
--kubernetes-version 1.18.2 \
--pod-network-cidr 10.11.0.0/16 \
--service-cidr 10.20.0.0/16 \
--service-dns-domain cluster.local \
--upload-certs \
--v=6

This is the DNS resolv.conf:

cat /etc/resolv.conf 
nameserver 10.20.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5

This is the pod's kernel route table:

[root@localhost ~]# kubectl exec -it shell-demo /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
root@k8sslave1:/# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.31.1    0.0.0.0         UG    100    0        0 enp1s0
10.11.102.128   192.168.31.29   255.255.255.192 UG    0      0        0 tunl0
10.11.125.128   192.168.31.31   255.255.255.192 UG    0      0        0 tunl0
10.11.157.64    0.0.0.0         255.255.255.192 U     0      0        0 *
10.11.157.66    0.0.0.0         255.255.255.255 UH    0      0        0 cali4ac004513e1
10.11.157.67    0.0.0.0         255.255.255.255 UH    0      0        0 cali801b80f5d85
10.11.157.68    0.0.0.0         255.255.255.255 UH    0      0        0 caliaa7c2766183
10.11.157.69    0.0.0.0         255.255.255.255 UH    0      0        0 cali83957ce33d2
10.11.157.71    0.0.0.0         255.255.255.255 UH    0      0        0 calia012ca8e3b0
10.11.157.72    0.0.0.0         255.255.255.255 UH    0      0        0 cali3e6b175ded9
10.11.157.73    0.0.0.0         255.255.255.255 UH    0      0        0 calif042b3edac7
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.31.0    0.0.0.0         255.255.255.0   U     100    0        0 enp1s0
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0
-- Dolphin
kubernetes

1 Answer

7/11/2020

This is a very common issue, and in my case it required a full migration of the CIDR IPs.

Most probably, this issue is caused by an overlap between your cluster's CIDRs (the IP pools used to assign addresses to pods and services) and the CIDR of your underlying network.

If that is the case, the route table of each node (VM) will confirm it:

sudo route -n
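
As a quick way to compare those ranges on a kubeadm-installed cluster (an assumption; paths and flags differ with other installers), the configured CIDRs can be read from the static pod manifests on a control-plane node:

    # kubeadm layout assumed:
    grep -- --cluster-cidr /etc/kubernetes/manifests/kube-controller-manager.yaml
    grep -- --service-cluster-ip-range /etc/kubernetes/manifests/kube-apiserver.yaml
    # Compare both ranges against the node/LAN network (192.168.31.0/24 in the question).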

Because you didn't provide enough logs, I will show you here how to troubleshoot the issue. If you hit the issue I guessed, you will need to change the CIDR range of the pods, as explained starting from Step3.

Step1: Install calicoctl as a Kubernetes pod

 kubectl apply -f https://docs.projectcalico.org/manifests/calicoctl.yaml

 alias calicoctl="kubectl exec -i -n kube-system calicoctl -- /calicoctl"
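
As a sanity check (optional, not required by the procedure), the alias can be verified with:

    calicoctl version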

Step2: Check the status of the Calico instance.

calicoctl node status


# Sample of output ###################
Calico process is running.

IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+--------------+-------------------+-------+----------+-------------+
| 172.17.8.102 | node-to-node mesh | up    | 23:30:04 | Established |
+--------------+-------------------+-------+----------+-------------+

If you have an issue in this step, stop here and fix it.

Otherwise, you can proceed.

Step3: List existing Pools

calicoctl get ippool -o wide
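
The output should look roughly like the following (illustrative only; the pool name and CIDR will differ, here matching the 10.11.0.0/16 pod CIDR from the question):

    NAME                  CIDR           NAT    IPIPMODE   VXLANMODE   DISABLED   SELECTOR
    default-ipv4-ippool   10.11.0.0/16   true   Always     Never       false      all()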

Step4: Create a new pool

Make sure it does not overlap with your network CIDR.

calicoctl create -f -<<EOF
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: pool-c
spec:
  cidr: 10.244.0.0/16
  ipipMode: Always
  natOutgoing: true
EOF

The new pool is named pool-c.

Step5: Delete the current pool:

# get all pools
calicoctl get ippool -o yaml > pools.yaml

# edit the file pools.yaml and remove the current pool.
# file editing ... save & quit
# then apply  changes
calicoctl apply -f -<<EOF
 # Paste here the edited content of pools.yaml (with the old pool removed)

EOF
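
Alternatively, once the new pool exists, the old pool can be deleted directly (the name below is the usual default in a stock Calico install and may differ in your cluster):

    calicoctl delete ippool default-ipv4-ippool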

Step6: Check the SDN IPs assigned to each workload (pod):

calicoctl get wep --all-namespaces

Keep restarting old pods and recreating old services until you are sure that all resources have been assigned IPs from the new pool.
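
For pods managed by Deployments, one convenient way to recreate them is a rollout restart (my-app and default are placeholders; substitute your own Deployment and namespace):

    # Recreate the pods so they get addresses from the new pool:
    kubectl rollout restart deployment my-app -n default
    # Re-check the assigned addresses:
    calicoctl get wep --all-namespaces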

-- Abdennour TOUMI
Source: StackOverflow