Regarding the logs below (taken from kubectl describe pod), my Pods are stuck in the Pending state due to "FailedCreatePodSandBox".
Some key notes:
- I use Calico as the CNI.
- This log repeats multiple times; I pasted just one occurrence here as a sample.
- The IP 192.168.90.152 belongs to the ingress and .129 belongs to tiller in the monitoring namespace of the cluster, and I do not know why Kubernetes tries to bind them to another pod.
I googled this issue and found nothing, so here I am.
Warning FailedCreatePodSandBox 2m56s kubelet, worker-dev Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "2abca59b55efb476723ec9c4402ede6e3a6ee9aed67ecd19c3ef5c7719ae51f1" network for pod "service-stg-8d9d68475-2h4b8": NetworkPlugin cni failed to set up pod "service-stg-8d9d68475-2h4b8_stg" network: error adding host side routes for interface: cali670b0a20d66, error: route (Ifindex: 10688, Dst: 192.168.90.152/32, Scope: 253) already exists for an interface other than 'cali670b0a20d66'
Warning FailedCreatePodSandBox 2m53s kubelet, worker-dev Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "ec155fd442c0ea09b282a13c4399ae25b97d5c3786f90f1a045449b52ced4cb7" network for pod "service-stg-8d9d68475-2h4b8": NetworkPlugin cni failed to set up pod "service-stg-8d9d68475-2h4b8_stg" network: error adding host side routes for interface: cali670b0a20d66, error: route (Ifindex: 10691, Dst: 192.168.90.129/32, Scope: 253) already exists for an interface other than 'cali670b0a20d66'
Can anyone help with this issue?
By design of CNI network plugins, and in accordance with the Kubernetes network model, Calico defines a special IP pool CIDR (CALICO_IPV4POOL_CIDR) that determines which IP ranges are valid for allocating Pod IP addresses across the k8s cluster.
When you spin up a new Pod on a particular K8s node, the Calico plugin will roughly do the following:
- allocate an IP address for the Pod from the configured IP pool via Calico IPAM;
- create a veth pair, moving one end into the Pod's network namespace and leaving the host end on the node with a cali* name;
- program a host-side /32 route for the Pod's IP that points at that cali* interface.

This last step is where the error above comes from: the /32 route for the new Pod's IP (192.168.90.152 / .129) already exists on the node, but it is attached to a different interface, i.e. a stale route left behind by a previous Pod that used the same IP.
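On a healthy node, each Pod IP has such a /32 route pointing at its own cali* interface. A minimal illustration (the interface name is taken from the error above; the output shown is hypothetical, not from the poster's node):

$ ip route | grep cali
192.168.90.152 dev cali670b0a20d66 scope link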
You can fetch the data about the Calico virtual interfaces on the relevant node, e.g.:
$ ip link | grep cali
cali80d3ff89956@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP mode DEFAULT group default
calie58f9d521fb@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP mode DEFAULT group default
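You can also check which interface the conflicting route currently points at. The IP below is from the error message; the interface name in the sample output is a placeholder for whatever stale cali* device actually owns the route on your node:

$ ip route show | grep 192.168.90.152
192.168.90.152 dev cali1a2b3c4d5e6 scope link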
To investigate the issue further, you can inspect the calico-node Pod's container logs and search for entries about the affected Pod service-stg-8d9d68475-2h4b8, looking for the existing virtual interface mapping (note that the jsonpath below picks the first calico-node Pod in the list; make sure you query the one running on the node worker-dev):
kubectl logs $(kubectl get po -l k8s-app=calico-node -o jsonpath='{.items[0].metadata.name}' -n kube-system) -c calico-node -n kube-system | grep service-stg-8d9d68475-2h4b8_stg
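If the logs confirm that the conflicting route belongs to a Pod that no longer exists (for example the old ingress or tiller Pods mentioned in the question), one common remediation, offered here as an assumption rather than a verified fix for this exact cluster, is to delete the stale host-side route on the affected node and restart calico-node there so that Calico's Felix agent re-programs routes for the Pods actually running:

$ sudo ip route del 192.168.90.152/32
$ kubectl delete pod -n kube-system -l k8s-app=calico-node --field-selector spec.nodeName=worker-dev

After the calico-node Pod comes back up, Felix should recreate the correct /32 routes for live Pods, and the kubelet will retry sandbox creation for the pending Pod automatically.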