Pod cannot get IP in PodCIDR, get the docker ip

4/26/2017

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.1", GitCommit:"b0b7a323cc5a4a2019b2e9520c21c7830b7f708e", GitTreeState:"clean", BuildDate:"2017-04-03T20:44:38Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.1", GitCommit:"b0b7a323cc5a4a2019b2e9520c21c7830b7f708e", GitTreeState:"clean", BuildDate:"2017-04-03T20:33:27Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}

Environment:

- Cloud provider or hardware configuration:
- OS (e.g. from /etc/os-release): CentOS 7.2
- Kernel (e.g. uname -a): 4.9.0
- Install tools: bare metal

What happened: I added new nodes to an existing cluster and scheduled pods to them. The pods' IPs are not in the range of the PodCIDR; they use the docker bridge IP instead.

dl.240.172.hadoop.sjz   Ready,master   18d       v1.6.1
dl.245.0.hadoop.sjz     Ready          36m       v1.6.1
dl.245.1.hadoop.sjz     Ready          36m       v1.6.1
dl.245.11.hadoop.sjz    Ready          28m       v1.6.1
dl.245.12.hadoop.sjz    Ready          28m       v1.6.1
dl.245.13.hadoop.sjz    Ready          28m       v1.6.1
dl.245.14.hadoop.sjz    Ready          28m       v1.6.1
dl.245.15.hadoop.sjz    Ready          28m       v1.6.1
dl.245.16.hadoop.sjz    Ready          28m       v1.6.1
dl.245.17.hadoop.sjz    Ready          28m       v1.6.1
dl.245.18.hadoop.sjz    Ready          28m       v1.6.1
dl.245.19.hadoop.sjz    Ready          28m       v1.6.1
dl.245.2.hadoop.sjz     Ready          36m       v1.6.1
dl.245.3.hadoop.sjz     Ready          36m       v1.6.1
dl.245.5.hadoop.sjz     Ready          18d       v1.6.1
dl.245.6.hadoop.sjz     Ready          18d       v1.6.1
dl.245.7.hadoop.sjz     Ready          18d       v1.6.1
dl.245.8.hadoop.sjz     Ready          18d       v1.6.1
dl.245.9.hadoop.sjz     Ready          18d       v1.6.1
l22-240-170             Ready          2h        v1.6.1
l22-240-171             Ready,master   18d       v1.6.1

The following nodes are new:

dl.245.0.hadoop.sjz     Ready          36m       v1.6.1
dl.245.1.hadoop.sjz     Ready          36m       v1.6.1
dl.245.11.hadoop.sjz    Ready          28m       v1.6.1
dl.245.12.hadoop.sjz    Ready          28m       v1.6.1
dl.245.13.hadoop.sjz    Ready          28m       v1.6.1
dl.245.14.hadoop.sjz    Ready          28m       v1.6.1
dl.245.15.hadoop.sjz    Ready          28m       v1.6.1
dl.245.16.hadoop.sjz    Ready          28m       v1.6.1
dl.245.17.hadoop.sjz    Ready          28m       v1.6.1
dl.245.18.hadoop.sjz    Ready          28m       v1.6.1
dl.245.19.hadoop.sjz    Ready          28m       v1.6.1
dl.245.2.hadoop.sjz     Ready          36m       v1.6.1
dl.245.3.hadoop.sjz     Ready          36m       v1.6.1

And the pods:

auto-discovery-4253124847-h1ln1   1/1       Running   0          2h        10.244.124.200   l22-240-171
busybox                           1/1       Running   3          15d       10.244.71.2      dl.245.6.hadoop.sjz
gpu-test                          1/1       Running   4          15d       10.244.71.7      dl.245.6.hadoop.sjz
gpu-test1                         1/1       Running   3          13d       10.244.71.8      dl.245.6.hadoop.sjz
gpu-test1-1-2                     1/1       Running   3          10d       10.244.203.12    dl.245.8.hadoop.sjz
gpu-test1-1-3                     1/1       Running   2          9d        10.244.71.16     dl.245.6.hadoop.sjz
gpu-test12                        1/1       Running   3          11d       10.244.239.73    dl.245.9.hadoop.sjz
nginx-2970154533-wwb3l            1/1       Running   0          17m       172.17.0.2       dl.245.19.hadoop.sjz
test-ssh-1-lxfgq                  1/1       Running   0          9d        10.244.33.18     dl.245.5.hadoop.sjz
wanglinhong-mount-7jtv1           1/1       Running   0          2d        10.244.71.24     dl.245.6.hadoop.sjz
wanglinhong-test-wrvxv            1/1       Running   0          4d        10.244.239.94    dl.245.9.hadoop.sjz
wanglinhong-test3-8bnr1           1/1       Running   0          33m       172.17.0.2       dl.245.2.hadoop.sjz
wanglinhong-web-z37nb             1/1       Running   0          1d        10.244.203.31    dl.245.8.hadoop.sjz

The pods wanglinhong-test3-8bnr1 and nginx-2970154533-wwb3l were scheduled to the new nodes, and their IPs are 172.17.0.x (the docker bridge range).

The kubelet on every new node logs the following:

Apr 26 11:35:04 dl.245.11.hadoop.sjz kubelet[25363]: I0426 11:35:04.182974   25363 kuberuntime_manager.go:902] updating runtime config through cri with podcidr 10.244.48.0/24

Apr 26 11:35:04 dl.245.11.hadoop.sjz kubelet[25363]: I0426 11:35:04.183206   25363 docker_service.go:277] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.48.0/24,},}

Apr 26 11:35:04 dl.245.11.hadoop.sjz kubelet[25363]: I0426 11:35:04.183445   25363 kubelet_network.go:326] Setting Pod CIDR:  -> 10.244.x.0/24
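For anyone hitting the same symptom, a quick way to narrow it down is to compare the PodCIDR the controller-manager assigned to the node against the network plugin the kubelet was actually started with (the node name below is just one of the nodes from this cluster; substitute your own):

```shell
# PodCIDR assigned to the node by the controller-manager
kubectl get node dl.245.19.hadoop.sjz -o jsonpath='{.spec.podCIDR}'

# Network plugin the kubelet was started with; if
# --network-plugin=cni is absent, pods fall back to the
# docker0 bridge (172.17.0.0/16 by default)
ps aux | grep [k]ubelet | grep -o 'network-plugin=[^ ]*'
```

If the first command prints a 10.244.x.0/24 range but the second prints nothing, the kubelet is ignoring CNI and docker is handing out pod IPs, which matches the 172.17.0.x addresses seen above.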

Can anyone help me? I have spent two days trying to solve this. Thanks very much!

-- sope
calico
docker
kubernetes

1 Answer

4/27/2017

This problem has been solved, and I am closing this issue. The cause was that I had commented out the CNI config flags in kubelet.service on the nodes I added to the cluster later.
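For reference, the kubelet's CNI flags must be present (not commented out) in kubelet.service, or in its drop-in file, on every node. A sketch of the relevant unit lines, assuming the default CNI paths (adjust to your install):

```
ExecStart=/usr/bin/kubelet \
  --network-plugin=cni \
  --cni-conf-dir=/etc/cni/net.d \
  --cni-bin-dir=/opt/cni/bin \
  ...
```

After restoring the flags, reload and restart the kubelet (`systemctl daemon-reload && systemctl restart kubelet`); pods created on the node afterwards should receive IPs from the node's PodCIDR instead of the docker bridge.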

-- sope
Source: StackOverflow