I'm new to k8s, but I know that, as a k8s requirement, every Pod should be reachable from any other Pod. However, this is not happening in my setup: from within a Pod, I can't ping a Pod running on another Node.
Here is my setup:
I have one master node (sauron) and three workers (gothmog, angmar, khamul). I have installed the weave network via:
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
Here's the output of kubectl get pods -n kube-system -o wide:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-5644d7b6d9-bd5qn 1/1 Running 1 59d 10.38.0.2 angmar <none> <none>
etcd-sauron 1/1 Running 44 145d 192.168.201.207 sauron <none> <none>
kube-apiserver-sauron 1/1 Running 82 145d 192.168.201.207 sauron <none> <none>
kube-controller-manager-sauron 1/1 Running 393 145d 192.168.201.207 sauron <none> <none>
kube-proxy-p97vw 1/1 Running 1 134d 192.168.202.235 angmar <none> <none>
kube-proxy-pxpjm 1/1 Running 5 141d 192.168.201.209 gothmog <none> <none>
kube-proxy-rfvcv 1/1 Running 8 145d 192.168.201.207 sauron <none> <none>
kube-proxy-w6p74 1/1 Running 2 141d 192.168.201.213 khamul <none> <none>
kube-scheduler-sauron 1/1 Running 371 145d 192.168.201.207 sauron <none> <none>
weave-net-9sk7r 2/2 Running 0 16h 192.168.202.235 angmar <none> <none>
weave-net-khl69 2/2 Running 0 16h 192.168.201.207 sauron <none> <none>
weave-net-rsntg 2/2 Running 0 16h 192.168.201.213 khamul <none> <none>
weave-net-xk2w4 2/2 Running 0 16h 192.168.201.209 gothmog <none> <none>
Here's my deployment yaml file content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-deployment
  template:
    metadata:
      labels:
        app: my-deployment
    spec:
      containers:
      - name: my-image
        image: my-image:latest
        command: ["/bin/bash", "-c", "/opt/tools/bin/myapp"]
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 15113
        volumeMounts:
        - mountPath: /tmp
          name: tempdir
      imagePullSecrets:
      - name: registrypullsecret
      volumes:
      - name: tempdir
        emptyDir: {}
After applying the deployment via kubectl apply -f mydeployment.yaml, I verified that the pods started, but they just can't ping anything outside their own internal (pod) IP address.
# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
my-deployment-77bbb7579c-4cnsk 1/1 Running 0 110s 10.38.0.0 angmar <none> <none>
my-deployment-77bbb7579c-llm2x 1/1 Running 0 110s 10.44.0.2 khamul <none> <none>
my-deployment-77bbb7579c-wbbmv 1/1 Running 0 110s 10.32.0.2 gothmog <none> <none>
As if not being able to ping wasn't enough, the pod my-deployment-77bbb7579c-4cnsk running on angmar has the IP 10.38.0.0, which I find very odd... why is it like this?
Also, each of the containers has an /etc/resolv.conf with nameserver 10.96.0.10 in it, which is not reachable from within any of the containers/pods either.
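To be concrete, this is the kind of check that fails for me (a sketch; the pod name is one of mine from the listing above, and I'm assuming nslookup is available in the image):
kubectl exec -it my-deployment-77bbb7579c-wbbmv -- nslookup kubernetes.default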
What should I do to be able to ping 10.44.0.2 (the pod running on khamul) from, let's say, the pod on gothmog (10.32.0.2)?
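In other words, I would expect something like this to work (pod name and IP taken from the listing above; assuming ping is installed in the image):
kubectl exec -it my-deployment-77bbb7579c-wbbmv -- ping -c 3 10.44.0.2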
Update 1:
# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
angmar Ready <none> 134d v1.16.3 192.168.202.235 <none> CentOS Linux 7 (Core) 3.10.0-957.10.1.el7.x86_64 docker://1.13.1
gothmog Ready <none> 142d v1.16.2 192.168.201.209 <none> CentOS Linux 7 (Core) 3.10.0-957.10.1.el7.x86_64 docker://1.13.1
khamul Ready <none> 142d v1.16.2 192.168.201.213 <none> CentOS Linux 7 (Core) 3.10.0-957.10.1.el7.x86_64 docker://1.13.1
sauron Ready master 146d v1.16.2 192.168.201.207 <none> CentOS Linux 7 (Core) 3.10.0-957.10.1.el7.x86_64 docker://1.13.1
Some of the error output from the weave pod on each node follows. sauron (master):
INFO: 2020/04/08 21:52:31.042120 ->[192.168.202.235:6783|fe:da:ea:36:b0:ea(angmar)]: connection shutting down due to error: IP allocation was seeded by different peers (received: [22:eb:02:7c:57:6a(gothmog) e2:f6:ed:71:63:cb(khamul)], ours: [fe:5a:2a:52:86:22(sauron)])
INFO: 2020/04/08 21:52:33.675287 ->[192.168.201.209:6783] error during connection attempt: dial tcp :0->192.168.201.209:6783: connect: connection refused
INFO: 2020/04/08 21:52:34.992875 Error checking version: Get https://checkpoint-api.weave.works/v1/check/weave-net?arch=amd64&flag_docker-version=none&flag_kernel-version=3.10.0-957.10.1.el7.x86_64&flag_kubernetes-cluster-size=3&flag_kubernetes-cluster-uid=428158f7-f097-4627-9dc0-56f5d77a1b3e&flag_kubernetes-version=v1.16.3&flag_network=fastdp&os=linux&signature=TQKdZQISNAlRStpfj1Wvj%2BHWIBhqTt9XQ2czf6xSYNA%3D&version=2.6.2: dial tcp: i/o timeout
INFO: 2020/04/08 21:52:49.640011 ->[192.168.201.209:6783] error during connection attempt: dial tcp :0->192.168.201.209:6783: connect: connection refused
INFO: 2020/04/08 21:52:53.202321 ->[192.168.202.235:6783|fe:da:ea:36:b0:ea(angmar)]: connection shutting down due to error: IP allocation was seeded by different peers (received: [22:eb:02:7c:57:6a(gothmog) e2:f6:ed:71:63:cb(khamul)], ours: [fe:5a:2a:52:86:22(sauron)])
khamul (worker):
INFO: 2020/04/09 08:05:52.101683 ->[192.168.201.209:49220|22:eb:02:7c:57:6a(gothmog)]: connection shutting down due to error: IP allocation was seeded by different peers (received: [22:eb:02:7c:57:6a(gothmog) e2:f6:ed:71:63:cb(khamul)], ours: [fe:5a:2a:52:86:22(sauron)])
INFO: 2020/04/09 08:06:46.642090 ->[192.168.201.209:6783|22:eb:02:7c:57:6a(gothmog)]: connection shutting down due to error: no working forwarders to 22:eb:02:7c:57:6a(gothmog)
INFO: 2020/04/09 08:08:40.131015 ->[192.168.202.235:6783|fe:da:ea:36:b0:ea(angmar)]: connection shutting down due to error: IP allocation was seeded by different peers (received: [22:eb:02:7c:57:6a(gothmog) e2:f6:ed:71:63:cb(khamul)], ours: [fe:5a:2a:52:86:22(sauron)])
INFO: 2020/04/09 08:09:39.378853 Error checking version: Get https://checkpoint-api.weave.works/v1/check/weave-net?arch=amd64&flag_docker-version=none&flag_kernel-version=3.10.0-957.10.1.el7.x86_64&flag_kubernetes-cluster-size=3&flag_kubernetes-cluster-uid=428158f7-f097-4627-9dc0-56f5d77a1b3e&flag_kubernetes-version=v1.16.3&flag_network=fastdp&os=linux&signature=Oarh7uve3VP8qo%2BlVR6lukCi40hprasXxlwmmBYd5eI%3D&version=2.6.2: dial tcp: i/o timeout
INFO: 2020/04/09 08:09:48.873936 ->[192.168.201.209:6783|22:eb:02:7c:57:6a(gothmog)]: connection shutting down due to error: IP allocation was seeded by different peers (received: [22:eb:02:7c:57:6a(gothmog) e2:f6:ed:71:63:cb(khamul)], ours: [fe:5a:2a:52:86:22(sauron)])
INFO: 2020/04/09 08:11:18.666790 ->[192.168.201.209:45456|22:eb:02:7c:57:6a(gothmog)]: connection shutting down due to error: IP allocation was seeded by different peers (received: [22:eb:02:7c:57:6a(gothmog) e2:f6:ed:71:63:cb(khamul)], ours: [fe:5a:2a:52:86:22(sauron)])
gothmog (worker):
INFO: 2020/04/09 16:50:08.818956 ->[192.168.201.207:6783|fe:5a:2a:52:86:22(sauron)]: connection shutting down due to error: IP allocation was seeded by different peers (received: [fe:5a:2a:52:86:22(sauron)], ours: [22:eb:02:7c:57:6a(gothmog) e2:f6:ed:71:63:cb(khamul)])
INFO: 2020/04/09 16:52:52.751021 ->[192.168.201.213:54822|e2:f6:ed:71:63:cb(khamul)]: connection shutting down due to error: IP allocation was seeded by different peers (received: [fe:5a:2a:52:86:22(sauron)], ours: [22:eb:02:7c:57:6a(gothmog) e2:f6:ed:71:63:cb(khamul)])
INFO: 2020/04/09 16:53:18.934143 ->[192.168.201.207:34423|fe:5a:2a:52:86:22(sauron)]: connection shutting down due to error: no working forwarders to fe:5a:2a:52:86:22(sauron)
INFO: 2020/04/09 16:53:49.773876 ->[192.168.201.213:6783|e2:f6:ed:71:63:cb(khamul)]: connection shutting down due to error: IP allocation was seeded by different peers (received: [fe:5a:2a:52:86:22(sauron)], ours: [22:eb:02:7c:57:6a(gothmog) e2:f6:ed:71:63:cb(khamul)])
INFO: 2020/04/09 16:53:57.784587 ->[192.168.201.207:6783|fe:5a:2a:52:86:22(sauron)]: connection shutting down due to error: IP allocation was seeded by different peers (received: [fe:5a:2a:52:86:22(sauron)], ours: [22:eb:02:7c:57:6a(gothmog) e2:f6:ed:71:63:cb(khamul)])
angmar (worker):
INFO: 2020/04/09 16:01:46.081118 ->[192.168.201.207:51620|fe:5a:2a:52:86:22(sauron)]: connection shutting down due to error: IP allocation was seeded by different peers (received: [fe:5a:2a:52:86:22(sauron)], ours: [22:eb:02:7c:57:6a(gothmog) e2:f6:ed:71:63:cb(khamul)])
INFO: 2020/04/09 16:01:50.166722 ->[192.168.201.207:6783|fe:5a:2a:52:86:22(sauron)]: connection shutting down due to error: IP allocation was seeded by different peers (received: [fe:5a:2a:52:86:22(sauron)], ours: [22:eb:02:7c:57:6a(gothmog) e2:f6:ed:71:63:cb(khamul)])
INFO: 2020/04/09 16:06:48.277791 ->[192.168.201.213:34950|e2:f6:ed:71:63:cb(khamul)]: connection shutting down due to error: read tcp 192.168.202.235:6783->192.168.201.213:34950: read: connection reset by peer
INFO: 2020/04/09 16:07:13.270137 ->[192.168.201.207:58071|fe:5a:2a:52:86:22(sauron)]: connection shutting down due to error: IP allocation was seeded by different peers (received: [fe:5a:2a:52:86:22(sauron)], ours: [22:eb:02:7c:57:6a(gothmog) e2:f6:ed:71:63:cb(khamul)])
Update 2: All of the my-deployment pods (regardless of which node they are running on) contain this exact same /etc/resolv.conf file:
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local testnet.ssd.com
options ndots:5
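If I understand correctly, 10.96.0.10 should be the ClusterIP of the cluster DNS Service (still named kube-dns even when CoreDNS is in use), which can be double-checked with:
kubectl get svc -n kube-system kube-dns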
Thank you!
I solved the issue by logging into each worker node and doing the following:
rm /var/lib/weave/weave-netdata.db
reboot
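After the reboots I checked that the peers agreed on the IP allocation again, roughly like this (a sketch; as before I'm assuming the weave script lives at /home/weave/weave inside the weave container, and weave-net-khl69 is one of the weave-net pod names from the question; I believe status ipam works with --local, but double-check against the Weave docs):
kubectl exec -n kube-system weave-net-khl69 -c weave -- /home/weave/weave --local status ipam
After that, the "IP allocation was seeded by different peers" errors stopped appearing in the weave logs.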
Explanation:
My weave log files showed the excerpt:
INFO: 2020/04/08 21:52:31.042120 ->[192.168.202.235:6783|fe:da:ea:36:b0:ea(angmar)]: connection shutting down due to error: IP allocation was seeded by different peers (received: [22:eb:02:7c:57:6a(gothmog) e2:f6:ed:71:63:cb(khamul)], ours: [fe:5a:2a:52:86:22(sauron)])
The weave log output above was obtained with the following command:
kubectl logs -n kube-system <a-weave-pod-id> weave | grep -i error
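To check every node's weave pod at once, a loop like this should also work (assuming the weave-net DaemonSet pods carry the name=weave-net label, which is what the manifest applied in the question uses as far as I can tell):
for p in $(kubectl get pods -n kube-system -l name=weave-net -o name); do
  kubectl logs -n kube-system "$p" -c weave | grep -i error
done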
For reference, see here.
Thanks to everyone who chimed in, and special thanks to @kitt for providing the answer.