Kubernetes: can't access pod across multiple worker nodes

7/5/2020

I was following a tutorial on YouTube, and the presenter said that if you deploy your application in a cluster with multiple worker nodes and your Service is of type NodePort, you don't have to worry about which node your pod gets scheduled on. You can access it through any node's IP address, like

worker1IP:nodePort or worker2IP:nodePort or workerNIP:nodePort

But I just tried it, and this is not the case: I can only reach the pod through the IP of the node it is scheduled and deployed on. Is this the correct behavior?
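
For context, the Service was created with something like this (a sketch; my-app and the ports are placeholders, not my actual values):

kubectl expose deployment my-app --type=NodePort --port=80 --target-port=8080
kubectl get svc my-app    # shows the nodePort assigned from 30000-32767

# per the tutorial, this should then work from any node:
curl http://<worker1IP>:<nodePort>
curl http://<worker2IP>:<nodePort>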

kubectl version --short 
> Client Version: v1.18.5 
> Server Version: v1.18.5

kubectl get pods -n kube-system

NAME                                    READY   STATUS             RESTARTS   AGE
coredns-66bff467f8-6pt8s                0/1     Running            288        7d22h
coredns-66bff467f8-t26x4                0/1     Running            288        7d22h
etcd-redhat-master                      1/1     Running            16         7d22h
kube-apiserver-redhat-master            1/1     Running            17         7d22h
kube-controller-manager-redhat-master   1/1     Running            19         7d22h
kube-flannel-ds-amd64-9mh6k             1/1     Running            16         5d22h
kube-flannel-ds-amd64-g2k5c             1/1     Running            16         5d22h
kube-flannel-ds-amd64-rnvgb             1/1     Running            14         5d22h
kube-proxy-gf8zk                        1/1     Running            16         7d22h
kube-proxy-wt7cp                        1/1     Running            9          7d22h
kube-proxy-zbw4b                        1/1     Running            9          7d22h
kube-scheduler-redhat-master            1/1     Running            18         7d22h
weave-net-6jjd8                         2/2     Running            34         7d22h
weave-net-ssqbz                         1/2     CrashLoopBackOff   296        7d22h
weave-net-ts2tj                         2/2     Running            34         7d22h

[root@redhat-master deployments]# kubectl logs weave-net-ssqbz -c weave -n kube-system
DEBU: 2020/07/05 07:28:04.661866 [kube-peers] Checking peer "b6:01:79:66:7d:d3" against list &{[{e6:c9:b2:5f:82:d1 redhat-master} {b2:29:9a:5b:89:e9 redhat-console-1} {e2:95:07:c8:a0:90 redhat-console-2}]}
Peer not in list; removing persisted data
INFO: 2020/07/05 07:28:04.924399 Command line options: map[conn-limit:200 datapath:datapath db-prefix:/weavedb/weave-net docker-api: expect-npc:true host-root:/host http-addr:127.0.0.1:6784 ipalloc-init:consensus=2 ipalloc-range:10.32.0.0/12 metrics-addr:0.0.0.0:6782 name:b6:01:79:66:7d:d3 nickname:redhat-master no-dns:true port:6783]
INFO: 2020/07/05 07:28:04.924448 weave  2.6.5
FATA: 2020/07/05 07:28:04.938587 Existing bridge type "bridge" is different than requested "bridged_fastdp". Please do 'weave reset' and try again
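
Based on that last line, I assume the fix is to reset Weave's persisted state on that node and let the DaemonSet recreate the pod. A sketch of what that could look like (the DB path and bridge name are Weave's defaults and may differ on your install):

# on the node running the crashing weave-net pod:
rm -f /var/lib/weave/weave-netdata.db    # Weave's persisted peer data
ip link delete weave                     # remove the stale bridge, if present

# from the master, delete the pod so the DaemonSet recreates it:
kubectl delete pod weave-net-ssqbz -n kube-system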

Update: So basically the issue is that RHEL 8 deprecates the legacy iptables backend (it uses nftables instead). But even after downgrading my OS to RHEL 7, I can still only access the NodePort on the node where the pod is deployed.
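
One thing I still want to rule out is firewalld blocking inter-node traffic. A sketch of what I plan to open (the NodePort range and CNI ports below are the Kubernetes, Flannel, and Weave defaults; adjust to your setup):

# run on every node:
firewall-cmd --permanent --add-port=30000-32767/tcp    # default NodePort range
firewall-cmd --permanent --add-port=8472/udp           # Flannel VXLAN
firewall-cmd --permanent --add-port=6783/tcp           # Weave control
firewall-cmd --permanent --add-port=6783-6784/udp      # Weave data
firewall-cmd --reload

# then test the NodePort from another machine:
curl http://<workerIP>:<nodePort>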

-- Ruelos Joel
kubernetes

0 Answers