netcat listening pod in kubernetes namespace unable to connect

12/12/2020

I am running Kubernetes v1.19.4 with weave-net (image: weaveworks/weave-npc:2.7.0). There are no network policies active in the default namespace.

I want to run a netcat listener on pod1 on port 8080 and connect to it from pod2.

[root@node01 ~]# kubectl run pod1 -i -t --image=ubuntu -- /bin/bash
If you don't see a command prompt, try pressing enter.
root@pod1:/# apt update ; apt install netcat-openbsd -y            
........
root@pod1:/# nc -l -p 8080
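
As an aside, the OpenBSD netcat installed above also accepts -k to keep listening after a client disconnects, which saves restarting the listener between tests (a minor variation, not required for the problem below):

root@pod1:/# nc -l -k -p 8080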

I verify that the port is listening on pod1:

[root@node01 ~]# kubectl exec -i -t pod1 -- /bin/bash 
root@pod1:/# apt install net-tools -y 
...........
root@pod1:/# netstat -tulpen  
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       User       Inode      PID/Program name    
tcp        0      0 0.0.0.0:8080            0.0.0.0:*               LISTEN      0          213960     263/nc

root@pod1:/# ifconfig             
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1376
        inet 10.32.0.3  netmask 255.240.0.0  broadcast 10.47.255.255
        ether a2:b9:3e:bc:6e:25  txqueuelen 0  (Ethernet)
        RX packets 8429  bytes 17438639 (17.4 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4217  bytes 284639 (284.6 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
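
The pod IP seen above can also be cross-checked from the node itself (a quick check, assuming pod1 lives in the default namespace):

[root@node01 ~]# kubectl get pod pod1 -o wide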

              

I install pod2 with netcat on it:

[root@node01 ~]# kubectl run pod2 -i -t --image=ubuntu -- /bin/bash 
If you don't see a command prompt, try pressing enter.
root@pod2:/# apt update ; apt install netcat-openbsd -y 

I test my netcat listener on pod1 from pod2:

root@pod2:/# nc 10.32.0.3 8080
....times out
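
For completeness, the same test can be retried with verbose output and an explicit connect timeout (a sketch; -v, -z and -w are standard OpenBSD netcat flags):

root@pod2:/# nc -vz -w 3 10.32.0.3 8080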

So I decided to create a service for port 8080 on pod1:

[root@node01 ~]# kubectl expose pod pod1 --port=8080 ; kubectl get svc ; kubectl get netpol 
service/pod1 exposed
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
apache       ClusterIP   10.104.218.123   <none>        80/TCP     20d
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP    21d
nginx        ClusterIP   10.98.221.196    <none>        80/TCP     13d
pod1         ClusterIP   10.105.194.196   <none>        8080/TCP   2s
No resources found in default namespace.
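
For reference, the service created by kubectl expose should be roughly equivalent to this manifest (a sketch; the run=pod1 selector assumes the default label that kubectl run puts on the pod):

apiVersion: v1
kind: Service
metadata:
  name: pod1
spec:
  selector:
    run: pod1    # label added by kubectl run
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080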

Retry from pod2, now via the service:

root@pod2:/# ping pod1   
PING pod1.default.svc.cluster.local (10.105.194.196) 56(84) bytes of data.

root@pod2:/# nc pod1 8080
....times out

I also tried this with the regular netcat package.
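
One way to rule out a selector problem is to check whether the service actually lists the pod as an endpoint (a quick check, assuming the service name pod1 from above):

[root@node01 ~]# kubectl get endpoints pod1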

For good measure, I try to expose port 8080 on the pod as a NodePort:

[root@node01 ~]# kubectl delete svc pod1 ; kubectl expose pod pod1 --port=8080 --type=NodePort ; kubectl get svc 

When I try to access that port from outside Kubernetes, I am unable to connect. For good measure, I also test the SSH port to verify that my base connectivity is OK:
user@DESKTOP-7TIH9:~$ nc -zv 10.10.70.112 30743
nc: connect to 10.10.70.112 port 30743 (tcp) failed: Connection refused
user@DESKTOP-7TIH9:~$ nc -zv 10.10.70.112 22
Connection to 10.10.70.112 22 port [tcp/ssh] succeeded!
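
For completeness, the NodePort tested above can be read back from the service (a sketch, assuming the service is still named pod1):

[root@node01 ~]# kubectl get svc pod1 -o jsonpath='{.spec.ports[0].nodePort}'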

Can anybody tell me if I am doing something wrong or have the wrong expectations, or advise me how to proceed? Thank you in advance.

-- Arch
kubernetes
netcat

1 Answer

12/21/2020

While trying to solve this, I somehow decided to enable the firewall on the k8s hosts. This led me to a broken cluster. I decided to re-init the cluster and make sure all the firewall ports are opened, including the ones listed here: https://www.weave.works/docs/net/latest/faq#ports
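
For anyone on firewalld-based hosts, the Weave Net ports from that FAQ can be opened along these lines (a sketch; treat the FAQ page above as the authoritative list):

[root@node01 ~]# firewall-cmd --permanent --add-port=6783/tcp
[root@node01 ~]# firewall-cmd --permanent --add-port=6783/udp
[root@node01 ~]# firewall-cmd --permanent --add-port=6784/udp
[root@node01 ~]# firewall-cmd --reload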

All is working now!

-- Arch
Source: StackOverflow