ipvsadm not showing any entry in kubeadm cluster

7/14/2018

I have installed a cluster with kubeadm and created a deployment and a service:

packet@test:~$ kubectl get pod
NAME                                   READY     STATUS    RESTARTS   AGE
udp-server-deployment-6f87f5c9-466ft   1/1       Running   0          5m
udp-server-deployment-6f87f5c9-5j9rt   1/1       Running   0          5m
udp-server-deployment-6f87f5c9-g9wrr   1/1       Running   0          5m
udp-server-deployment-6f87f5c9-ntbkc   1/1       Running   0          5m
udp-server-deployment-6f87f5c9-xlbjq   1/1       Running   0          5m    

packet@test:~$ kubectl get service
NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE
kubernetes           ClusterIP   10.96.0.1       <none>        443/TCP           1h
udp-server-service   NodePort    10.102.67.0     <none>        10001:30001/UDP   6m

but I am still not able to reach the udp-server pods through the NodePort:

packet@test:~$ curl http://192.168.43.161:30001
curl: (7) Failed to connect to 192.168.43.161 port 30001: Connection refused 

While debugging I can see that kube-proxy is running, but there are no entries in IPVS:

root@test:~# ps auxw | grep kube-proxy
root      4050  0.5  0.7  44340 29952 ?        Ssl  14:33   0:25 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf
root      6094  0.0  0.0  14224   968 pts/1    S+   15:48   0:00 grep --color=auto kube-proxy

root@test:~# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn

It seems there are no entries in ipvsadm, which is why the connection is refused.
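
For reference, a quick way to check which proxy mode kube-proxy actually came up in (the ConfigMap name and the k8s-app=kube-proxy label assume a default kubeadm install):

kubectl -n kube-system get configmap kube-proxy -o yaml | grep "mode:"   # proxy mode configured by kubeadm
kubectl -n kube-system logs -l k8s-app=kube-proxy                        # kube-proxy reports which proxier it is using
iptables -t nat -L KUBE-NODEPORTS -n                                     # in iptables mode the NodePort rules live here, not in IPVS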

Regards, Ranjith

-- Ranjith Koova
kubeadm
kubectl
kubernetes
sockets
udp

3 Answers

7/14/2018

Since curl uses TCP, while 30001 is a UDP port, they don't work together. Try a UDP probe tool such as nmap.
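
For example (a rough sketch using the node IP and NodePort from the question; a UDP scan with nmap needs root, and netcat is an alternative if it is installed):

sudo nmap -sU -p 30001 192.168.43.161          # probe the UDP NodePort
echo "hello" | nc -u -w1 192.168.43.161 30001  # or send a test datagram with netcat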

-- Kun Li
Source: StackOverflow

7/14/2018

From this issue (putting aside the load balancer part),

Both externalIPs and status.loadBalancer.ingress[].ip seem to be ignored by kube-proxy in IPVS mode, so external traffic is completely unrouteable.

In contrast, kube-proxy in iptables mode creates DNAT/SNAT rules for external and loadbalancer IPs.

So check whether adding a network plugin (Flannel, Calico, ...) improves the situation.
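
For instance, a kubeadm cluster generally needs a CNI plugin applied before pod networking and service routing work end to end; a sketch using Flannel (the manifest URL below was the commonly used one at the time and may have moved, so check the Flannel README):

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl -n kube-system get pods -o wide    # confirm the flannel pods come up on every node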

Or check out cloudnativelabs/kube-router, which is also ipvs-based.

A lean yet powerful alternative to several network components used in typical Kubernetes clusters.
All this from a single DaemonSet/Binary. It doesn't get any easier.

-- VonC
Source: StackOverflow

7/17/2018

Initially I created the VM (a Linux VM) using VirtualBox (running on Windows), and that is where I hit this issue.

Now I have created the VM (a Linux VM) using Virtual Machine Manager (running on Linux); in this setup there is no issue and everything works fine.

It would be great if anyone could tell me whether there is some restriction in VirtualBox that causes this.

-- Ranjith Koova
Source: StackOverflow