I deployed the simple UDP server example, but I can see incoming traffic on only 1 of the 5 pods (checked with kubectl logs udp-server-deployment-XXX). I tried it on Azure aks-engine. Why is it never load balancing?
$ kubectl get pods | grep udp-server
udp-server-deployment-6f87f5c9-4mhpm 1/1 Running 0 4m
udp-server-deployment-6f87f5c9-5lqkm 1/1 Running 0 4m
udp-server-deployment-6f87f5c9-5x92x 1/1 Running 0 4m
udp-server-deployment-6f87f5c9-smb8g 1/1 Running 0 4m
udp-server-deployment-6f87f5c9-tszgs 1/1 Running 0 4m
It doesn't help whether I load balance on a public IP or an internal one (service.beta.kubernetes.io/azure-load-balancer-internal: "true"). Try it yourself if you have loggen (the Linux syslog generator) handy:
git clone https://github.com/jpoon/kubernetes-udp.git
cd kubernetes-udp
kubectl create -f server.yaml
loggen --inet --dgram --size 300 --rate 10 --interval 10 <IPAddress of udp-server-service> 10001
kubectl logs udp-server-deployment-6f87f5c9-xxx (repeat for each of the 5 pods)
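For context, the Service in server.yaml presumably looks roughly like this. This is a hedged sketch, not copied from the jpoon/kubernetes-udp repo; the selector, names, and annotation are assumptions:

```yaml
# Sketch of a UDP LoadBalancer Service (field values are assumptions,
# not copied from the jpoon/kubernetes-udp repo)
apiVersion: v1
kind: Service
metadata:
  name: udp-server-service
  annotations: {}
    # uncomment to use Azure's internal load balancer instead of a public IP:
    # service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: udp-server        # must match the Deployment's pod labels
  ports:
    - protocol: UDP        # UDP must be set explicitly; TCP is the default
      port: 10001
      targetPort: 10001
```

Note that protocol: UDP has to be stated explicitly on the port, since Kubernetes defaults to TCP.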
I can only confirm from my side that I share the same observation: load balancing does not kick in when using the UDP protocol, regardless of the Service type (ClusterIP, NodePort, or LoadBalancer). I checked this on both Azure and GCP.
I think it's all down to the fact that:
A UDP server can’t act as health probe in itself, because UDP doesn’t have acknowledgements that the load balancer could check so an additional component is needed
as mentioned in this blog post.
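Regarding that "additional component": a common workaround is to run a small TCP listener next to the UDP workload and point the load balancer's health probe at it, since accepting a TCP connection gives the probe the acknowledgement that UDP lacks. A minimal sketch in Python; the ports, names, and echo behaviour are all illustrative assumptions, not taken from the linked blog post:

```python
import socket
import threading

# Hedged sketch: a UDP workload paired with a TCP health endpoint.
# UDP has no acknowledgements a probe could check, so liveness is
# signalled by the TCP side instead. All ports/names are illustrative.

def start_udp_echo(port):
    """UDP workload: echo each datagram back to the sender."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", port))
    def loop():
        while True:
            data, addr = sock.recvfrom(4096)
            sock.sendto(data, addr)
    threading.Thread(target=loop, daemon=True).start()
    return sock.getsockname()[1]

def start_tcp_health(port):
    """Health endpoint: a successful TCP accept + 'OK' reply is the signal."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", port))
    srv.listen()
    def loop():
        while True:
            conn, _ = srv.accept()
            conn.sendall(b"OK\n")
            conn.close()
    threading.Thread(target=loop, daemon=True).start()
    return srv.getsockname()[1]

udp_port = start_udp_echo(0)       # port 0 -> OS picks a free port
health_port = start_tcp_health(0)

# Simulate the load balancer's health probe over TCP...
probe = socket.create_connection(("127.0.0.1", health_port))
print(probe.recv(16).decode().strip())  # OK
probe.close()

# ...and a client datagram to the UDP workload.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"ping", ("127.0.0.1", udp_port))
reply, _ = client.recvfrom(4096)
print(reply.decode())  # ping
```

In a pod this would typically be a sidecar container, with the Deployment's readiness/liveness probe (or the cloud LB's probe) aimed at the TCP port while the Service forwards UDP traffic to the workload port.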