I have a (working) test program that sends and receives messages over UDP multicast. I've successfully deployed it to a Kubernetes cluster and demonstrated two pods communicating with one another. The only catch is that I need to add hostNetwork: true to the pod specs. As I understand it, this disables all the network virtualization that would otherwise be available. I've also tried:
```yaml
- containerPort: 12345
  hostPort: 12345
  protocol: UDP
```
but when I use that without hostNetwork, communication fails.
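For reference, this is roughly the pod spec that does work for me; it's a minimal sketch, and the pod name, container name, and image are placeholders (the port is the one from my test program):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: multicast-test            # placeholder name
spec:
  hostNetwork: true               # the setting in question
  containers:
    - name: sender-receiver       # placeholder name
      image: example/multicast-test:latest   # placeholder image
      ports:
        - containerPort: 12345
          protocol: UDP
```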
Is there a way to get this working whilst still being able to use the normal network for everything else? (We're unlikely to want to switch the network layer to something like Weave.)
Using hostNetwork: true is appropriate when you need direct access from the Pod to the Node's network interfaces, but it brings some restrictions when your application is hosted on several Nodes: every time Kubernetes restarts the Pod, it may be scheduled onto a different Node, so the IP address your application is reachable at can change. Moreover, hostNetwork creates port-collision problems when you plan to scale your application within the Kubernetes cluster, and it is therefore not recommended, particularly when bootstrapping a Kubernetes cluster in cloud environments.
If you opt out of the overlay network that is a significant part of the cluster networking model for Pod communication, you can also lose some essential benefits, such as DNS resolution (CoreDNS, kube-dns).
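To illustrate what that DNS feature gives you: any Service gets an in-cluster name that other pods can resolve. A minimal sketch, where the Service name, namespace, and selector label are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: udp-test          # hypothetical Service name
  namespace: default
spec:
  selector:
    app: multicast-test   # hypothetical pod label
  ports:
    - port: 12345
      protocol: UDP
# With CoreDNS/kube-dns, this Service is resolvable inside the cluster as:
#   udp-test.default.svc.cluster.local
```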
You could try using a NodePort Service instead. A NodePort Service proxies the target application port on each Node, so it may be worth checking whether it fits your requirements; however, without knowing more about your application's deployment composition and network specification, I can't suggest a more advanced solution.
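As a starting point, here is a minimal sketch of a NodePort Service for the UDP port from your question; the Service name, the selector label, and the specific nodePort value are assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: multicast-nodeport      # hypothetical name
spec:
  type: NodePort
  selector:
    app: multicast-test         # must match your pod labels (assumed)
  ports:
    - port: 12345               # Service port
      targetPort: 12345         # containerPort from your pod spec
      nodePort: 30345           # assumed value; must fall in the default 30000-32767 range
      protocol: UDP
```

With this in place, the application is reachable on UDP port 30345 of every Node's IP, while the pods themselves keep using the normal cluster network.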