So I have a deployment which creates N worker pods, and a service which internally balances traffic to them. I access the service from a VM. The VM makes a request, and the pod responds with its hostname so that the VM can make a direct connection to it (this is used for pulling the results back from the pod that actually did the work).
The problem I'm having is that my pod returns a hostname like my-pod-5ff75ddd86-2xdjq, which the VM cannot resolve. I'm wondering if it's possible to set the pod's hostname to its IP, as this would mean I don't have to change any code in my pod or in the tool running on my VM.
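For reference, here is a minimal sketch of the setup described above; the Deployment name my-pod is inferred from the generated pod name, while the image and port are purely hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-pod                 # assumed name, matching the generated pod name above
spec:
  replicas: 3                  # the "N" worker pods
  selector:
    matchLabels:
      app: my-pod
  template:
    metadata:
      labels:
        app: my-pod
    spec:
      containers:
      - name: worker
        image: my-worker:latest     # hypothetical worker image
        ports:
        - containerPort: 8080       # hypothetical port the worker answers on
---
apiVersion: v1
kind: Service
metadata:
  name: my-pod
spec:
  selector:
    app: my-pod
  ports:
  - port: 8080
    targetPort: 8080
```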
If you don't want to change your code, you need to expose the cluster's internal kube-dns service and make it the default DNS server of your VM.
This other question has some info on that:
How to expose kube-dns service for queries outside cluster?
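The gist of that approach is putting a Service of your own in front of kube-dns. A minimal sketch, assuming your cluster can provision a LoadBalancer (a NodePort works the same way); the Service name is illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kube-dns-external      # hypothetical name
  namespace: kube-system
spec:
  type: LoadBalancer
  selector:
    k8s-app: kube-dns          # label used by the kube-dns/CoreDNS pods
  ports:
  - name: dns-udp
    port: 53
    targetPort: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    targetPort: 53
    protocol: TCP
```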
The ingress-nginx documentation has a good example of exposing port 53 over UDP:
https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/
With an nginx ingress controller up and running, you have to create an Ingress to be handled by that controller and simply add this ConfigMap:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
data:
  53: "kube-system/kube-dns:53"
```
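Per the linked docs, the controller's own Service also has to expose the UDP port so traffic can actually reach it (and the controller must be started with the --udp-services-configmap flag pointing at that ConfigMap, if your install doesn't already set it). A sketch of the extra port entry, assuming the default ingress-nginx Service name and namespace:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: proxied-udp-53       # forwards DNS queries to kube-dns via the ConfigMap above
    port: 53
    targetPort: 53
    protocol: UDP
```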
Depending on your cluster setup, your VM probably won't be able to connect to the pod even with its IP. By default, pod IPs are on an overlay network that is only accessible from inside the cluster.
If the VM is in the cluster, are you sure you're referencing the pod's hostname correctly against kube-dns? Here are the docs on DNS for pods.
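If you do want per-pod DNS names, the mechanism those docs describe is a headless Service combined with hostname and subdomain on the pod spec. A hedged sketch with illustrative names:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: workers                # must match the pods' subdomain
spec:
  clusterIP: None              # headless: gives pods their own DNS records
  selector:
    app: my-pod
  ports:
  - port: 8080
---
apiVersion: v1
kind: Pod
metadata:
  name: worker-0
  labels:
    app: my-pod
spec:
  hostname: worker-0           # resolvable as worker-0.workers.<namespace>.svc.cluster.local
  subdomain: workers
  containers:
  - name: worker
    image: my-worker:latest    # hypothetical worker image
```

Note that this works per pod, so Deployment-managed replicas would all share whatever hostname is set in the pod template; a StatefulSet is the usual way to get stable, distinct per-replica names.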