I am trying to get DNS pod name resolution working on my EKS Kubernetes cluster (v1.10.3). My understanding is that creating a headless service will create the pod name records I need, but I'm finding this is not true. Am I missing something?
I'm also open to other ideas on how to get this working; I could not find an alternative solution.
I wasn't really clear enough. Essentially, what I need is for pod names to resolve like this:
worker-767cd94c5c-c5bq7 -> 10.0.10.10
worker-98dcd94c5d-cabq6 -> 10.0.10.11
and so on. I don't really need round-robin DNS; I just read somewhere that this could be a workaround. Thanks!
# my service
apiVersion: v1
kind: Service
metadata:
  ...
  name: worker
  namespace: airflow-dev
  resourceVersion: "374341"
  selfLink: /api/v1/namespaces/airflow-dev/services/worker
  uid: 814251ac-acbe-11e8-995f-024f412c6390
spec:
  clusterIP: None
  ports:
  - name: worker
    port: 8793
    protocol: TCP
    targetPort: 8793
  selector:
    app: airflow
    tier: worker
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
# my pod
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: 2018-08-31T01:39:37Z
  generateName: worker-69887d5d59-
  labels:
    app: airflow
    pod-template-hash: "2544381815"
    tier: worker
  name: worker-69887d5d59-6b6fc
  namespace: airflow-dev
  ownerReferences:
  - apiVersion: extensions/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: worker-69887d5d59
    uid: 16019507-ac6b-11e8-995f-024f412c6390
  resourceVersion: "372954"
  selfLink: /api/v1/namespaces/airflow-dev/pods/worker-69887d5d59-6b6fc
  uid: b8d82a6b-acbe-11e8-995f-024f412c6390
spec:
  containers:
  - ...
    ...
    name: worker
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    ...
    ...
  dnsPolicy: ClusterFirst
  nodeName: ip-10-0-1-226.us-west-2.compute.internal
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: airflow
  serviceAccountName: airflow
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  ...
  ...
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2018-08-31T01:39:37Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2018-08-31T01:39:40Z
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2018-08-31T01:39:37Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - ...
    ...
    lastState: {}
    name: worker
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2018-08-31T01:39:39Z
  hostIP: 10.0.1.226
  phase: Running
  podIP: 10.0.1.234
  qosClass: BestEffort
  startTime: 2018-08-31T01:39:37Z
# querying the service dns record works!
airflow@worker-69887d5d59-6b6fc:~$ nslookup worker.airflow-dev.svc.cluster.local
Server: 172.20.0.10
Address: 172.20.0.10#53
Name: worker.airflow-dev.svc.cluster.local
Address: 10.0.1.234
# querying the pod name does not work :(
airflow@worker-69887d5d59-6b6fc:~$ nslookup worker-69887d5d59-6b6fc.airflow-dev.svc.cluster.local
Server: 172.20.0.10
Address: 172.20.0.10#53
** server can't find worker-69887d5d59-6b6fc.airflow-dev.svc.cluster.local: NXDOMAIN
airflow@worker-69887d5d59-6b6fc:~$ nslookup worker-69887d5d59-6b6fc.airflow-dev.pod.cluster.local
Server: 172.20.0.10
Address: 172.20.0.10#53
*** Can't find worker-69887d5d59-6b6fc.airflow-dev.pod.cluster.local: No answer
For in-cluster (internal) resolution, I suggest using the service DNS record to reach the pods, which you already confirmed works. Note that you do not need a headless service to use service DNS.
The kube-dns automatic records work in the following way:
pod -> service in the same namespace: curl http://servicename
pod -> service in a different namespace: curl http://servicename.namespace
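For example, using the worker service from this question (a quick sketch; it assumes curl is available in the pod image and that the worker is actually serving HTTP on port 8793):
# from a pod in the same namespace (airflow-dev)
curl http://worker:8793
# from a pod in any other namespace
curl http://worker.airflow-dev:8793
# fully qualified form, works from anywhere in the cluster
curl http://worker.airflow-dev.svc.cluster.local:8793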
Read more about service discovery here: https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables
and about DNS records for Services here: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#services
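If you truly need per-pod records under the service domain, note that kube-dns only creates a record of the form <hostname>.<subdomain>.<namespace>.svc.cluster.local when the pod spec sets hostname and subdomain, with subdomain equal to the headless service's name. A headless service alone is not enough, which is why your pod-name lookups returned NXDOMAIN. A minimal sketch (the pod name, hostname, and image below are placeholders, not taken from your manifests):
# hypothetical pod that would resolve as
# worker-0.worker.airflow-dev.svc.cluster.local
apiVersion: v1
kind: Pod
metadata:
  name: worker-0
  namespace: airflow-dev
  labels:
    app: airflow
    tier: worker            # must match the headless service's selector
spec:
  hostname: worker-0        # becomes the leftmost DNS label
  subdomain: worker         # must equal the headless service's name
  containers:
  - name: worker
    image: apache/airflow   # placeholder image, for illustration only
    ports:
    - containerPort: 8793
Because every replica of a Deployment shares the same pod template, a Deployment cannot give each replica a unique hostname; if you need stable, unique per-pod DNS names, a StatefulSet (which sets hostname and subdomain automatically from its serviceName) is the usual approach.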
If you need custom name resolution externally, I recommend using nginx-ingress:
https://github.com/helm/charts/tree/master/stable/nginx-ingress
https://github.com/kubernetes/ingress-nginx
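As a rough sketch of what that could look like for this service (the external hostname worker.example.com and the ingress class annotation are assumptions, not part of your setup; extensions/v1beta1 is the Ingress API group current for Kubernetes 1.10):
# hypothetical Ingress exposing the worker service externally via nginx-ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: worker
  namespace: airflow-dev
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: worker.example.com      # assumed external name
    http:
      paths:
      - path: /
        backend:
          serviceName: worker
          servicePort: 8793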