I am creating two pods from a custom Docker image (Ubuntu is the base image) and trying to ping one pod from the other's terminal. I can reach a pod by its IP address but not by its hostname. How can I achieve this without manually editing /etc/hosts in the pods?
Note: I am not running any services on the node. I am basically trying to set up Slurm using this.
Pod Manifest File:
apiVersion: v1
kind: Pod
metadata:
  name: slurmctld
  labels:
    app: slurm
spec:
  nodeName: docker-desktop
  hostname: slurmctld
  containers:
  - name: slurmctld
    image: slurmcontroller
    imagePullPolicy: Always
    ports:
    - containerPort: 6817
    resources:
      requests:
        memory: "1000Mi"
        cpu: "1000m"
      limits:
        memory: "1500Mi"
        cpu: "1500m"
    command: [ "/bin/bash", "-c", "--" ]
    args: [ "while true; do sleep 30; done;" ]
---
apiVersion: v1
kind: Pod
metadata:
  name: worker1
  labels:
    app: slurm
spec:
  nodeName: docker-desktop
  hostname: worker1
  containers:
  - name: worker1
    image: slurmworker
    imagePullPolicy: Always
    ports:
    - containerPort: 6818
    resources:
      requests:
        memory: "1000Mi"
        cpu: "1000m"
      limits:
        memory: "1500Mi"
        cpu: "1500m"
    command: [ "/bin/bash", "-c", "--" ]
    args: [ "while true; do sleep 30; done;" ]
From the docs here:

In general a pod has the following DNS resolution:
pod-ip-address.my-namespace.pod.cluster-domain.example
For example, if a pod in the default namespace has the IP address 172.17.0.3, and the domain name for your cluster is cluster.local, then the Pod has a DNS name:
172-17-0-3.default.pod.cluster.local
Any pods created by a Deployment or DaemonSet exposed by a Service have the following DNS resolution available:
pod-ip-address.deployment-name.my-namespace.svc.cluster-domain.example
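The pod-IP DNS name above is purely mechanical to derive: the dots in the IP become dashes, and the namespace and cluster domain are appended. A small sketch to make the pattern concrete (the function name `pod_dns_name` and its defaults are my own for illustration, not a Kubernetes API):

```python
def pod_dns_name(pod_ip: str, namespace: str = "default",
                 cluster_domain: str = "cluster.local") -> str:
    """Build a pod's IP-based DNS name: dots in the IP become dashes."""
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.{cluster_domain}"

print(pod_dns_name("172.17.0.3"))
# -> 172-17-0-3.default.pod.cluster.local
```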
If you don't want to deal with the ever-changing IP of a pod, you need to create a Service to expose the pods via DNS hostnames. Below is an example of a Service to expose the slurmctld pod.
apiVersion: v1
kind: Service
metadata:
  name: slurmctld-service
spec:
  selector:
    app: slurm
  ports:
  - protocol: TCP
    port: 80
    targetPort: 6817
Assuming you are doing this in the default namespace, you should now be able to access it via slurmctld-service.default.svc.cluster.local. Note that both of your pods carry the label app: slurm, so this selector actually matches both of them; if the Service should target only the controller, give the slurmctld pod a unique label (for example app: slurmctld) and select on that instead.
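Since Slurm needs the nodes to reach each other by their actual hostnames (slurmctld, worker1) rather than through a single Service IP, a headless Service combined with the pods' subdomain field may fit better: each pod then gets its own DNS record of the form hostname.subdomain.namespace.svc.cluster-domain. A sketch, assuming a headless Service named slurm (my choice of name) and that each pod spec additionally sets subdomain: slurm next to its hostname field:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: slurm         # must match the pods' spec.subdomain
spec:
  clusterIP: None     # headless: DNS returns pod IPs directly
  selector:
    app: slurm        # matches both slurmctld and worker1
  ports:
  - protocol: TCP
    port: 6817
```

With subdomain: slurm added under each pod's spec, the controller becomes resolvable as slurmctld.slurm.default.svc.cluster.local and the worker as worker1.slurm.default.svc.cluster.local, with no /etc/hosts edits in the pods.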