I hope everyone here is doing well. I am trying to find a way to add entries to a container's /etc/hosts file while spinning up a pod. I was wondering if there is any option/parameter I could specify in my "pod1.json" that adds entries to the container's /etc/hosts when it is created. Something like "--add-host node1.example.com:${node1ip}", which serves that purpose for docker as shown below.
docker run \
--name mongo \
-v /home/core/mongo-files/data:/data/db \
-v /home/core/mongo-files:/opt/keyfile \
--hostname="node1.example.com" \
--add-host node1.example.com:${node1ip} \
--add-host node2.example.com:${node2ip} \
--add-host node3.example.com:${node3ip} \
-p 27017:27017 -d mongo:2.6.5 \
--smallfiles \
--keyFile /opt/keyfile/mongodb-keyfile \
--replSet "rs0"
Any pointers are highly appreciated. Thank you.
Regards, Aj
Kubernetes uses the IP-per-pod model. If I understand correctly, you want to create three mongo pods and write the IP addresses of the three pods into the /etc/hosts file of each container. Modifying the /etc/hosts files directly might not be a good idea for many reasons (e.g., a pod may die and be replaced, getting a new IP address).
For peer discovery in kubernetes, you need to (1) discover the IP addresses of the peer pods, and (2) configure your application to use those addresses.
(1) is achievable using a Headless Service. (2) requires you to write a sidecar container that runs alongside your mongo containers, performs (1), and configures your application. The sidecar container is highly application-specific, and you may want to read some related stackoverflow questions about doing this for mongodb.
As for (1), you can create a Headless Service by using this service.yaml with the clusterIP set to None.
spec:
  clusterIP: None
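For reference, a minimal headless Service might look like the sketch below. The service name mongo, the name=mongo selector, and port 27017 are assumptions based on the mongo setup above; adjust them to match your service.yaml.
# Sketch of a headless Service for the mongo pods (names/ports assumed)
apiVersion: v1
kind: Service
metadata:
  name: mongo            # assumed service name; DNS lookups use this name
spec:
  clusterIP: None        # headless: DNS returns the pod IPs instead of a virtual IP
  selector:
    name: mongo          # assumed label carried by the mongo pods
  ports:
  - port: 27017          # assumed mongod port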
Then, you can create a replication controller that creates the desired number of mongo pods. For example, you can use mongo-controller.yaml, replace the gcePersistentDisk with a local disk volume type (e.g., emptyDir or hostPath), and change the replica count to 3, as sketched below.
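A trimmed-down sketch of what such a controller might look like follows. The container image, label, and emptyDir volume are assumptions; adapt them to your actual mongo-controller.yaml.
# Sketch of a replication controller for 3 mongo pods (details assumed)
apiVersion: v1
kind: ReplicationController
metadata:
  name: mongo-controller
spec:
  replicas: 3                    # three mongo pods
  template:
    metadata:
      labels:
        name: mongo              # must match the headless service selector
    spec:
      containers:
      - name: mongo
        image: mongo:2.6.5
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-storage
          mountPath: /data/db
      volumes:
      - name: mongo-storage
        emptyDir: {}             # local scratch volume instead of gcePersistentDisk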
Each of the mongo pods will automatically be assigned an IP address and is labeled with name=mongo. The headless service uses a label selector to find the pods. When you query DNS with the service name from a node or a container, it returns the list of IP addresses of the mongo pods.
E.g.,
$ host mongo
mongo.default.svc.cluster.local has address 10.245.0.137
mongo.default.svc.cluster.local has address 10.245.3.80
mongo.default.svc.cluster.local has address 10.245.1.128
Your sidecar container can look up these addresses and then perform the mongodb-specific configuration accordingly.
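As a rough illustration, the sidecar could resolve the peer addresses with a DNS lookup like the one below. The mongo service name and the use of getent are assumptions; the replica-set wiring itself is application-specific and left out.
#!/bin/sh
# Sketch: resolve the peer pod IPs via the headless service (service name assumed)
PEER_IPS=$(getent hosts mongo | awk '{print $1}')
for ip in $PEER_IPS; do
  echo "discovered mongo peer: $ip"
  # mongodb-specific configuration (e.g., adding the peer to the replica set) goes here
done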
You can actually do this in the way that you initially were expecting to.
Thanks to this answer for helping me get there - https://stackoverflow.com/a/33888424/370364
You can use the following approach to shove hosts into your container's /etc/hosts file
command: ["/bin/sh","-c"]
args: ["echo '192.168.200.200 node1.example.com' >> /etc/hosts && commandX"]
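In context, those two fields sit in the container spec of the pod manifest, roughly as in this sketch (the pod name, container name, image, and commandX are placeholders, not values from your setup):
# Sketch of pod1.yaml with the /etc/hosts entry injected at startup
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - name: app                    # placeholder container name
    image: your-image:latest     # placeholder image
    command: ["/bin/sh","-c"]
    args: ["echo '192.168.200.200 node1.example.com' >> /etc/hosts && commandX"]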
If you want to set the IP dynamically at pod creation time, you can create the pod from stdin, piping the yaml through sed to perform the substitution before handing it to kubectl.
So the pod yaml would look like the following:
command: ["/bin/sh","-c"]
args: ["echo 'NODE_1_IP node1.example.com' >> /etc/hosts && commandX"]
Then execute it with
cat pod1.yaml | sed -- "s|NODE_1_IP|${node1ip}|" | kubectl create -f -
I realise that this is not the way kubernetes intended this kind of thing to be achieved, but we are using it to start up a test pod locally, and we need to point it at the default network device on the local machine. Creating a service just to satisfy the test pod seems like overkill, so we do this instead.