Use hosts file for docker-in-docker container in kubernetes pod

10/9/2018

What I want:

Dynamically add host entries to Kubernetes pods created by the Jenkins master, and have the Docker daemon mounted into the pod use those entries.

I am using Jenkins to create dynamic slaves for Docker builds, and I created docker-in-docker slave containers for docker build and docker push. The docker-in-docker setup works by mounting the VM's docker.sock into the container:

volumeMounts:
- name: hostpathvolume
  mountPath: '/var/run/docker.sock'
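
For reference, that volumeMounts entry is backed by a hostPath volume roughly like the following (the volume name is taken from the snippet above; the rest is the standard hostPath pattern):

volumes:
- name: hostpathvolume
  hostPath:
    path: '/var/run/docker.sock'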

I am using Kubernetes hostAliases to append entries for some private Docker registries to the pod's hosts file:

hostAliases:
- ip: 9.110.73.11
  hostnames:
  - devopsprod.icp
- ip: 9.119.42.60
  hostnames:
  - devops.icp
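
Inside the pod, hostAliases appends entries roughly like these to the end of /etc/hosts:

9.110.73.11    devopsprod.icp
9.119.42.60    devops.icp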

I have confirmed that the created pods have these entries in /etc/hosts, but when I run:

docker login -u xxx -p xxx devops.icp:8500

I get a DNS error:

Error response from daemon: Get https://devops.icp:8500/v2/: dial tcp: lookup devops.icp on 0.0.0.0:00: no such host

This means the Docker I run in the container is not using the pod's /etc/hosts to look up the IP address. Is there a way to fix this? I don't want to add the hosts manually to the VM's /etc/hosts file.
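
A quick way to see the split, assuming the slave image has getent available: the pod itself can resolve the alias, but the lookup that fails happens in the daemon on the node, which the CLI only reaches over the mounted socket.

# Inside the slave container: resolves via the pod's /etc/hosts
getent hosts devops.icp

# The docker CLI just sends an API call over the mounted socket;
# the registry hostname is resolved by the daemon on the node, where it fails
docker login -u xxx -p xxx devops.icp:8500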

-- Eden Li
docker
jenkins
kubernetes

1 Answer

10/9/2018

You have mounted docker.sock into the Pod, but the Docker daemon still uses the configuration from the Node, not from the Pod. There is no other option: you need to add the aliases to /etc/hosts on each Node to make it work.
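
A minimal sketch of that fix, using the IPs and hostnames from the question, run on every node where these slave pods can be scheduled:

# Append the registry aliases to the node's own hosts file
echo '9.110.73.11 devopsprod.icp' | sudo tee -a /etc/hosts
echo '9.119.42.60 devops.icp' | sudo tee -a /etc/hosts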

-- Artem Golenyaev
Source: StackOverflow