Add host mapping to /etc/hosts in Kubernetes

12/24/2016

I have an issue with DNS mapping in Kubernetes. Here are the details:

We have some servers that are reachable from the internet. The global DNS resolves their domain names to public internet IPs. For security reasons, some services cannot be accessed through those public IPs.

From inside the company network, we reach these servers through their private IPs, so we manually add hostname-to-private-IP mappings to /etc/hosts inside the Docker containers managed by Kubernetes.

I know that Docker supports the --add-host flag to change /etc/hosts when executing "docker run". Is this supported in recent Kubernetes versions, such as 1.4 or 1.5?

Alternatively, we could wrap the startup script for the Docker container (see the sketch after this list) to:

  • append the mappings to /etc/hosts first
  • start our application
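
Something like this is what I have in mind (the image name, IPs, hostnames, and app path below are just placeholders):

spec:
  containers:
    - name: myapp
      image: mycompany/myapp:latest   # placeholder image
      command: ["/bin/sh", "-c"]
      args:
        - |
          # append the private mappings once at startup, then hand over to the app
          echo '10.0.0.10 server1.company.com' >> /etc/hosts
          echo '10.0.0.11 server2.company.com' >> /etc/hosts
          exec /app/start.sh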

I only want to change the file once, when each container first starts. Is there an easy way to do this, given that the mappings may differ between the development and production environments? Or does Kubernetes itself provide a command or mechanism for this?

I appreciate your help.

-- qingdaojunzuo
dns
docker
kubernetes

4 Answers

1/2/2017

To add a hostname to the hosts file in a "semi" dynamic fashion, one can use the postStart hook:

spec:
  containers:
  - name: somecontainer
    image: someimage
    lifecycle:
      postStart:
        exec:
          command:
            # exec does not go through a shell, so invoke one explicitly;
            # append (>>) so the existing entries in /etc/hosts are kept
            - "/bin/sh"
            - "-c"
            - "echo 'someip somedomain' >> /etc/hosts"

A better way, however, is to use an abstract name that represents the service across stages. For example, instead of using database01.production.company.com, use database01 and set up each environment so that the name resolves to the production host in the production setting and the staging host in the staging setting.
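
One way to wire that up is a Service of type ExternalName, which simply publishes a DNS CNAME inside the cluster (a sketch; the Service name and target hostname are hypothetical, and the target must be resolvable from the cluster):

apiVersion: v1
kind: Service
metadata:
  name: database01
spec:
  type: ExternalName
  # point this at a different hostname per environment,
  # e.g. the staging database in the staging cluster
  externalName: database01.production.company.com

Application code then always talks to database01, and only this Service object differs between environments.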

Lastly, it is also possible to edit the kube-dns settings so that the Kubernetes DNS can resolve external DNS names. Then you just use whatever name you need in the code, and it "automagically" works. See for example https://github.com/kubernetes/kubernetes/issues/23474 for how to set this up (it varies a bit from version to version of skydns; some older versions really do not work with this, so upgrade to at least Kubernetes 1.3 to make it work properly).

-- Norbert van Nobelen
Source: StackOverflow

4/6/2017

Create a file on the host system (or a Secret) with all the extra host entries you need (e.g. /tmp/extra-hosts).

Then, in the Kubernetes manifest:

spec:
  containers:
    - name: haproxy
      image: haproxy
      lifecycle:
        postStart:
          exec:
            command: ["/bin/sh", "-c", "cat /hosts >> /etc/hosts"]

      volumeMounts:
        - name: haproxy-hosts
          mountPath: /hosts

  # volumes belongs at the pod spec level, not under the container
  volumes:
    - name: haproxy-hosts
      hostPath:
        path: /tmp/extra-hosts
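
If the extra hosts come from a Secret rather than a file on the node, the volume part might instead look like this (a sketch; the Secret name extra-hosts and its hosts key are hypothetical):

      volumeMounts:
        - name: haproxy-hosts
          mountPath: /hosts
          subPath: hosts            # mount only the "hosts" key as the file /hosts
  volumes:
    - name: haproxy-hosts
      secret:
        secretName: extra-hosts     # Secret containing a "hosts" key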
-- warden
Source: StackOverflow

11/27/2018

From kubernetes.io/docs: "In addition to the default boilerplate, we can add additional entries to the hosts file. For example, to resolve foo.local and bar.local to 127.0.0.1 and foo.remote and bar.remote to 10.1.2.3, add HostAliases to the Pod under .spec.hostAliases:"
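
The documented example looks roughly like this:

apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  hostAliases:
    - ip: "127.0.0.1"
      hostnames:
        - "foo.local"
        - "bar.local"
    - ip: "10.1.2.3"
      hostnames:
        - "foo.remote"
        - "bar.remote"
  containers:
    - name: cat-hosts
      image: busybox
      command: ["cat", "/etc/hosts"]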

You can also "Configure stub-domain and upstream DNS servers".
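
With kube-dns (Kubernetes 1.6+) that is done through a ConfigMap in kube-system; a minimal sketch, assuming a hypothetical internal domain corp.internal served by a DNS server at 10.150.0.1:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"corp.internal": ["10.150.0.1"]}
  upstreamNameservers: |
    ["10.150.0.1", "8.8.8.8"]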

-- Igor Storozhuk
Source: StackOverflow

12/28/2016

You should be able to add a Service without selectors and manually create the Endpoints object, as described in the docs here: http://kubernetes.io/docs/user-guide/services/#services-without-selectors

Services generally abstract access to Kubernetes Pods, but they can also abstract other kinds of backends.

For example:

You want to have an external database cluster in production, but in test you use your own databases.

You want to point your service to a service in another Namespace or on another cluster.

You are migrating your workload to Kubernetes and some of your backends run outside of Kubernetes.
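
A minimal sketch of that pattern for this question (the Service name and the private IP 10.0.0.42 are hypothetical; the Endpoints object must have the same name as the Service):

apiVersion: v1
kind: Service
metadata:
  name: internal-server
spec:
  ports:
    - protocol: TCP
      port: 443
      targetPort: 443
---
apiVersion: v1
kind: Endpoints
metadata:
  name: internal-server      # must match the Service name
subsets:
  - addresses:
      - ip: 10.0.0.42        # the server's private IP
    ports:
      - port: 443

Pods can then reach the backend at internal-server (or internal-server.<namespace>.svc.cluster.local) without touching /etc/hosts at all.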

-- manojlds
Source: StackOverflow