How to use a unique value in a Kubernetes ConfigMap

7/17/2017

Problem

I have a monitoring application that I want to deploy inside of a DaemonSet. In the application's configuration, a unique user agent is specified to separate the node from other nodes. I created a ConfigMap for the application, but this only works for synchronizing the other settings in the environment.

Ideal solution?

I want to specify a unique value, like the node's hostname or another locally-inferred value, to use as the user agent string. Is there a way I can call this information from the system and Kubernetes will populate the desired key with a value (like the hostname)?

Does this make sense, or is there a better way to do it? I was looking through the documentation, but I could not find an answer anywhere for this specific question.

As an example, here's the string in the app config that I have now, versus what I want to use.

user_agent = "app-k8s-test"

But I'd prefer…

user_agent = $HOSTNAME

Is something like this possible?

-- Justin
configuration
kubernetes

1 Answer

7/17/2017

You can use an init container to preprocess a config template from a config map. The preprocessing step can inject local variables into the config files. The expanded config is written to an emptyDir shared between the init container and the main application container. Here is an example of how to do it.

First, make a config map with a placeholder for whatever fields you want to expand. I used sed and an ad-hoc token to replace. You can also get fancy and use jinja2 or whatever you like; just put whatever preprocessor you want into the init container image. You can use any file format for the config file(s). I used TOML here to show it doesn't have to be YAML. I called it ".tpl" because it is not ready to use: it contains a string, _HOSTNAME_, that needs to be expanded.

$ cat config.toml.tpl 
[blah]
blah=_HOSTNAME_
otherkey=othervalue
$ kubectl create configmap cm --from-file=config.toml.tpl
configmap "cm" created
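
The substitution itself is plain sed, so you can sanity-check the expansion locally before wiring it into a pod. Here "node-1" is just a stand-in for the value the init container will inject:

```shell
# Local dry run of the expansion step; "node-1" stands in for the
# real node name that the init container will substitute.
printf '[blah]\nblah=_HOSTNAME_\notherkey=othervalue\n' > config.toml.tpl
sed "s/_HOSTNAME_/node-1/" config.toml.tpl
```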

Now write a pod with an init container that mounts the config map in a volume, expands it, and writes the result to another volume shared with the main container:

$ cat personalized-pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod-5
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox
    command: ['sh', '-c', 'echo The app is running and my config-map is && cat /etc/config/config.toml && sleep 3600']
    volumeMounts:
      - name: config-volume
        mountPath: /etc/config
  initContainers:
  - name: expander
    image: busybox
    command: ['sh', '-c', 'cat /etc/config-templates/config.toml.tpl | sed "s/_HOSTNAME_/$MY_NODE_NAME/" > /etc/config/config.toml']
    volumeMounts:
      - name: config-tpl-volume
        mountPath: /etc/config-templates
      - name: config-volume
        mountPath: /etc/config
    env:
      - name: MY_NODE_NAME
        valueFrom:
          fieldRef:
            fieldPath: spec.nodeName
  volumes:
    - name: config-tpl-volume
      configMap:
        name: cm
    - name: config-volume
      emptyDir: {}
$ kubectl create -f personalized-pod.yaml
pod "myapp-pod-5" created
$ sleep 10
$ kubectl logs myapp-pod-5
The app is running and my config-map is
[blah]
blah=gke-k0-default-pool-93916cec-p1p6
otherkey=othervalue

I made this a bare pod for an example. You can embed this type of pod in a DaemonSet's pod template.

Here, the Downward API is used to set the MY_NODE_NAME environment variable, since the node name is not otherwise readily available from within a container.

Note that spec.nodeName cannot be exposed through a downward API volume (a file); it is only available as an environment variable.

If you just need the hostname in an environment variable, you can skip the init container entirely.
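
For example, a sketch of the env-var-only variant (the container name and command here are illustrative):

```yaml
  containers:
  - name: myapp-container
    image: busybox
    command: ['sh', '-c', 'echo running on $MY_NODE_NAME && sleep 3600']
    env:
      - name: MY_NODE_NAME
        valueFrom:
          fieldRef:
            fieldPath: spec.nodeName
```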

Since the init container only runs once, you should not update the ConfigMap and expect it to be re-expanded. If you need updates, you can do one of two things:

  • Instead of an init container, run a sidecar that watches the config map volume and re-expands when it changes (or just does it periodically). This requires that the main container also know how to watch for config file updates.

  • Make a new config map each time the config template changes, and edit the DaemonSet's volume to point at the new config map; the changed pod template then rolls out the new config via a rolling update.
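
A sketch of the second option. The DaemonSet name (myapp-ds), the position of the config volume in the volumes list, and an updateStrategy of RollingUpdate are all assumptions here; adjust them to your actual spec:

```shell
# Derive a content-based suffix so each template revision gets a distinct
# config map name (8 hex chars of the file's md5 is enough for this sketch).
suffix=$(md5sum config.toml.tpl | cut -c1-8)

# Create the new config map under the versioned name.
kubectl create configmap "cm-${suffix}" --from-file=config.toml.tpl

# Repoint the DaemonSet's first volume at the new map; editing the pod
# template triggers a rolling update when updateStrategy is RollingUpdate.
kubectl patch daemonset myapp-ds --type=json -p \
  "[{\"op\": \"replace\", \"path\": \"/spec/template/spec/volumes/0/configMap/name\", \"value\": \"cm-${suffix}\"}]"
```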

-- Eric Tune
Source: StackOverflow