I am new to Kubernetes and Helm. I am coming from a plain Docker/docker-compose world.
I have some complex services running multiple Docker containers that require a lot of configuration parameters and logic. The docker-ized services require a lot of different configuration files, keys and command line arguments on start up. I also require some configuration logic at runtime (some configuration elements have to be generated) that can only execute inside of the container.
What I ended up doing is writing a shell script (used as the CMD) that expects environment variables, defines default values, and translates those environment variables into command-line arguments and configuration files.
Here is a simplified (non-working) example of how I built it, without having Kubernetes and Helm in mind.
Dockerfile
...
CMD [ "./bootstrap.sh" ]
bootstrap.sh (packaged in image)
# Define default values, if no environment variables are provided
# on "docker run"
export CONFIG_VALUE_A=${CONFIG_VALUE_A:="a"}
export CONFIG_VALUE_B=${CONFIG_VALUE_B:="b"}
export CONFIG_VALUE_C=${CONFIG_VALUE_C:="c"}
# write CONFIG_VALUE_A to file
echo ${CONFIG_VALUE_A} > ./some-config-file-a.cfg
ARGS="--config-file-a ./some-config-file-a.cfg --config-value-b ${CONFIG_VALUE_B} --config-value-c ${CONFIG_VALUE_C}"
exec ./my-app ${ARGS}
This has the advantage that, through the environment variables, I have a standard configuration interface and don't need to deal with volumes for configuration files.
Now I am stepping into Kubernetes and Helm. Helm has its own parameter concept using values.yaml
. To combine it with what I already have above, I would just map values from values.yaml
to those environment variables.
deployment.yaml
...
spec:
  ...
  template:
    ...
    spec:
      containers:
        - name: my-app
          ...
          env:
            - name: "CONFIG_VALUE_A"
              value: {{ .Values.config.value_a }}
            - name: "CONFIG_VALUE_B"
              value: {{ .Values.config.value_b }}
            - name: "CONFIG_VALUE_C"
              value: {{ .Values.config.value_c }}
values.yaml
config:
  value_a: a
  value_b: b
  value_c: c
However, having three configuration layers where I map values back and forth (Helm templates => container environment variables => config files/CLI arguments) violates the DRY principle and adds a lot of potential for typos/errors that will be hard to find later.
Ideally, I would define each parameter only once in deployment.yaml
and its default only once in Helm's values.yaml
.
How do you solve complex configuration management with Kubernetes, Helm and Docker?
In the Kubernetes world, configuration is usually managed with ConfigMaps, which are the main storage for configuration data.
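For illustration, a bare ConfigMap (independent of Helm) is just key/value data stored in the cluster; the name and keys below are made up for this example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-config
data:
  CONFIG_VALUE_A: "a"
  CONFIG_VALUE_B: "b"
  CONFIG_VALUE_C: "c"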
In your situation, I think you can do it like this (at least, that is how I would do it):
1. Create a ConfigMap template that renders the .cfg
file for the application. Helm uses the Go template format, so it is easy to create any structure there, with iterations etc.
2. Put all the values for that template into the values.yaml
file.
3. Reference the ConfigMap in deployment.yaml
: add a mount of the .cfg
file to a path in the container and point the application to it (a sketch of all three steps follows below).
So, that's it. We have a single place for all configuration values (values.yaml), in a simple key: value
YAML format.
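A minimal sketch of how the three steps could fit together, assuming a single rendered file called my-app.cfg, a mount path of /etc/my-app and an ini-style file format; these names and the file format are placeholders, not something taken from the question.
templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-my-app-config
data:
  # The whole config file is rendered from values.yaml;
  # the ini-style keys are only an assumed example format.
  my-app.cfg: |
    value-a = {{ .Values.config.value_a }}
    value-b = {{ .Values.config.value_b }}
    value-c = {{ .Values.config.value_c }}
templates/deployment.yaml (only the parts relevant for the mount)
spec:
  template:
    spec:
      containers:
        - name: my-app
          # The application is pointed at /etc/my-app/my-app.cfg
          # instead of generating the file in bootstrap.sh.
          volumeMounts:
            - name: config
              mountPath: /etc/my-app
      volumes:
        - name: config
          configMap:
            name: {{ .Release.Name }}-my-app-config
values.yaml stays the single place where the values (and their defaults) live:
config:
  value_a: a
  value_b: b
  value_c: c
With this layout the values exist only in values.yaml; the ConfigMap template and the volume mount merely reference them, so the environment-variable mapping layer from the question disappears.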