I'm learning Kubernetes and am struggling to write Helm charts that generate config files for an application I'm using to ramp up on the ecosystem. I've hit an interesting issue: I need to generate some configs that are common to all nodes, and some that are unique to each node. Any idea how I would do this?
From my values.yaml file:
# number of nodes / replicas
nodeCount: 5
replicaCount: 3
The common config, shared across all nodes, is called node_map.xml:
<default>
    <node>
        <replica>
            <host>wild-wallaby-0</host>
            <port>8000</port>
        </replica>
        <replica>
            <host>scary-rapids-1</host>
            <port>8000</port>
        </replica>
    </node>
    <node>
        <replica>
            <host>wild-wallaby-1</host>
            <port>8000</port>
        </replica>
        <replica>
            <host>scary-rapids-2</host>
            <port>8000</port>
        </replica>
    </node>
    <node>
        <replica>
            <host>wild-wallaby-2</host>
            <port>8000</port>
        </replica>
        <replica>
            <host>scary-rapids-0</host>
            <port>8000</port>
        </replica>
    </node>
</default>
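For illustration, a file like this can be rendered from the two counts in values.yaml with nested range loops. This is a minimal sketch only; it assumes hostnames follow a single myapp-<node>-<replica> scheme rather than the two pools above, and the template file name templates/node-map-configmap.yaml is arbitrary:

apiVersion: v1
kind: ConfigMap
metadata:
  name: node-map
data:
  node_map.xml: |
    <default>
    {{- range $node := until (int .Values.nodeCount) }}
        <node>
    {{- range $replica := until (int .Values.replicaCount) }}
            <replica>
                <host>myapp-{{ $node }}-{{ $replica }}</host>
                <port>8000</port>
            </replica>
    {{- end }}
        </node>
    {{- end }}
    </default>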
The above is easy enough to generate, and this config is read by every pod (each pod runs a single container), but now each pod also needs an additional config file written, called instance.xml, letting that instance know which node and replica it is. Note that the file doesn't need to be called instance.xml... I have the flexibility to point at and load any named file as long as I know what name to include in the start command.
For example...
Two instances would run on node wild-wallaby-0: node 0 replica 1, and node 0 replica 2. Each instance would need a config file generated as such:
The first instance...
<!-- node 0 replica 1 instance.xml -->
<id>
    <node>0</node>
    <replica>1</replica>
</id>
And the second instance...
<!-- node 0 replica 2 instance.xml -->
<id>
    <node>0</node>
    <replica>2</replica>
</id>
This can of course follow some convention based on the number of nodes and replicas defined in my values file. While it's easy to generate the file that's common across all nodes, it's not clear to me how I can generate a distinct instance.xml for each instance from a Helm chart.
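For example, with replicaCount: 3, one such convention maps a pod ordinal N to node N / 3 and replica (N % 3) + 1:

pod-0 -> node 0, replica 1
pod-1 -> node 0, replica 2
pod-2 -> node 0, replica 3
pod-3 -> node 1, replica 1

and so on.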
Any ideas or pointers?
You can deploy this as a StatefulSet, and use an initContainers: block to create the config file before the main container of the pod starts up.
The Kubernetes documentation has a fairly detailed example of this, oriented around a replicated MySQL cluster, but with the same essential setup: there is a master node and some number of replicas, each pod needs to know its own ID, and the config files are different on the master and replicas.
It looks like the important detail you can work from is that the pod's hostname (as reported by the hostname shell command) is statefulsetname-123, where the numeric suffix is sequential and the individual pods are guaranteed to be started in order. The same detail is in a `statefulset.kubernetes.io/pod-name` label, which you can retrieve via the downward API.
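If you'd rather not parse the hostname in a shell script, that label can be injected into the container as an environment variable via the downward API. A sketch of the relevant container-spec fragment; POD_NAME is an arbitrary name:

env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.labels['statefulset.kubernetes.io/pod-name']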
I might create a ConfigMap like:
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-templates
data:
  config.xml.tmpl: |-
    <id>
        <node>NODE</node>
        <replica>REPLICA</replica>
    </id>
And then my StatefulSet spec could look like, in part:
apiVersion: apps/v1
kind: StatefulSet
...
spec:
  ...
  template:
    spec:
      volumes:
        - name: config
          emptyDir: {}
        - name: templates
          configMap:
            name: config-templates
      initContainers:
        - name: configfiles
          image: ubuntu:16.04
          command:
            - sh
            - -c
            - |
              POD_NUMBER=$(hostname | sed 's/.*-//')
              # 3 here is the replicaCount from values.yaml; in a Helm chart
              # it could be templated in as {{ .Values.replicaCount }}
              NODE=$(( POD_NUMBER / 3 ))
              REPLICA=$(( POD_NUMBER % 3 + 1 ))  # your replicas are numbered from 1
              sed -e "s/NODE/$NODE/g" -e "s/REPLICA/$REPLICA/g" \
                /templates/config.xml.tmpl > /config/config.xml
          volumeMounts:
            - name: templates
              mountPath: /templates
            - name: config
              mountPath: /config
      containers:
        - name: ...
          ...
          volumeMounts:
            - name: config
              mountPath: /opt/myapp/etc/config
In that setup you ask Kubernetes to create an empty temporary volume (config) that's shared between the containers, and you make the config map available as a volume too. The init container extracts the sequential pod ID, splits it into the two numbers, and writes the actual config file into the temporary volume. Then the main container mounts the shared config directory into wherever it expects its config files to be.
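Since this is all going into a Helm chart anyway, the hard-coded replica count in the init container can come from values.yaml instead. A sketch of just the relevant fragment, injecting it as an environment variable (REPLICAS_PER_NODE is an arbitrary name):

      initContainers:
        - name: configfiles
          image: ubuntu:16.04
          env:
            - name: REPLICAS_PER_NODE
              value: {{ .Values.replicaCount | quote }}
          command:
            - sh
            - -c
            - |
              POD_NUMBER=$(hostname | sed 's/.*-//')
              NODE=$(( POD_NUMBER / REPLICAS_PER_NODE ))
              REPLICA=$(( POD_NUMBER % REPLICAS_PER_NODE + 1 ))
              sed -e "s/NODE/$NODE/g" -e "s/REPLICA/$REPLICA/g" \
                /templates/config.xml.tmpl > /config/config.xml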