The dilemma: deploy multiple app-and-database container pairs with an identical Docker image and code, but different config (different clients, each on its own subdomain).
What are some logical ways to approach this? It doesn't seem that Kubernetes has a built-in integration that would support this kind of setup.
Possible Approaches
Ideally, if a StatefulSet could use the downward API to dynamically choose a ConfigMap name based on the pod's ordinal index, that would resolve the issue: you would create your ConfigMaps manually with the index in the name, and each pod would pick up the appropriate one. Something like:
env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
envFrom:
  - configMapRef:
      name: $(POD_NAME)-config
However, that functionality isn't available in Kubernetes.
While dynamic structural replacement isn't possible (plus or minus, see below for the whole story), I believe you were in the right ballpark with your initContainer thought; you can use the ServiceAccount to fetch the ConfigMap from the API in an initContainer, and then have the main container source that environment on startup:
initContainers:
  - command:
      - /bin/bash
      - -ec
      - |
        curl -o /whatever/env.sh \
          -H "Authorization: Bearer $(cat /var/run/secret/etc/etc)" \
          https://${KUBERNETES_SERVICE_HOST}/api/v1/namespaces/${POD_NS}/configmaps/${POD_NAME}-config
    volumeMounts:
      - name: cfg # etc etc
containers:
  - command:
      - /bin/bash
      - -ec
      - "source /whatever/env.sh; exec /usr/bin/my-program"
    volumeMounts:
      - name: cfg # etc etc
volumes:
  - name: cfg
    emptyDir: {}
Here we have the ConfigMap fetching inline with the PodSpec, but if you had a Docker container specialized for fetching ConfigMaps and serializing them into a format that your main containers could consume, I wouldn't expect the actual solution to be nearly that verbose.
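For illustration only, such a fetch-and-serialize init container might look something like the sketch below. The helper image name, the /whatever mount path, and the POD_NS/POD_NAME variables (which you would populate via the downward API) are all assumptions, and it presumes curl and jq are available in the image:

```yaml
initContainers:
  - name: fetch-config
    image: example.com/configmap-fetcher:latest   # hypothetical helper image with curl + jq
    command:
      - /bin/bash
      - -ec
      - |
        # The ServiceAccount token is mounted at the standard path
        TOKEN="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"
        # Fetch this pod's ConfigMap and turn its .data map into shell exports
        curl -sf \
          --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
          -H "Authorization: Bearer ${TOKEN}" \
          "https://${KUBERNETES_SERVICE_HOST}/api/v1/namespaces/${POD_NS}/configmaps/${POD_NAME}-config" \
          | jq -r '.data | to_entries[] | "export \(.key)=\(.value | @sh)"' \
          > /whatever/env.sh
    volumeMounts:
      - name: cfg
        mountPath: /whatever
```

The main container can then `source /whatever/env.sh` exactly as in the example above.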
A separate, and a lot more complicated (but perhaps elegant), approach is a Mutating Admission Webhook. It also looks like Pod Presets were recently added to formalize your very use case, but it isn't clear from the documentation in which version that functionality first appeared, nor whether there are any apiserver flags one must set to take advantage of it.
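For a rough sense of the shape, a PodPreset for one tenant might look something like this sketch (the alpha settings.k8s.io/v1alpha1 API and the PodPreset admission plugin would need to be enabled; the names and labels here are invented):

```yaml
# Hypothetical per-tenant PodPreset (alpha API, availability depends on your cluster)
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: client-a-config
spec:
  selector:
    matchLabels:
      client: client-a           # pods carrying this label get the config injected
  envFrom:
    - configMapRef:
        name: client-a-config    # per-tenant ConfigMap
```

Any pod whose labels match the selector would get the envFrom entries injected at admission time, which keeps the per-tenant wiring out of the Deployment spec itself.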
A templating engine like Helm can help with this. (I believe Kustomize, which ships with current Kubernetes, can do this too, but I'm much more familiar with Helm.) The basic idea is that you have a chart that contains the Kubernetes YAML files but can use a templating language (the Go text/template library) to dynamically fill in content.
Generally you'd have Helm create both the ConfigMap and the matching Deployment; in the setup you describe, you'd install the chart separately (as its own Helm release) for each tenant. Say the Nginx configurations were different enough that you wanted to store them in external files; the core parts of your chart would include
values.yaml (overridable configuration, helm install --set nginxConfig=bar.conf):
# nginxConfig specifies the name of the Nginx configuration
# file to embed.
nginxConfig: foo.conf
templates/configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-{{ .Chart.Name }}-config
data:
  nginx.conf: |-
{{ .Files.Get .Values.nginxConfig | indent 4 }}
templates/deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-{{ .Chart.Name }}-nginx
spec:
  ...
  template:
    ...
    spec:
      volumes:
        - name: nginx-config
          configMap:
            name: {{ .Release.Name }}-{{ .Chart.Name }}-config
The {{ .Release.Name }}-{{ .Chart.Name }} prefix is a typical naming convention that allows installing multiple copies of the chart in the same namespace: the first part is the name you give the helm install command, and the second part is the name of the chart itself. You can also specify the ConfigMap content directly, referring to other .Values... settings from the values.yaml file, use the ConfigMap for environment variables instead of files, and so on.
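As a concrete (made-up) example of the per-tenant workflow, you might keep one small values file per client and install the chart once per client, with something like helm install client-b . -f values-client-b.yaml:

```yaml
# values-client-b.yaml -- per-tenant overrides (hypothetical names)
nginxConfig: client-b.conf
```

Each release then gets its own ConfigMap and Deployment (named client-b-<chart>-config and so on), so every tenant runs the same chart with only its values differing.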