Kubernetes multiple identical app and database deployments with different config

1/23/2020

The dilemma: Deploy multiple app and database container pairs with identical docker image and code, but different config (different clients using subdomains).

What are some logical ways to approach this? It doesn't seem like Kubernetes has a built-in feature that supports this kind of setup.

Possible Approaches

  1. Use a single app service for all app deployments and a single database service for all database deployments. Run a single Nginx static-file service and deployment that serves static files from a volume shared between the app deployments (they all use the same set of static files). Whenever a new deployment is needed, have a bash script copy the app and database .yaml deployment files, sed-replace the client's name, point them at the correct ConfigMap (which is written manually, of course), and kubectl apply them (see the sketch after this list). A main Nginx ingress handles incoming traffic and routes it to the correct pod through the app deployment's service.
  2. Similar to the above, except use a StatefulSet instead of separate deployments, with an init container that copies the different configs to mounted volumes. (The drawbacks are that you cannot delete a pod from the middle of a StatefulSet, which would be the case if you no longer need a specific container for a client, and that this seems like a very hacky approach.)
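
For approach 1, the per-client templating could be a small shell script along these lines (a rough sketch only; the template file names, the __CLIENT__ placeholder, and the "<client>-config" naming convention are hypothetical):

#!/bin/bash
# Stamp out per-client manifests from templates and apply them.
# Assumes app-deployment.yaml.tpl and db-deployment.yaml.tpl contain a
# __CLIENT__ placeholder, and that a ConfigMap named "<client>-config"
# has already been written by hand.
set -euo pipefail
CLIENT="$1"   # e.g. ./new-client.sh acme

for template in app-deployment.yaml.tpl db-deployment.yaml.tpl; do
  sed "s/__CLIENT__/${CLIENT}/g" "$template" | kubectl apply -f -
done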

Ideally, if a StatefulSet could use the downward API to dynamically choose a ConfigMap name based on the pod's index, that would resolve the issue (you would manually create your ConfigMaps with the index in the name, and each pod would pick up the appropriate one). Something like:

env:
- name: POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name

envFrom:
- configMapRef:
    name: $(POD_NAME)-config

However, that kind of variable substitution isn't available in Kubernetes.

-- Mick
configmap
deployment
docker
kubernetes
statefulset

2 Answers

1/23/2020

While dynamic structural replacement isn't possible (plus or minus; see below for the whole story), I believe you were in the right ballpark with your initContainer: thought: you can use the ServiceAccount to fetch the ConfigMap from the API in an initContainer: and then have the main container: source that environment on startup:

initContainers:
- command:
  - /bin/bash
  - -ec
  - |
    # POD_NS and POD_NAME are assumed to be injected via the downward API,
    # and the Bearer token path is a placeholder for the ServiceAccount token
    curl -o /whatever/env.sh \
      -H "Authorization: Bearer $(cat /var/run/secret/etc/etc)" \
      https://${KUBERNETES_SERVICE_HOST}/api/v1/namespaces/${POD_NS}/configmaps/${POD_NAME}-config
  volumeMounts:
  - name: cfg  # etc etc
containers:
- command:
  - /bin/bash
  - -ec
  - "source /whatever/env.sh; exec /usr/bin/my-program"
  volumeMounts:
  - name: cfg  # etc etc
volumes:
- name: cfg
  emptyDir: {}

Here we have the ConfigMap fetching inline in the PodSpec, but if you had a docker container specialized for fetching ConfigMaps and serializing them into a format that your main containers could consume, I wouldn't expect the actual solution to be nearly that verbose.
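
Note that the API returns the ConfigMap as JSON, so the init container still has to turn the data: map into something the main container can source. A rough sketch of that fetch-and-serialize step (assuming curl and jq are available in the init image, POD_NS and POD_NAME are injected via the downward API, and the output path is just a placeholder):

#!/bin/bash
# Fetch the ConfigMap "<pod>-config" from the API server and serialize its
# data: map into a sourceable env file. Names and paths are illustrative.
set -euo pipefail

TOKEN="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt

curl -sf --cacert "$CACERT" \
  -H "Authorization: Bearer ${TOKEN}" \
  "https://${KUBERNETES_SERVICE_HOST}/api/v1/namespaces/${POD_NS}/configmaps/${POD_NAME}-config" \
  | jq -r '.data | to_entries[] | "export \(.key)=\(.value | @sh)"' \
  > /whatever/env.sh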


A separate, and a lot more complicated (but perhaps elegant), approach is a Mutating Admission Webhook; it looks like they have even recently formalized your very use case with Pod Presets, though it wasn't super clear from the documentation in which version that functionality first appeared, nor whether there are any apiserver flags one must twiddle to take advantage of it.
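
For reference, a PodPreset that injects a per-client ConfigMap would look roughly like the following (a sketch only: PodPreset is an alpha settings.k8s.io/v1alpha1 resource that has to be enabled on the apiserver, and the label and ConfigMap names here are made up):

apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: client-a-config
spec:
  # Matches any pod carrying this (hypothetical) label
  selector:
    matchLabels:
      client: client-a
  # Injects the client's ConfigMap as environment variables
  envFrom:
  - configMapRef:
      name: client-a-config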

-- mdaniel
Source: StackOverflow

1/23/2020

A templating engine like Helm can help with this. (I believe Kustomize, which ships with current Kubernetes, can do this too, but I'm much more familiar with Helm.) The basic idea is that you have a chart that contains the Kubernetes YAML files but can use a templating language (the Go text/template library) to dynamically fill in content.

In this setup you'd generally have Helm create both the ConfigMap and the matching Deployment; in the setup you describe you'd install it separately (as a Helm release) for each tenant. Say the Nginx configurations were different enough that you wanted to store them in external files; the core parts of your chart would include:

values.yaml (overridable configuration, helm install --set nginxConfig=bar.conf):

# nginxConfig specifies the name of the Nginx configuration
# file to embed.
nginxConfig: foo.conf

templates/configmap.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-{{ .Chart.Name }}-config
data:
  nginx.conf: |-
{{ .Files.Get .Values.nginxConfig | indent 4 }}

templates/deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-{{ .Chart.Name }}-nginx
spec:
  ...
    volumes:
      - name: nginx-config
        configMap:
          name: {{ .Release.Name }}-{{ .Chart.Name }}-config

The {{ .Release.Name }}-{{ .Chart.Name }} is a typical convention that allows installing multiple copies of the chart in the same namespace; the first part is a name you give the helm install command and the second part is the name of the chart itself. You can also directly specify the ConfigMap content, referring to other .Values... settings from the values.yaml file, use the ConfigMap as environment variables instead of files, and so on.
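
For example, installing the chart once per tenant might look like this (Helm 3 syntax; the release names, chart path, and .conf file names are made up, and the .conf files have to be packaged inside the chart for .Files.Get to find them):

# One release per tenant; each gets its own ConfigMap/Deployment pair
helm install client-a ./mychart --set nginxConfig=client-a.conf
helm install client-b ./mychart --set nginxConfig=client-b.conf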

-- David Maze
Source: StackOverflow