Replacing a property in the data section of a ConfigMap at runtime with environment variables in Kubernetes

6/28/2021

My current setup involves Helm charts and Kubernetes.

I have a requirement where I have to replace a property in the configMap.yaml file with an environment variable declared in the deployment.yaml file.

Here is a section of my configMap.yaml which declares a property file:

data:
  rest.properties: |
    map.dirs=/data/maps
    catalog.dir=/data/catalog
    work.dir=/data/tmp
    map.file.extension={{ .Values.rest.mapFileExtension }}
    unload.time=1
    max.flow.threads=10
    max.map.threads=50
    trace.level=ERROR
    run.mode={{ .Values.runMode }}
    {{- if eq .Values.cache.redis.location "external" }}
    redis.host={{ .Values.cache.redis.host }}
    {{- else if eq .Values.cache.redis.location "internal" }}
    redis.host=localhost
    {{- end }}
    redis.port={{ .Values.cache.redis.port }}
    redis.stem={{ .Values.cache.redis.stem }}
    redis.database={{ .Values.cache.redis.database }}
    redis.logfile=redis.log
    redis.loglevel=notice
    exec.log.dir=/data/logs
    exec.log.file.count=5
    exec.log.file.size=100
    exec.log.level=all
    synchronous.timeout=300
    {{- if .Values.global.linkIntegration.enabled }}
    authentication.enabled=false
    authentication.server=https://{{ .Release.Name }}-product-design-server:443
    config.dir=/opt/runtime/config
    {{- end }}
    {{- if .Values.keycloak.enabled }}
    authentication.keycloak.enabled={{ .Values.keycloak.enabled }}
    authentication.keycloak.serverUrl={{ .Values.keycloak.serverUrl }}
    authentication.keycloak.realmId={{ .Values.keycloak.realmId }}
    authentication.keycloak.clientId={{ .Values.keycloak.clientId }}
    authentication.keycloak.clientSecret=${HIP_KEYCLOAK_CLIENT_SECRET}
    {{- end }}

I need to replace ${HIP_KEYCLOAK_CLIENT_SECRET}, which is defined in the deployment.yaml file as shown below:

containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.global.hImageRegistry }}/{{ include "image.runtime.repo" . }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
            {{- if .Values.keycloak.enabled }}
            - name: HIP_KEYCLOAK_CLIENT_SECRET
              valueFrom:
                secretKeyRef:
                  name: {{ .Values.keycloak.secret }}
                  key: clientSecret
            {{ end }}

The idea is to have the property file in the deployed pod under /opt/runtime/rest.properties.

Here is my complete deployment.yaml file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "lnk-service.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "lnk-service.name" . }}
    helm.sh/chart: {{ include "lnk-service.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "lnk-service.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "lnk-service.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
    spec:
      {{- if .Values.global.hImagePullSecret }}
      imagePullSecrets:
        - name: {{ .Values.global.hImagePullSecret }}
      {{- end }}
      securityContext:
        runAsUser: 998
        runAsGroup: 997
        fsGroup: 997
      volumes:
        - name: configuration
          configMap: 
            name: {{ include "lnk-service.fullname" . }}-server-config
        - name: core-configuration
          configMap: 
            name: {{ include "lnk-service.fullname" . }}-server-core-config
        - name: hch-configuration
          configMap:
            name: {{ include "lnk-service.fullname" . }}-hch-config
        - name: data
          {{- if .Values.persistence.enabled }}
          persistentVolumeClaim:
            {{- if .Values.global.linkIntegration.enabled }}
            claimName: lnk-shared-px
            {{- else }}
            claimName: {{ include "pvc.name" . }}
            {{- end }}
          {{- else }}
          emptyDir: {}
          {{- end }}
        - name: hch-data
          {{- if .Values.global.linkIntegration.enabled }}
          persistentVolumeClaim: 
            claimName: {{ include "unicapvc.fullname" . }}
          {{- else }}
          emptyDir: {}
          {{- end }}
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.global.hImageRegistry }}/{{ include "image.runtime.repo" . }}:{{ .Values.image.tag }}"
          #command: ['/bin/sh']
          #args: ['-c', 'echo $HIP_KEYCLOAK_CLIENT_SECRET']
          #command: [ "/bin/sh", "-c", "export" ]
          #command: [ "/bin/sh", "-ce", "export" ]
          command: [ "/bin/sh", "-c", "export --;trap : TERM INT; sleep infinity & wait" ]
          #command: ['sh', '-c', 'sed -i "s/REPLACEME/$HIP_KEYCLOAK_CLIENT_SECRET/g" /opt/runtime/rest.properties']
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
            - name: "HIP_CLOUD_LICENSE_SERVER_URL"
              value: {{ include "license.url" . | quote }}
            - name: "HIP_CLOUD_LICENSE_SERVER_ID"
              value: {{ include "license.id" . | quote }}
            {{- if .Values.keycloak.enabled }}
            - name: HIP_KEYCLOAK_CLIENT_SECRET
              valueFrom:
                secretKeyRef:
                  name: {{ .Values.keycloak.secret }}
                  key: clientSecret
            {{ end }}
          envFrom:
            - configMapRef:
                name: {{ include "lnk-service.fullname" . }}-server-env
            {{- if .Values.rest.extraEnvConfigMap }}
            - configMapRef:
                name: {{ .Values.rest.extraEnvConfigMap }}
            {{- end }}
            {{- if .Values.rest.extraEnvSecret }}
            - secretRef:
                name: {{ .Values.rest.extraEnvSecret }}
            {{- end }}
          ports:
            - name: http
              containerPort: {{ .Values.image.port }}
              protocol: TCP
            - name: https
              containerPort: 8443
              protocol: TCP
          volumeMounts:
            - name: configuration
              mountPath: /opt/runtime/rest.properties
              subPath: rest.properties
          resources:
{{ toYaml .Values.resources | indent 12 }}
    {{- with .Values.nodeSelector }}
      nodeSelector:
{{ toYaml . | indent 8 }}
    {{- end }}
    {{- with .Values.affinity }}
      affinity:
{{ toYaml . | indent 8 }}
    {{- end }}
    {{- with .Values.tolerations }}
      tolerations:
{{ toYaml . | indent 8 }}
    {{- end }}

I have tried init containers and replacing the string in rest.properties, which works; however, it involves creating volumes with emptyDir.
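For clarity, the string replacement that the init-container approach performs can be reproduced locally; a minimal sketch (the secret value and the temp-file path are illustrative, and in the real pod the variable comes from the Deployment's secretKeyRef):

```shell
# Minimal local reproduction of the init-container substitution step.
# The secret value below is a placeholder, not a real value.
HIP_KEYCLOAK_CLIENT_SECRET="my-secret-value"
f=$(mktemp)
printf 'authentication.keycloak.clientSecret=${HIP_KEYCLOAK_CLIENT_SECRET}\n' > "$f"
# Replace the literal ${HIP_KEYCLOAK_CLIENT_SECRET} placeholder with the value.
sed -i "s|\${HIP_KEYCLOAK_CLIENT_SECRET}|$HIP_KEYCLOAK_CLIENT_SECRET|g" "$f"
cat "$f"
# prints: authentication.keycloak.clientSecret=my-secret-value
```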

Can someone kindly tell me if there is a simpler way to do this?

-- Girish Kumar
kubernetes
kubernetes-helm

3 Answers

7/3/2021
  1. Change your ConfigMap to create the file rest.properties.template.

  2. Use an init container that runs cat rest.properties.template | envsubst > rest.properties. The init container can use any Docker image that includes envsubst.
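A hedged sketch of what those two steps could look like in the Deployment template; the image, the volume names, and the template key are assumptions, not part of the original chart:

```yaml
# Sketch only: image and volume/key names below are assumptions.
initContainers:
  - name: render-config
    image: alpine:3.18        # any image that ships envsubst (gettext) will do
    command:
      - /bin/sh
      - -c
      - apk add --no-cache gettext && envsubst < /template/rest.properties.template > /rendered/rest.properties
    env:
      - name: HIP_KEYCLOAK_CLIENT_SECRET
        valueFrom:
          secretKeyRef:
            name: {{ .Values.keycloak.secret }}
            key: clientSecret
    volumeMounts:
      - name: configuration       # ConfigMap containing rest.properties.template
        mountPath: /template
      - name: rendered-config     # emptyDir shared with the main container
        mountPath: /rendered
```

The main container would then mount rendered-config at /opt/runtime instead of mounting the ConfigMap directly.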

-- rlandster
Source: StackOverflow

7/10/2021

Thanks for the responses.

Solution 1: use init containers. Solution 2: change the code to read the value from environment variables.

We chose Solution 2.
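Solution 2 amounts to the application reading the variable directly from its environment (where the Deployment's secretKeyRef injects it) instead of from rest.properties; a minimal shell sketch of the idea, with an illustrative fallback value for local testing:

```shell
# Sketch of Solution 2: read the secret straight from the environment.
# The fallback default is only for running this sketch outside a pod.
client_secret="${HIP_KEYCLOAK_CLIENT_SECRET:-placeholder-for-local-testing}"
echo "keycloak client secret loaded (${#client_secret} characters)"
```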

Thank you all for your responses.

-- Girish Kumar
Source: StackOverflow

6/28/2021

confd will give you a solution: you can point it at the file from the ConfigMap and have it replace each environment-variable placeholder the file expects with the value set in the pod's environment.
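For reference, a minimal confd template resource using its env backend might look like this; the paths and key names are illustrative, not taken from the chart:

```toml
# /etc/confd/conf.d/rest.toml -- illustrative paths and keys
[template]
src  = "rest.properties.tmpl"
dest = "/opt/runtime/rest.properties"
keys = ["/hip/keycloak/client/secret"]
```

The template rest.properties.tmpl would then contain a line such as authentication.keycloak.clientSecret={{getenv "HIP_KEYCLOAK_CLIENT_SECRET"}}, and running confd -onetime -backend env renders the file once at startup.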

-- danny kaplunski
Source: StackOverflow