I have a container image that is loading multiple large files on startup. When restarting the container, all files have to be loaded again.
What I want to do now is to start six instances which only load one file each, given an environment variable. Now my question is how to configure this. What I could do is create a new deployment+service for each file, but that seems incorrect because 99% of the content is the same, only the environment variable is different. Another option would be to have one pod with multiple containers and one gateway-like containers. But then when the pod is restarting, all files are loaded again.
What's the best strategy to do this?
Although this does not directly address the question, I am adding an answer on how to create multiple container instances from one deployment that differ only in their environment variables, because this question pops up when googling that approach. For the original question, Harsh Manvar's answer is the correct, Kubernetes-approved way of handling it. I had the same problem and found a solution that needed a bit of refinement. With Helm you can specify a key-value map inside your values.yaml, which looks like this:
```yaml
envVars:
  key1: value1
  key2: value2
```
Now you need to modify your deployment.yaml to loop over this map and inject the values as environment variables into your containers:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "chart.fullname" . }}
  labels:
    {{- include "chart.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "chart.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "chart.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "chart.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        {{- range $key, $value := .Values.envVars }}
        - name: {{ $.Chart.Name }}-{{ $key }}
          securityContext:
            {{- toYaml $.Values.securityContext | nindent 12 }}
          image: "{{ $.Values.image.repository }}:{{ $.Values.image.tag | default $.Chart.AppVersion }}"
          env:
            - name: envVarName
              value: {{ $value | quote }}
          imagePullPolicy: {{ $.Values.image.pullPolicy }}
          ports:
            # port names must be unique within the pod, so the key is
            # appended here as well (keep names at 15 characters or fewer)
            - name: http-{{ $key }}
              containerPort: 80
              protocol: TCP
          resources:
            {{- toYaml $.Values.resources | nindent 12 }}
        {{- end }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
```
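With the example values above, the rendered containers section comes out roughly like this (a sketch, assuming the chart is named `chart` and an illustrative image; check with `helm template`):

```yaml
# Illustrative output of `helm template` for the containers section;
# chart name and image are assumptions, not from the original question.
containers:
  - name: chart-key1
    image: "myrepo/myimage:1.0.0"
    env:
      - name: envVarName
        value: "value1"
  - name: chart-key2
    image: "myrepo/myimage:1.0.0"
    env:
      - name: envVarName
        value: "value2"
```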
There are some notes that need to be considered, though:

- The `{{- range $key, $value := .Values.envVars }}` loop inside the containers spec creates multiple containers inside one pod. This may not be wanted due to resource restrictions.
- Inside the loop, `$key` and `$value` refer to the entries of `.Values.envVars`, as specified in the range command.
- Every other call inside that scope needs to be made from the root scope, which in Helm is addressed with the `$` sign, e.g. `{{ $.Chart.Name }}`.
- Container names within a pod must be unique, hence the `- name: {{ $.Chart.Name }}-{{ $key }}` portion.

Anyway, this took me quite some time to crack; I hope it is helpful to someone.
Ideally, you should keep it as one deployment+service per instance and create 5-6 different Secrets or ConfigMaps, as needed, storing the environment variables your application requires. Inject one Secret or ConfigMap into each of the different deployments.
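A minimal sketch of that approach (the resource names and the `FILE_TO_LOAD` variable are illustrative assumptions, not from the question): one ConfigMap per deployment, injected via `envFrom`, with everything else identical across the six deployments:

```yaml
# One ConfigMap per deployment; names and the FILE_TO_LOAD key are
# illustrative assumptions.
apiVersion: v1
kind: ConfigMap
metadata:
  name: loader-config-1
data:
  FILE_TO_LOAD: "file-1.bin"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: loader-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: loader-1
  template:
    metadata:
      labels:
        app: loader-1
    spec:
      containers:
        - name: loader
          image: myrepo/loader:1.0.0   # same image for every deployment
          envFrom:
            - configMapRef:
                name: loader-config-1  # only this reference differs
```

Repeating this for `loader-2` through `loader-6` changes only the ConfigMap contents and names; a restart of one deployment then reloads only its own file.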
> Another option would be to have one pod with multiple containers and one gateway-like container.

That doesn't look like a scalable approach: you would be running the 5 containers plus one gateway container all inside a single pod.