I have a Helm deployment that deploys a pod with 2 containers.
Now I need to add an init container for one of the containers in that pod.
I'm new to Helm. Kindly share a snippet to achieve this. Under spec I have defined 2 containers, where container 1 depends on container 2. So container 2 should be up first, and then I need to run the init container for container 1.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "test.fullname" . }}
  namespace: {{ .Values.global.namespace }}
  labels:
    {{- include "test.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "test.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "test.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ .Values.cloudsqlproxySa }}
      automountServiceAccountToken: true
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }} # For this I need to include the init container.
          securityContext:
            {{- toYaml .Values.test.securityContext | nindent 12 }}
          image: "{{ .Values.test.image.repository }}:{{ .Values.test.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.test.image.pullPolicy }}
          ports:
            - name: {{ .Values.test.port.name }}
              containerPort: {{ .Values.test.port.containerPort }}
              protocol: {{ .Values.test.port.protocol }}
          livenessProbe:
            httpGet:
              path: /
              port: {{ .Values.test.port.containerPort }}
          readinessProbe:
            httpGet:
              path: /
              port: {{ .Values.test.port.containerPort }}
          envFrom:
            - configMapRef:
                name: {{ .Values.configmap.name }}
          resources:
            {{- toYaml .Values.test.resources | nindent 12 }}
          volumeMounts:
            - name: gcp-bigquery-credential-file
              mountPath: /secret
              readOnly: true
        - name: {{ .Chart.Name }}-gce-proxy
          securityContext:
            {{- toYaml .Values.cloudsqlproxy.securityContext | nindent 12 }}
          image: "{{ .Values.cloudsqlproxy.image.repository }}:{{ .Values.cloudsqlproxy.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.cloudsqlproxy.image.pullPolicy }}
          command:
            - "/cloud_sql_proxy"
            - "-instances={{ .Values.cloudsqlConnection }}=tcp:{{ .Values.cloudsqlproxy.port.containerPort }}"
          ports:
            - name: {{ .Values.cloudsqlproxy.port.name }}
              containerPort: {{ .Values.cloudsqlproxy.port.containerPort }}
          resources:
            {{- toYaml .Values.cloudsqlproxy.resources | nindent 12 }}
          volumeMounts:
            - name: gcp-bigquery-credential-file
              mountPath: /secret
              readOnly: true
      volumes:
        - name: gcp-bigquery-credential-file
          secret:
            secretName: {{ .Values.bigquerysecret.name }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
Posting this as a community wiki answer based on the comments; feel free to edit and expand.
As @anemyte responded in the comments, it's not possible to start an init container after the main container has started; that is the whole point of init containers. See Understanding init containers in the Kubernetes documentation.
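For reference, init containers are declared in the pod spec under initContainers, next to containers, and every init container must run to completion before any regular container is started. A minimal sketch of the structure (the names, image and command below are placeholders, not values from the chart above):

    spec:
      initContainers:
        - name: init-example          # must exit successfully before the containers below start
          image: busybox:1.36
          command: ["sh", "-c", "echo preparing && sleep 2"]
      containers:
        - name: main-app
          image: nginx:1.25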
A possible solution from @DavidMaze is to separate the containers into different deployments and set up the application container to restart itself until the proxy container is up and running. Full quote:
If the init container exits with an error if it can't reach the proxy container, and you run the proxy container in a separate deployment, then you can have a setup where the application container restarts until the proxy is up and running. That would mean splitting this into two separate files in the templates directory.
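A rough sketch of that approach, assuming the proxy is moved into its own Deployment and exposed through a Service (the Service name, the busybox image and the wait loop are illustrative assumptions, not existing chart values):

templates/proxy-service.yaml — in-cluster endpoint for the separate proxy Deployment:

apiVersion: v1
kind: Service
metadata:
  name: {{ include "test.fullname" . }}-gce-proxy
  namespace: {{ .Values.global.namespace }}
spec:
  selector:
    app: {{ .Chart.Name }}-gce-proxy   # must match the pod labels of the proxy Deployment
  ports:
    - port: {{ .Values.cloudsqlproxy.port.containerPort }}
      targetPort: {{ .Values.cloudsqlproxy.port.containerPort }}

templates/deployment.yaml — application Deployment, with an init container that blocks until the proxy Service accepts TCP connections:

      initContainers:
        - name: wait-for-cloudsql-proxy
          image: busybox:1.36
          command:
            - sh
            - -c
            # retry until the proxy Service answers on its port
            - until nc -z {{ include "test.fullname" . }}-gce-proxy {{ .Values.cloudsqlproxy.port.containerPort }}; do echo waiting for proxy; sleep 2; done
      containers:
        - name: {{ .Chart.Name }}
          # ... application container unchanged from the original template ...

Note that with this split the proxy no longer runs in the same pod, so the application has to reach it through the Service name and port instead of localhost.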