I have a pre-upgrade hook in my Helm chart that looks like this:
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{.Release.Name}}-preupgrade"
  labels:
    heritage: {{.Release.Service | quote }}
    release: {{.Release.Name | quote }}
    chart: "{{.Chart.Name}}-{{.Chart.Version}}"
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-weight": "0"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    metadata:
      name: "{{.Release.Name}}"
      labels:
        heritage: {{.Release.Service | quote }}
        release: {{.Release.Name | quote }}
        chart: "{{.Chart.Name}}-{{.Chart.Version}}"
    spec:
      restartPolicy: Never
      securityContext:
        # Because we are running as non root user and group id/User id of the flink user is 1000/1000.
        fsGroup: {{ .Values.spec.securityContext.fsGroup }}
        runAsNonRoot: {{ .Values.spec.securityContext.runAsNonRootFlag }}
        runAsUser: {{ .Values.spec.securityContext.runAsUser }}
      containers:
      - name: pre-upgrade-job
        image: {{ .Values.registry }}/{{ .Values.imageRepo }}:{{ .Values.imageTag }}
        imagePullPolicy: {{ .Values.imagePullPolicy }}
        # Got error /bin/sleep: invalid time interval 'lcm_hook'
        args:
        - lcm_hook
        env:
        # Need to add this env variable so that the custom flink conf values will be written to $FLINK_HOME/conf.
        # This is needed for the hook scripts to connect to the Flink JobManager
        - name: FLINK_KUBE_CONFIGMAP_PATH
          value: {{ .Values.spec.config.mountPath }}
        volumeMounts:
        - name: {{ template "fullname" . }}-flink-config
          mountPath: {{ .Values.spec.config.mountPath }}
        - mountPath: {{ .Values.pvc.shared_storage_path }}/{{ template "fullname" . }}
          name: shared-pvc
        command: ["/bin/sh", "-c", "scripts/preUpgradeScript.sh","{{ .Values.pvc.shared_storage_path }}/{{ template "fullname" . }}"]
        command: ["/bin/sleep","10"]
      volumes:
      - name: {{ template "fullname" . }}-flink-config
        configMap:
          name: {{ template "fullname" . }}-flink-config
      - name: shared-pvc
        persistentVolumeClaim:
          claimName: {{ template "fullname" . }}-shared-pv-claim
Here, I need to pass an argument called "lcm_hooks" to my Docker container. But when I do this, the argument seems to override the argument of my second command ["/bin/sleep","10"], and during the upgrade phase I get the error

/bin/sleep: invalid time interval 'lcm_hook'

What is the right way to ensure that I can pass one argument to my container, and a totally different one to my bash command in the Helm hook?
"my docker container, called lcm_hooks"

Your hook has one container, and it is not called lcm_hooks; you named it pre-upgrade-job. I'm mentioning this because perhaps you forgot to include a piece of code, or misunderstood how it works.
"I need to pass an argument to my docker container"

Your YAML specifies both command and args, therefore the image's original entrypoint and cmd will be completely ignored. If you want to "pass an argument to the container" you should omit command from the YAML and override args only.
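As a minimal sketch of that container block with only args overridden (this assumes lcm_hook is something the image's own entrypoint knows how to handle):

containers:
- name: pre-upgrade-job
  image: {{ .Values.registry }}/{{ .Values.imageRepo }}:{{ .Values.imageTag }}
  imagePullPolicy: {{ .Values.imagePullPolicy }}
  # No command: here, so the image's ENTRYPOINT is kept;
  # args: replaces only the image's CMD.
  args:
  - lcm_hook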
"second command"

Your container spec specifies command twice, which means only the latter will execute. If you want to execute both of them you should chain them, for example with a shell, as in the sketch below.
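A sketch that chains both steps into a single command, following the same quoting style as your original line (assuming preUpgradeScript.sh takes the storage path as its first argument):

# Run the upgrade script first, then sleep; the sleep only runs if the script succeeds.
command: ["/bin/sh", "-c", "scripts/preUpgradeScript.sh {{ .Values.pvc.shared_storage_path }}/{{ template "fullname" . }} && sleep 10"]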
"What is the right way to ensure that I am able to pass one argument to my container, and a totally different one to my bash command in the helm hook"

You separate the hook container from the actual container you want Helm to deploy.
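Roughly (the file names below are illustrative, not taken from your chart): the hook Job keeps only the upgrade script, while the application container that needs lcm_hook lives in its own template, e.g. a Deployment:

# templates/pre-upgrade-job.yaml  (hook container: runs the upgrade script only)
command: ["/bin/sh", "-c", "scripts/preUpgradeScript.sh {{ .Values.pvc.shared_storage_path }}/{{ template "fullname" . }}"]

# templates/deployment.yaml  (application container: keeps the image entrypoint, gets its own argument)
args:
- lcm_hook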
I recommend that you review the container spec and the Helm hooks docs; they should clarify things.