I am stuck running a Docker container as part of a Kubernetes Job and specifying runtime arguments in the job template.
My Dockerfile specifies an ENTRYPOINT and no CMD directive:
ENTRYPOINT ["python", "script.py"]
From what I understand, this means that when I run the image with arguments, the container starts with the entrypoint from the Dockerfile and the arguments are appended to it. I can confirm that this actually works, because running the container directly with Docker does the trick:
docker run --rm image -e foo -b bar
In my case this starts script.py, which uses an argument parser to handle named arguments, with the intended values.
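For context, here is a minimal sketch of what such a script might look like (the -e and -b flags come from the command above; the use of argparse and everything else in this snippet is an assumption):

# script.py -- hypothetical sketch of the argument parsing
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("-e", required=True)  # e.g. "foo"
parser.add_argument("-b", required=True)  # e.g. "bar"
args = parser.parse_args()
print(args.e, args.b)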
The problem arises when I use a Kubernetes Job to do the same:
apiVersion: batch/v1
kind: Job
metadata:
  name: pipeline
spec:
  template:
    spec:
      containers:
      - name: pipeline
        image: test
        args: ["-e", "foo", "-b", "bar"]
      restartPolicy: Never  # a Job's pod template requires Never or OnFailure
In the pod that gets deployed, the correct entrypoint runs, but the specified arguments vanish. I also tried specifying the arguments like this:
args: ["-e foo", "-b bar"]
But this didn't help either. I don't understand why it isn't working, because the documentation clearly states: "If you supply only args for a Container, the default Entrypoint defined in the Docker image is run with the args that you supplied." The default entrypoint is indeed running, but the arguments are lost somewhere between Kubernetes and Docker.
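For completeness, my understanding is that the invocation could also be made fully explicit by setting command alongside args, which replaces the image's ENTRYPOINT instead of relying on it (a sketch I have not tried):

containers:
- name: pipeline
  image: test
  # command replaces the image's ENTRYPOINT, args replaces its CMD
  command: ["python", "script.py"]
  args: ["-e", "foo", "-b", "bar"]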
Does somebody know what I am doing wrong?
I actually got it working using the following YAML syntax:
args:
- "-e"
- "foo"
- "-b"
- "bar"
The array syntax that I used beforehand does not seem to work at all: everything was passed to the -e argument of my script like this:
-e " foo -b bar"
That's why the -b argument was marked as missing even though the arguments were populated in the container.
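For anyone debugging something similar: a quick way to check how the arguments actually ended up in the spec is to query the job object (assuming it is named pipeline, as above):

kubectl get job pipeline -o jsonpath='{.spec.template.spec.containers[0].args}'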