Kubernetes pod/deployment while passing args to container?

1/18/2019

I'm new to the docker/k8s world... I was asked whether I could deploy a container whose behavior is modified by args (typically, whether the app runs as the "master" or "slave" version), which I did. Maybe not the optimal solution, but it works:

This is a simple test to verify it. I made a custom image with a script inside, role.sh:

#!/bin/sh
ROLE=$1
echo "You are running "$ROLE" version of your app"

Dockerfile:

FROM centos:7.4.1708

COPY ./role.sh /usr/local/bin
RUN chmod a+x /usr/local/bin/role.sh
ENV ROLE=""
ARG ROLE

ENTRYPOINT ["role.sh"]
CMD ["${ROLE}"]

If I start this container with docker using the following command:

docker run -dit --name test docker.local:5000/test master

I end up with the following log, which is exactly what I am looking for:

You are running master version of your app

Now I want to have the same behavior on k8s, using a YAML file. I tried several ways, but none worked.

YAML file:

apiVersion: v1
kind: Pod
metadata:
  name: master-pod
  labels:
     app: test-master
spec:
  containers:
    - name: test-master-container
      image: docker.local:5000/test
      command: ["role.sh"]
      args: ["master"]

I saw so many different ways to do this and I must say that I still don't get the difference between ARG and ENV.

I also tried with

 - name: test-master-container
   image: docker.local:5000/test
   env:
     - name: ROLE
       value: master

and

 - name: test-master-container
   image: docker.local:5000/test    
   args:
     - master

but none of these worked; my pods always end up in the CrashLoopBackOff state. Thanks in advance for your help!

-- IsKor
args
docker
kubernetes
yaml

2 Answers

1/18/2019

To answer your specific situation: neither your ARG nor your ENV has any effect, given the way you've declared them.

Your workflow would be along the lines of:

  1. write your Dockerfile (as you did, ok)
  2. build your container image (you haven't provided the build command you used, but given the declaration of your ARG, I assume you passed a value there)
  3. run your container (either docker run or in a kubernetes pod/deployment/etc)

Your ENV ROLE="" means that during build you should have an empty variable $ROLE that you can use throughout the Dockerfile and it will be available under the same name in the environment of the running container (presumably as an empty string).

Your ARG ROLE means you need to pass a ROLE to your docker build command, which will be available throughout the Dockerfile during the build, presumably overwriting your previously declared ENV, but having no effect beyond the build process.
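For illustration, a minimal sketch of how that ARG would be supplied at build time (the tag matches the question; the ROLE value is arbitrary):

docker build --build-arg ROLE=master -t docker.local:5000/test .

Even then, the ENV ROLE="" wins inside the image, so the running container still sees an empty $ROLE.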

As far as your running script is concerned, the only ROLE that matters is ROLE=$1, i.e. the variable $ROLE takes the value of the first argument. This means it's pointless to specify a ROLE env in your Kubernetes YAML, because when your script runs it will overwrite ROLE with the first argument of your script, even when there is none (resulting in an empty value).

This spec looks correct. Also, don't forget you can replace args: ["master"] with something like args: ["$(ROLE)"]; Kubernetes expands $(ROLE) from an environment variable named ROLE defined in the container's env: section, not from the machine running kubectl.

apiVersion: v1
kind: Pod
metadata:
  name: master-pod
  labels:
     app: test-master
spec:
  containers:
    - name: test-master-container
      image: docker.local:5000/test
      command: ["role.sh"]
      args: ["master"]
-- Andrei Dascalu
Source: StackOverflow

1/18/2019

In terms of specific fields:

  • Kubernetes's command: matches Docker's "entrypoint" concept, and whatever is specified here is run as the main process of the container. You don't need to specify a command: in a pod spec if your Dockerfile has a correct ENTRYPOINT already.
  • Kubernetes's args: matches Docker's "command" concept, and whatever is specified here is passed as command-line arguments to the entrypoint.
  • Environment variables in both Docker and Kubernetes have their usual Unix semantics.
  • Dockerfile ARG specifies a build-time configuration setting for an image. The expansion rules and interaction with environment variables are a little odd. In my experience this has a couple of useful use cases ("which JVM version do I actually want to build against?"), but whatever ARG value was used at build time is baked into the image, so every container started from it behaves the same; it's not a good mechanism for run-time configuration.
  • For various things that could be set in either the Dockerfile or at runtime (ENV variables, EXPOSEd ports, a default CMD, especially VOLUME) there's no particular need to "declare" them in the Dockerfile to be able to set them at run time.
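To make the first two points concrete, here is a minimal sketch of how the pod spec fields line up with the Dockerfile directives of the question's image (field values taken from the question):

containers:
  - name: test-master-container
    image: docker.local:5000/test
    command: ["role.sh"]   # overrides the image's ENTRYPOINT
    args: ["master"]       # overrides the image's CMD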

There are a couple of more-or-less equivalent ways to do what you're describing. (I will use docker run syntax for the sake of compactness.) Probably the most flexible way is to have ROLE set as an environment variable; when you run the entrypoint script you can assume $ROLE has a value, but it's worth checking.

#!/bin/sh
# --> I expect $ROLE to be set
# --> Pass some command to run as additional arguments
if [ -z "$ROLE" ]; then
  echo "Please set a ROLE environment variable" >&2
  exit 1
fi
echo "You are running $ROLE version of your app"
exec "$@"

docker run --rm -e ROLE=some_role docker.local:5000/test /bin/true

In this case you can specify a default ROLE in the Dockerfile if you want to.

FROM centos:7.4.1708
COPY ./role.sh /usr/local/bin
RUN chmod a+x /usr/local/bin/role.sh
ENV ROLE="default_role"
ENTRYPOINT ["role.sh"]

A second path is to take the role as a command-line parameter:

#!/bin/sh
# --> pass a role name, then a command, as parameters
ROLE="$1"
if [ -z "$ROLE" ]; then
  echo "Please pass a role as a command-line option" >&2
  exit 1
fi
echo "You are running $ROLE version of your app"
shift        # drops first parameter
export ROLE  # makes it an environment variable
exec "$@"

docker run --rm docker.local:5000/test some_role /bin/true
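The Kubernetes equivalent of that docker run would be roughly (a sketch; names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: master-pod
spec:
  containers:
    - name: test-master-container
      image: docker.local:5000/test
      args: ["some_role", "/bin/true"]   # the role first, then the command to exec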

I would probably prefer the environment-variable path, both because it's a little easier to supply multiple unrelated options that way and because it avoids mixing "settings" and "the command" in the "command" part of the Docker invocation.

As to why your pod is "crashing": Kubernetes generally expects pods to be long-running, so if you write a container that just prints something and exits, Kubernetes will restart it, and when it repeatedly fails to stay up, it will wind up in the CrashLoopBackOff state. For what you're trying to do right now, don't worry about it; look at the kubectl logs of the pod instead. Consider setting the pod spec's restartPolicy if this bothers you.
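If you do want a run-once pod, a sketch of that last suggestion (the rest of the spec is as above):

apiVersion: v1
kind: Pod
metadata:
  name: master-pod
spec:
  restartPolicy: Never   # the default, Always, is what causes the restart loop
  containers:
    - name: test-master-container
      image: docker.local:5000/test
      args: ["master"]

Then kubectl logs master-pod shows the script's output after the pod has run to completion.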

-- David Maze
Source: StackOverflow