For example, I built a spark:2.4.3 Docker image and set ENV SPARK_HOME in the Dockerfile. When I used SPARK_HOME in the YAML file (see the code below), I got an error:
${SPARK_HOME}/sbin/start-master.sh: no such file or directory
# ...
containers:
  - name: spark-master
    image: spark:2.4.3
    command:
      - ${SPARK_HOME}/sbin/start-master.sh
      - --host 0.0.0.0
      - --port $(SPARK_MASTER_PORT)
      - --webui-port $(SPARK_MASTER_WEBUI_PORT)
      - "&&"
      - tail -f $(SPARK_HOME)/logs/*
    env:
      - name: SPARK_MASTER_PORT
        valueFrom:
          configMapKeyRef:
            name: spark-config
            key: spark_master_port
      - name: SPARK_MASTER_WEBUI_PORT
        valueFrom:
          configMapKeyRef:
            name: spark-config
            key: spark_master_webui_port
# ...
Does this mean I have to use an absolute path? Is there any way to use environment variables in a Kubernetes YAML file?
It doesn't work that way. When you define an ENV in the Dockerfile, the variable exists only inside the running container. Kubernetes does not pass the command array through a shell, so shell syntax like ${SPARK_HOME} is never expanded; it is handed to the binary as a literal string. The only substitution Kubernetes performs is on $(VAR) references to variables declared in the container's env section of the spec. SPARK_HOME is set in the image, not in the spec, so neither form can resolve it.
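The difference is easy to see outside Kubernetes: an exec-style invocation (no shell) passes ${VAR} through literally, the same way Kubernetes passes command array elements to the binary, while a shell expands it first. A minimal sketch (MYVAR is just an illustrative name):

```shell
# Exec-style: /bin/echo receives the literal string '${MYVAR}', unexpanded.
env MYVAR=hello /bin/echo '${MYVAR}'          # prints: ${MYVAR}

# Shell-style: /bin/sh expands the variable before running echo.
env MYVAR=hello /bin/sh -c 'echo "${MYVAR}"'  # prints: hello
```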
If you need environment variables to run your program, use a bash script as the entrypoint:
#!/bin/bash
# Note: use ${VAR}, not $(VAR) — in bash, $(...) is command substitution.
${SPARK_HOME}/sbin/start-master.sh \
  --host 0.0.0.0 \
  --port ${SPARK_MASTER_PORT} \
  --webui-port ${SPARK_MASTER_WEBUI_PORT} \
  && tail -f ${SPARK_HOME}/logs/*
Because this script runs inside the container, the shell expands those environment variables at runtime.
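To wire this up, the script has to be copied into the image and set as the entrypoint. A minimal Dockerfile sketch, assuming the script is saved as entrypoint.sh (the path /opt/entrypoint.sh is illustrative):

```dockerfile
# Copy the startup script into the image and make it executable.
COPY entrypoint.sh /opt/entrypoint.sh
RUN chmod +x /opt/entrypoint.sh
# The script's own shell expands ${SPARK_HOME} and friends at runtime.
ENTRYPOINT ["/opt/entrypoint.sh"]
```

With this in place, the command field in the Kubernetes YAML can be dropped entirely.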