I have a Deployment whose pod environment variables are set from a ConfigMap:
envFrom:
  - configMapRef:
      name: map
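For context, here is roughly how that sits in the Deployment spec (the deployment name matches the commands below; the container name, image, and replica count are just placeholders):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mydeploy
  template:
    metadata:
      labels:
        app: mydeploy
    spec:
      containers:
        - name: app            # placeholder container name
          image: myapp:1.0     # placeholder image
          envFrom:
            - configMapRef:
                name: map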
My ConfigMap looks like this:
apiVersion: v1
data:
  HI: HELLO
  PASSWORD: PWD
  USERNAME: USER
kind: ConfigMap
metadata:
  name: map
All the pods have these env variables set from map. Now, if I change the ConfigMap file and apply it with kubectl apply -f map.yaml, I get the confirmation that the ConfigMap is configured. However, this does not trigger the creation of new pods with the updated env variables.
Interestingly, this one works:
kubectl set env deploy/mydeploy PASSWORD=NEWPWD
But not this one:
kubectl set env deploy/mydeploy --from=cm/map
But I am looking for a way to get new pods created with the updated env variables via the ConfigMap!
The simple answer is NO.
In case you are not using Helm and are looking for a hack: after updating the ConfigMap, just set a dummy env variable and keep updating its value to trigger the rolling update.
kubectl set env deploy/mydeploy DUMMY_ENV_FOR_ROLLING_UPDATE=dummyval
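To make sure the rollout actually fires on every update, the dummy value needs to be different each time; one way is to use a timestamp (a sketch assuming a Bash-like shell, the variable name is arbitrary):
kubectl apply -f map.yaml
kubectl set env deploy/mydeploy DUMMY_ENV_FOR_ROLLING_UPDATE="$(date +%s)"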
Interestingly, this one works:
kubectl set env deploy/mydeploy PASSWORD=NEWPWD
But not this one:
kubectl set env deploy/mydeploy --from=cm/map
This is expected behavior. Your pod manifest hasn't changed with the second command (when you use the ConfigMap), which is why Kubernetes is not recreating the pods.
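If you want to confirm this, dump the pod template before and after running the command and compare; a rollout only happens when this section changes (deployment name taken from the question):
kubectl get deploy mydeploy -o jsonpath='{.spec.template}'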
There are several ways to deal with that. Basically, what you can do is artificially change the Pod manifest every time the ConfigMap changes, e.g. by adding an annotation to the Pod with the sha256sum of the ConfigMap content. This is actually what Helm suggests you do. If you are using Helm, it can be done as:
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
[...]
Just make sure you add the annotation to the Pod (template) object, not to the Deployment itself.
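If you are not using Helm, you can apply the same idea manually by writing the ConfigMap's checksum into the pod template annotation whenever you update it; a rough sketch (the annotation key and shell pipeline are just one possible choice, and it assumes sha256sum is available):
CM_HASH=$(kubectl get cm map -o yaml | sha256sum | cut -d' ' -f1)
kubectl patch deploy mydeploy -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"checksum/config\":\"$CM_HASH\"}}}}}"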