OpenShift (and probably k8s, too) updates a deployment's existing environment variables and creates new ones when they have been changed in the respective DeploymentConfig in a template file before applying it.
Is there a way to remove already-existing environment variables if they are no longer specified in the template when you run oc apply?
There is a way to achieve what you need, and for that you need to patch your objects. Use the merge patch type (--type merge, i.e. application/merge-patch+json) and supply the complete, desired list of env vars as the patch.
As an example, let's consider this deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sample
  template:
    metadata:
      labels:
        app: sample
    spec:
      containers:
      - name: patch-demo-ctr
        image: gcr.io/google-samples/node-hello:1.0
        env:
        - name: VAR1
          value: "Hello, I'm VAR1"
        - name: VAR2
          value: "Hey, VAR2 here. Don't kill me!"
$ kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
mydeployment-db84d9bcc-jg8cb   1/1     Running   0          28s
mydeployment-db84d9bcc-mnf4s   1/1     Running   0          28s
$ kubectl exec -ti mydeployment-db84d9bcc-jg8cb -- env | grep VAR
VAR1=Hello, I'm VAR1
VAR2=Hey, VAR2 here. Don't kill me!
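If you prefer not to exec into a pod, the same list can be read from the Deployment spec itself with jsonpath (a sketch, using the deployment name from above):
$ kubectl get deployment mydeployment -o jsonpath='{.spec.template.spec.containers[0].env}'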
Now, to remove VAR2, we have to export the Deployment's YAML:
$ kubectl get deployments mydeployment -o yaml --export > patch-file.yaml
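Note that --export was deprecated in kubectl 1.14 and removed in 1.18; on newer versions you can simply dump the object and ignore the server-populated fields such as status and resourceVersion:
$ kubectl get deployments mydeployment -o yaml > patch-file.yaml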
Edit this file and remove the VAR2 entry:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: sample
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: sample
    spec:
      containers:
      - env:
        - name: VAR1
          value: Hello, I'm VAR1
        image: gcr.io/google-samples/node-hello:1.0
        imagePullPolicy: IfNotPresent
        name: patch-demo-ctr
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status: {}
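Optionally, newer kubectl versions let you preview the outcome first with a server-side dry run:
$ kubectl patch deployments mydeployment --type merge --patch "$(cat patch-file.yaml)" --dry-run=server -o yaml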
Now we need to patch it with the following command:
$ kubectl patch deployments mydeployment --type merge --patch "$(cat patch-file.yaml)"
deployment.extensions/mydeployment patched
Great. If we check our pods, we can see two new pods, while the old ones are being terminated:
$ kubectl get pods
NAME                           READY   STATUS        RESTARTS   AGE
mydeployment-8484d6887-dvdnc   1/1     Running       0          5s
mydeployment-8484d6887-xzkhb   1/1     Running       0          3s
mydeployment-db84d9bcc-jg8cb   1/1     Terminating   0          5m33s
mydeployment-db84d9bcc-mnf4s   1/1     Terminating   0          5m33s
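You can also wait for the rollout to complete explicitly:
$ kubectl rollout status deployment/mydeployment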
Now, if we check the new pods, we can see they have only VAR1:
$ kubectl exec -ti mydeployment-8484d6887-dvdnc -- env | grep VAR
VAR1=Hello, I'm VAR1
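Since the question is about OpenShift, the same approach works for a DeploymentConfig with oc patch. The patch also does not have to be a full export: because a JSON merge patch replaces lists wholesale rather than merging them, a file that carries only the containers entry with its name, image and the complete desired env is enough. A sketch, assuming a DeploymentConfig named mydeployment with the same container as above:
spec:
  template:
    spec:
      containers:
      - name: patch-demo-ctr
        image: gcr.io/google-samples/node-hello:1.0
        env:
        - name: VAR1
          value: "Hello, I'm VAR1"
$ oc patch dc mydeployment --type merge --patch "$(cat patch-file.yaml)"
Any variable missing from that list, such as VAR2, is removed, because the supplied containers list replaces the existing one.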