How would an environment variable from an existing kubernetes deployment get added to a new deployment?

5/22/2019

I am building a mutating webhook that adds an environment variable to a container in any deployment that carries a label flag. What I am seeing is that by the time the deployment's AdmissionReview reaches my mutating webhook, that environment variable has already been set, even though it is not present in the spec file that was applied with `kubectl apply -f spec.yaml`.

This issue only occurs when there is an existing deployment of the same name in the cluster, which has that environment variable set. This would imply to me that the settings from the existing deployment are somehow being incorporated into the new one. This doesn't make sense, since I should be able to remove an environment variable from a deployment by removing it from the spec file and then re-applying, which is not working in this case.

I have confirmed that none of the other mutating webhooks we use would copy environment variable information from an existing deployment. What other process could be responsible for this behavior?
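For context, the mutation being described can be sketched as a function that builds a JSON patch from the incoming deployment object. This is a minimal sketch, not the actual webhook code; the `mutate-env` label and the `MT_ENABLED`/`on` values are assumed names for illustration:

```python
# Minimal sketch of the mutation described above. The "mutate-env" label
# and the MT_ENABLED value are assumptions, not the real webhook's names.
def build_env_patch(deployment):
    """Return a JSON patch adding MT_ENABLED to the first container."""
    labels = deployment.get("metadata", {}).get("labels", {})
    if labels.get("mutate-env") != "true":
        return []  # no label flag: leave the deployment untouched

    container = deployment["spec"]["template"]["spec"]["containers"][0]
    env = container.get("env", [])
    if any(e["name"] == "MT_ENABLED" for e in env):
        return []  # already present: do not overwrite (the desired behavior)

    base = "/spec/template/spec/containers/0/env"
    if env:  # append to the existing env list
        return [{"op": "add", "path": base + "/-",
                 "value": {"name": "MT_ENABLED", "value": "on"}}]
    # no env list yet: create one
    return [{"op": "add", "path": base,
             "value": [{"name": "MT_ENABLED", "value": "on"}]}]
```

The "do not overwrite if already present" branch is exactly where the surprising behavior shows up: on a second apply, the variable is unexpectedly already in the object the webhook receives.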

@matt

The env field itself isn't missing in the new deployment, just a single item in that list.

This is the env field of the existing deployment in the cluster:

- env:
  - name: APP_NAME
    value: app
  - name: J_HOST
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: status.hostIP
  - name: MT_ENABLED
    value: "on"

This is the env field of the deployment spec:

- env:
  - name: APP_NAME
    value: app
  - name: J_HOST
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP

It is the environment variable `MT_ENABLED` that is the problem field; it is never in the spec file. On the FIRST deploy, the webhook adds it. On the SECOND deploy, the webhook should update it with a new value, but since it already exists in the API object received, it does not. (That skip-if-present logic is the behavior we wanted; the issue is that the variable shouldn't have been in the API object at all, since it is not in the base spec.)

@matt I think you may have solved this for me. The merge semantics say that `kubectl apply` compares the new spec against the `kubectl.kubernetes.io/last-applied-configuration` annotation to determine which fields have been deleted. That annotation appears to be stored BEFORE the webhook has the opportunity to add the `MT_ENABLED` field. Thus, when the three-way merge compares the new spec file (with no `MT_ENABLED` field) to the last-applied-configuration (also with no `MT_ENABLED` field), it sees no difference and decides not to alter the env field of the existing deployment.
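That reasoning can be illustrated with a toy model of the deletion rule (a sketch of the semantics only, not kubectl's actual merge code): an env entry is removed from the live object only when it appears in last-applied-configuration but is absent from the newly applied spec.

```python
# Toy model of the deletion rule in kubectl apply's three-way merge:
# an env entry is deleted from the live object only when it appears in
# last-applied-configuration but not in the new spec file.
def names_to_delete(last_applied_env, new_env):
    last = {e["name"] for e in last_applied_env}
    new = {e["name"] for e in new_env}
    return sorted(last - new)

# last-applied was stored BEFORE the webhook ran, so no MT_ENABLED here
last_applied = [{"name": "APP_NAME"}, {"name": "J_HOST"}]
new_spec = [{"name": "APP_NAME"}, {"name": "J_HOST"}]

print(names_to_delete(last_applied, new_spec))  # → []  MT_ENABLED is never removed
```

Since `MT_ENABLED` is missing from both sides of the comparison, no deletion is ever computed for it, and the webhook-injected value on the live deployment survives every re-apply.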

This is only speculation at this point. I am going to run a test that adds the `MT_ENABLED` field to the last-applied-configuration annotation and observe the behavior.

UPDATE: That was indeed the issue. I updated my webhook to also add the `MT_ENABLED` field to the last-applied-configuration annotation on the deployment. Kubernetes now recognizes that the env list on the existing deployment differs from the one on the new deployment, updates it with the env list from the new deployment, and everything behaves correctly. Thanks!
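A sketch of that fix, under assumed names (this is the shape of the change, not the exact webhook code): alongside the env patch, the webhook also rewrites the `kubectl.kubernetes.io/last-applied-configuration` annotation so the three-way merge treats the injected variable as a managed field.

```python
import json

ANNOTATION = "kubectl.kubernetes.io/last-applied-configuration"

def patch_last_applied(deployment, env_var):
    """Return a JSON patch adding env_var to the last-applied annotation."""
    raw = deployment.get("metadata", {}).get("annotations", {}).get(ANNOTATION)
    if raw is None:
        return []  # object was not created with kubectl apply
    last_applied = json.loads(raw)
    env = (last_applied["spec"]["template"]["spec"]["containers"][0]
           .setdefault("env", []))
    if not any(e["name"] == env_var["name"] for e in env):
        env.append(env_var)
    # '/' in the annotation key must be escaped as '~1' in a JSON pointer
    pointer = ("/metadata/annotations/"
               + ANNOTATION.replace("~", "~0").replace("/", "~1"))
    return [{"op": "replace", "path": pointer, "value": json.dumps(last_applied)}]
```

With the annotation patched, the next `kubectl apply` sees `MT_ENABLED` in last-applied but not in the new spec, computes a deletion for it, and the webhook can then re-add it with the current value.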

-- Max Flanders
kubernetes
