What can cause a pod in a deployment to mount a volume?

6/7/2020

First, what happened: we updated a ConfigMap (a key changed), then updated the Deployment to use the new key. Both updates were successful. Afterwards, we checked the events and found a volume mounting error caused by a reference to the old key.
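For context, the setup looked roughly like this. This is a minimal sketch, not our real manifests; all names, keys, and images are invented. The Deployment projects a single ConfigMap key into a volume via `items`, so a pod whose spec still references the old key name hits a volume error once that key is gone from the ConfigMap.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  settings-v2.yaml: |        # renamed from settings-v1.yaml in the update
    logLevel: info
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: nginx:1.19   # placeholder image
          volumeMounts:
            - name: config
              mountPath: /etc/app
      volumes:
        - name: config
          configMap:
            name: app-config
            items:
              # Pods created before the Deployment update still reference
              # the old key (settings-v1.yaml), which no longer exists.
              - key: settings-v2.yaml
                path: settings.yaml
```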

Below is how I investigated the error, and my reasoning. At first I thought that, since the error came from a reference to the old key, a pod must have crashed and restarted after I updated the ConfigMap but before I updated the Deployment, because I assumed volume mounting only happens when a pod starts (which I'm now not so sure about).

Then I checked the events again: there was no crash event.

My question is: is there anything other than a crash that causes a volume to be mounted? If there isn't, what could be the possible reason?

-- user1149293
configmap
kubernetes
volume

1 Answer

6/7/2020

https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#mounted-configmaps-are-updated-automatically

When a ConfigMap already being consumed in a volume is updated, projected keys are eventually updated as well. Kubelet is checking whether the mounted ConfigMap is fresh on every periodic sync. However, it is using its local ttl-based cache for getting the current value of the ConfigMap. As a result, the total delay from the moment when the ConfigMap is updated to the moment when new keys are projected to the pod can be as long as kubelet sync period (1 minute by default) + ttl of ConfigMaps cache (1 minute by default) in kubelet. You can trigger an immediate refresh by updating one of the pod’s annotations.
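In other words, mounting is not a one-time event at pod start: the kubelet re-syncs mounted ConfigMap volumes on every periodic sync, so a running pod whose volume still references a removed or renamed key can produce the error you saw, with no crash or restart involved. If you want pods to be recreated whenever the config changes, which closes the window where old pods reference a removed key, a common pattern (Helm uses it, for example) is a checksum annotation on the pod template. A sketch, reusing the hypothetical names from the question:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
      annotations:
        # Not interpreted by Kubernetes itself. Changing this value changes
        # the pod template, which triggers an ordinary rolling update, so new
        # pods always start against the current ConfigMap contents. Set it to
        # a hash of the ConfigMap data.
        checksum/config: "9f86d081884c7d659a2feaa0c55ad015"
    spec:
      containers:
        - name: app
          image: nginx:1.19   # placeholder image
          volumeMounts:
            - name: config
              mountPath: /etc/app
      volumes:
        - name: config
          configMap:
            name: app-config   # same hypothetical ConfigMap as in the question
```

Alternatively, as the docs quoted above note, updating one of an existing pod's annotations triggers an immediate kubelet refresh without recreating the pod.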

-- user1149293
Source: StackOverflow