What happens if a configMap (or secret) mounted as a volume in a running pod is deleted on the master?

1/3/2019

Let's say I have a pod with a configMap (or secret) volume. The ConfigMap (or secret) object is present during the pod's creation, but I delete the configMap (or secret) object on the master while the pod is running. What is the expected behavior? Is it documented anywhere?

Is the running pod terminated? Or are the configMap (or secret) files deleted while the pod continues to run?

This is the documentation I could find about updates; it doesn't mention anything about deletions:

When a ConfigMap already being consumed in a volume is updated, projected keys are eventually updated as well. Kubelet is checking whether the mounted ConfigMap is fresh on every periodic sync. However, it is using its local ttl-based cache for getting the current value of the ConfigMap. As a result, the total delay from the moment when the ConfigMap is updated to the moment when new keys are projected to the pod can be as long as kubelet sync period + ttl of ConfigMaps cache in kubelet.

-- yash desai
configmap
kubelet
kubernetes
kubernetes-secrets

1 Answer

1/4/2019

Nothing happens to your running workloads. Once they get scheduled by the kube-scheduler on the master(s) and then started by the kubelet on the node(s), ConfigMaps, Secrets, etc. get stored on the local filesystem of the node. The default paths look something like this:

# ConfigMaps
/var/lib/kubelet/pods/<pod-id>/volumes/kubernetes.io~configmap/configmapname/
# Secret
/var/lib/kubelet/pods/<pod-id>/volumes/kubernetes.io~secret/secret-token/

These actually end up being mounted somewhere in the container, on a path that you specify in the pod spec.
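For example, a ConfigMap volume mount in a pod spec might look like this (the pod, image, and ConfigMap names here are illustrative, not from the question):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod              # illustrative name
spec:
  containers:
  - name: app
    image: nginx              # illustrative image
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config  # keys of the ConfigMap appear as files here
  volumes:
  - name: config-volume
    configMap:
      name: my-config         # the ConfigMap projected into the volume
```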

When you delete the object in Kubernetes it actually gets deleted from the data store (etcd). But the files already projected onto the node's filesystem are not removed from the running pod. However, suppose your pods need to be restarted for whatever reason: they will not be able to restart, because the kubelet can no longer find the ConfigMap or Secret to mount.

Short answer: nothing happens to your running workloads, but if your pods need to be restarted they won't be able to restart.
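A rough way to observe this behavior yourself (the `kubectl` commands are standard, but the resource names are hypothetical and this requires a live cluster):

```shell
# Pod is running and consuming the ConfigMap as a volume
kubectl delete configmap my-config        # hypothetical ConfigMap name
kubectl exec my-pod -- ls /etc/config     # projected files are still on disk
kubectl delete pod my-pod                 # force a restart via its controller
kubectl get pods                          # the replacement pod is stuck
# (typically in ContainerCreating, with a "configmap not found"
# event visible in `kubectl describe pod`)
```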

-- Rico
Source: StackOverflow