Consistency guarantees of Kubernetes ConfigMaps?

1/28/2020

I'm trying to decide where to store some critical sharding configs, and I have not yet found enough documentation about the reliability of Kubernetes ConfigMaps to ease my mind.

Say I have a pod spec in a single cluster that injects an environment variable with the value of a ConfigMap entry at pod start (using configMapKeyRef), and I have many pods running based on this spec.
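For concreteness, a minimal sketch of the kind of spec I mean (the names `sharding-config`, `shard-map`, and the image are placeholders):

```yaml
# Placeholder pod spec: the SHARD_CONFIG env var is resolved from the
# ConfigMap once, at container start.
apiVersion: v1
kind: Pod
metadata:
  name: shard-worker
spec:
  containers:
    - name: worker
      image: example.com/worker:latest
      env:
        - name: SHARD_CONFIG
          valueFrom:
            configMapKeyRef:
              name: sharding-config
              key: shard-map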

  1. I kubectl edit the configmap entry and wait for the operation to succeed.
  2. I restart the pods.

Are these pods guaranteed to see the new configmap value? (Or, failing that, is there a window of time I'd need to wait before restarting the pods to ensure that they get the new value?)

Similarly, are all pods guaranteed to see a consistent value, assuming no ConfigMap edits occur while they are all restarting?

-- David Grant
configmap
kubernetes

1 Answer

1/28/2020

Kubernetes is an eventually consistent system, so there is no such guarantee. The recommended approach is to create a new ConfigMap whenever you want to change a value.

Changing the data held by a live ConfigMap in a cluster is considered bad practice. Deployments have no way of knowing that the ConfigMaps they reference have changed, so such in-place updates have no effect on already-running pods.

With declarative config management using Kustomize, this is easier to do via configMapGenerator.

The recommended way to change a Deployment's configuration is to

  1. create a new ConfigMap with a new name, and
  2. patch the Deployment, updating the name in the appropriate configMapKeyRef field.
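The two steps above can be sketched roughly as follows (the ConfigMap name `sharding-config-v2`, the Deployment name `worker`, and the env-var index are placeholders; adjust the JSON-patch path to match your actual spec):

```shell
# 1. Create a new ConfigMap under a new, versioned name.
kubectl create configmap sharding-config-v2 \
  --from-literal=shard-map=new-value

# 2. Patch the Deployment so configMapKeyRef points at the new name.
#    Because this changes the pod template, it triggers a rolling update.
kubectl patch deployment worker --type=json -p='[
  {"op": "replace",
   "path": "/spec/template/spec/containers/0/env/0/valueFrom/configMapKeyRef/name",
   "value": "sharding-config-v2"}
]'
```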

Deployment

When using Kustomize for both the Deployment and the ConfigMap with configMapGenerator, the ConfigMap's name is generated from its content, and references to the ConfigMap in the Deployment are updated with the generated name, so that a new rolling deployment is triggered whenever the content changes.
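A minimal sketch of such a kustomization (file names and literals are placeholders; Kustomize appends a content hash, producing a name like `sharding-config-<hash>` and rewriting the reference in deployment.yaml):

```yaml
# kustomization.yaml
resources:
  - deployment.yaml
configMapGenerator:
  - name: sharding-config
    literals:
      - shard-map=some-value
```

Running `kubectl apply -k .` then creates the hashed ConfigMap and updates the Deployment in one step.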

-- Jonas
Source: StackOverflow