Kubernetes configMap versions: why?

11/22/2021

I'm confused about why configMap versions are used. I'm seeing multiple versions of a configMap in my k8s cluster attached to a deployment/sts. I was expecting that if I apply some changes in my YAML they would be reflected in all of the configMap versions, but that is not happening. Can someone help with this?

I don't have any subdirectory in the configMap mount.

Do you know how much time it takes for these changes to be reflected in the mounted volumes? Or what am I missing here?

Example configMap output:

NAME                    DATA   AGE
ca-bundles              4      3d17h
c-rules-alerts          1      3d17h
c-rules-alerts-v000     1      3d16h
c-rules-alerts-v001     1      50m
c-rules-metrics         1      3d17h
c-rules-metrics-v000    1      3d16h
c-rules-metrics-v001    1      50m
c-alertmanager          1      3d17h
c-alertmanager-v000     1      3d16h
c-server                3      3d17h
c-server-v000           3      3d16h

Here is the mount config:

          volumeMounts:
            - name: config-metric-volume
              mountPath: /oracle_exporter
      volumes:
        - name: data-volume
          emptyDir:
            sizeLimit: 2Gi
        - name: config-metric-volume
          configMap:
            name: chron-rules-metrics
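
For reference, the configMap contents and the files mounted in the pod can be compared with something like this (the pod name below is just a placeholder):

kubectl get configmap chron-rules-metrics -o yaml
kubectl exec <pod-name> -- ls -l /oracle_exporter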
-- Kiran KH
helm3
kubernetes
kubernetes-helm
kubernetes-pod

1 Answer

11/23/2021

Kubernetes provides a way to store rollout history for some resources by default; one of those is Deployments.

You can run commands such as kubectl apply with the --record flag, which records the command that caused each change in the rollout history, allowing you to perform a rollback later.

See this answer for more details.
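
For example, assuming a Deployment named my-awesome-app (a placeholder name), the history and rollback commands look roughly like this:

kubectl apply -f deployment.yaml --record
kubectl rollout history deployment/my-awesome-app
kubectl rollout undo deployment/my-awesome-app --to-revision=2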

Other resources don't have that feature, so tools like Helm and kapp create versioned copies of those resources to allow you to roll back the deployment/statefulset/etc. along with all of its related resources like secrets, configMaps, ingresses, etc.

Some tools store that information in annotations; others duplicate the current resource under a new name with a suffix following a convention, which is what you are seeing with the -v000/-v001 configMaps.
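
With Helm, for example, the release history and a rollback look roughly like this (my-release is a placeholder release name):

helm history my-release
helm rollback my-release 1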


So, with that out of the way, we can talk about seeing the changes reflected in your deployments.

By default, again, if you change anything in a deployment (sts, ds, etc.) spec that impacts the pods, Kubernetes will trigger a rollout and recreate all the pods to reflect the new spec.
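
For instance, updating the image in the pod template is a spec change that triggers a rollout (deployment and container names here are placeholders):

kubectl set image deployment/my-awesome-app app=my-image:1.2.3
kubectl rollout status deployment/my-awesome-app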

That doesn't happen when you update secrets and configMaps that are mounted into pods. There are arguments for and against that behaviour: some think it is useful, others think it might lead to huge chain reactions in the cluster.

Regardless of where this discussion ends, this is the behaviour today.

To see configMap or secret changes reflected in the pods, you must trigger a restart yourself:

kubectl rollout restart deployment/my-awesome-app
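
Since you are deploying with Helm, another common pattern is to add a checksum annotation to the pod template so that a configMap change also changes the deployment spec and triggers a rollout automatically. A minimal sketch, assuming your configMap is rendered from templates/configmap.yaml (a placeholder path):

  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}

With that in place, helm upgrade will restart the pods whenever the rendered configMap content changes.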

-- Magus
Source: StackOverflow