Configmap changes don't reflect automatically on the respective pods

7/9/2019
    apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1
    kind: Deployment
    metadata:
      name: consoleservice1
    spec:
      selector:
        matchLabels:
          app: consoleservice1
      replicas: 3 # tells deployment to run 3 pods matching the template
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxSurge: 1
          maxUnavailable: 1
      minReadySeconds: 5
      template: # create pods using pod definition in this template
        metadata:
          labels:
            app: consoleservice1
        spec:
          containers:
          - name: consoleservice
            image: chintamani/insightvu:ms-console1
            readinessProbe:
              httpGet:
                path: /
                port: 8385
              initialDelaySeconds: 5
              periodSeconds: 5
              successThreshold: 1
            ports:
            - containerPort: 8384
            imagePullPolicy: Always
            volumeMounts:
              - mountPath: /deploy/config
                name: config
          volumes:
            - name: config
              configMap:
                name: console-config

To create the configmap I am using this command:

kubectl create configmap console-config --from-file=deploy/config

When I change the configmap, the change doesn't reflect on the pods automatically; every time I have to restart the pod. How can I make this happen automatically?

-- Chintamani
kubernetes
kubernetes-pod

2 Answers

9/18/2019

Pods and configmaps are completely separate objects in Kubernetes, and pods don't automatically restart themselves when a configmap changes. Files mounted from a configmap are eventually synced into the container by the kubelet, but the application won't reload them unless it watches those files, and configmap values injected as environment variables are never refreshed while the pod is running.
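
As a manual workaround on recent kubectl versions (1.15+), you can trigger a rolling restart of the deployment yourself after updating the configmap:

kubectl rollout restart deployment consoleservice1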

There are a few alternatives to achieve this automatically.

  1. Use Wave, a Kubernetes controller that watches for a specific annotation and updates the deployment whenever the configmap changes: https://github.com/pusher/wave (see the annotation sketch after this list).

  2. Use https://github.com/stakater/Reloader; Reloader can watch configmap changes and update the pods to pick up the new configuration:

        kind: Deployment
        metadata:
          annotations:
            reloader.stakater.com/auto: "true"
        spec:
          template:
            metadata:
  3. You can add a custom configHash annotation to the deployment and, in your CI/CD pipeline or while deploying the application, use yq to replace its value with the hash of the configmap. Whenever the configmap changes, Kubernetes detects the changed annotation in the deployment's pod template and rolls the pods with the new configuration (a fuller script sketch follows after the manifest below).

yq w --inplace deployment.yaml spec.template.metadata.annotations.configHash $(kubectl get cm/configmap -oyaml | sha256sum)

        apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1
        kind: Deployment
        metadata:
          name: application
        spec:
          selector:
            matchLabels:
              app: consoleservice1
          replicas: 3              
          template:
            metadata:
              labels:
                app: consoleservice1
              annotations:
                configHash: ""


-- Mahattam
Source: StackOverflow

7/10/2019

Thank you guys, I was able to fix it. I am using Reloader to reflect configmap changes on the pods. First install Reloader:

kubectl apply -f https://raw.githubusercontent.com/stakater/Reloader/master/deployments/kubernetes/reloader.yaml

Then add the annotation inside your deployment.yml file:

apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: consoleservice1
  annotations:
    configmap.reloader.stakater.com/reload: "console-config"

It will restart your pods gradually.
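
To push a configmap change once this is in place, a small sketch of the update step, assuming kubectl 1.18+ for --dry-run=client (older versions use plain --dry-run); regenerating with a dry run and piping into kubectl apply is a common way to update a configmap that was created with --from-file:

    # Regenerate the configmap from the files and apply the change;
    # Reloader then rolls the pods that reference console-config.
    kubectl create configmap console-config --from-file=deploy/config \
      --dry-run=client -o yaml | kubectl apply -f -

    # Watch the pods restart gradually.
    kubectl get pods -l app=consoleservice1 -w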

-- Chintamani
Source: StackOverflow