How to update the ConfigMaps for workloads running on a GKE cluster using Helm?

7/31/2019

I have a GKE cluster running Prometheus and the Prometheus Alertmanager as StatefulSets. Each StatefulSet runs the pods that collect the metrics. There are two ConfigMaps: one for Prometheus (containing alerts.yaml, rules.yaml, and prometheus.yaml) and one for the Alertmanager (containing alertmanager.yml).

Now I have a new task: sending the alerts to Slack. I have updated alerts.yaml, rules.yaml, and alertmanager.yml so that alerts will be sent to Slack.

I need to update all of these .yaml files for Prometheus and the Alertmanager on the cluster running the workloads and pods in GKE, using Helm. Can someone please let me know how I can achieve that with Helm?

-- tank
google-kubernetes-engine
kubectl
kubernetes-helm
prometheus

1 Answer

8/3/2019

I also recommend using Helm to manage your services, but you can update a ConfigMap without using Helm at all.
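For the Helm side of the question, the usual flow is to keep the ConfigMaps as templates inside your chart, edit them there, and run a helm upgrade so the rendered ConfigMaps are updated in the cluster. A minimal sketch, assuming a hypothetical release name and local chart directory (adjust both to your setup):

helm upgrade prometheus ./prometheus-chart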

First, you can change the data inside a ConfigMap by using apply.

kubectl apply -f fileName.yaml
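As a sketch of what such a file might contain for the Slack use case, assuming a hypothetical ConfigMap name and a placeholder webhook URL:

apiVersion: v1
kind: ConfigMap
metadata:
  name: alertmanager-config   # hypothetical name; match what your StatefulSet mounts
data:
  alertmanager.yml: |
    route:
      receiver: slack-notifications
    receivers:
    - name: slack-notifications
      slack_configs:
      - api_url: https://hooks.slack.com/services/XXX/YYY/ZZZ   # placeholder webhook
        channel: '#alerts'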

But this doesn't update the ConfigMap data already loaded inside your pods; you need to restart the pods to see the changes. For that you have a few different options:

Option 1

Manual operation.

Delete the pods. This makes the deployment controller create new ones to match the replica count in your deployment definition, and each pod picks up the new ConfigMap when it starts.

kubectl delete pod <pod-name>

With this solution, you delete the pods one by one.
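If the pods share a common label, you can also delete them all at once; a sketch, assuming a hypothetical app=prometheus label:

kubectl delete pod -l app=prometheus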

Scale a deployment down and up: you can manually scale your deployment down to 0 and back up again to create new pods that use the new ConfigMap.

kubectl scale deployment <deployment-name> --replicas=0 && kubectl scale deployment <deployment-name> --replicas=2

With this solution, you don't need to delete the pods one by one.
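Since the question uses StatefulSets, the same approach works there too; a sketch, assuming the StatefulSet is named prometheus:

kubectl scale statefulset prometheus --replicas=0 && kubectl scale statefulset prometheus --replicas=2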

Option 2

You can use an env var definition on the deployment to force a rolling update. This variable is not used inside the pod, but changing it by editing the deployment forces the rolling update.

apiVersion: apps/v1   # apps/v1beta1 is deprecated; apps/v1 is the stable Deployment API
kind: Deployment
metadata:
  labels:
    run: helloworld
  name: helloworld
spec:
  replicas: 1
  selector:
    matchLabels:
      run: helloworld
  template:
    metadata:
      labels:
        run: helloworld
    spec:
      containers:
      - image: helloworld
        name: helloworld
        env:
        - name: RELOAD_VAR   # dummy variable used only to trigger rolling updates
          value: TAG_VAR0

Every time you change the RELOAD_VAR value, the deployment performs a rolling update, creating new pods that load the new ConfigMap.
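One way to bump the variable without opening an editor is kubectl set env; a sketch against the hypothetical deployment above:

kubectl set env deployment/helloworld RELOAD_VAR=TAG_VAR1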

Option 3

In a more automated way, you can use a special kind of controller that watches ConfigMaps for changes and reloads the pods whose annotations associate them with that ConfigMap.

You can review https://github.com/stakater/Reloader; you simply need to deploy it on your cluster and put the annotation on your deployment.

kind: Deployment
metadata:
  annotations:
    configmap.reloader.stakater.com/reload: "foo-configmap"
spec:
  template:
    metadata:
      # ... the rest of the pod template goes here unchanged

Every time you change your ConfigMap, no matter how you do it, the controller detects the change and reloads your pods automatically.
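As a sketch of the deployment step, the Reloader README describes installing it either with plain manifests or via its Helm chart (URLs as given in the project README; the helm install line uses Helm 2 syntax, current at the time of this answer):

kubectl apply -f https://raw.githubusercontent.com/stakater/Reloader/master/deployments/kubernetes/reloader.yaml

or, with Helm:

helm repo add stakater https://stakater.github.io/stakater-charts
helm install stakater/reloader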

-- wolmi
Source: StackOverflow