RollingUpdate for StatefulSet doesn't restart pods and changes from updated ConfigMaps are not reflected

8/5/2019

I am using Prometheus and Prometheus Alertmanager to send alerts.

I already have a Kubernetes StatefulSet running on GKE. I updated the ConfigMaps for Prometheus and Prometheus Alertmanager and did a RollingUpdate for the StatefulSet, but the pods did not restart and it seems they are still using the old ConfigMaps.

I used the following command to update the ConfigMaps:

kubectl create configmap prometheus-alertmanager-config --from-file alertmanager.yml -n mynamespace -o yaml --dry-run | kubectl replace -f -

Similarly, I updated the ConfigMap for Prometheus as well.
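That command was roughly the following (the ConfigMap and file names here are placeholders for my actual ones):

kubectl create configmap prometheus-config --from-file prometheus.yml -n mynamespace -o yaml --dry-run | kubectl replace -f -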

For the RollingUpdate I used the command below:

kubectl patch statefulset prometheus-alertmanager -p '{"spec":{"updateStrategy":{"type":"RollingUpdate"}}}' -n mynamespace

Also, when I did the RollingUpdate it showed:

statefulset.apps/prometheus-alertmanager patched (no change)

I don't know what is happening. Is it not possible to make the pods in a StatefulSet pick up updated ConfigMaps by doing a RollingUpdate, or am I missing something here?

-- tank
configmap
kubernetes-statefulset
prometheus
prometheus-alertmanager

1 Answer

8/5/2019

The Prometheus pods have to be restarted in order to pick up an updated ConfigMap or Secret.

A rolling update will not always restart the pods; it only does so when a direct property of the pod template changes, for example the image tag. That is also why your patch reported "(no change)": the updateStrategy was already RollingUpdate, so nothing in the pod template changed and no rollout was triggered.
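One way to force such a change, as a rough sketch (the annotation name here is arbitrary, any key will do), is to patch a throwaway annotation into the pod template; because the template changes, the RollingUpdate strategy then rolls the pods:

kubectl patch statefulset prometheus-alertmanager -n mynamespace -p '{"spec":{"template":{"metadata":{"annotations":{"config-updated-at":"'"$(date +%s)"'"}}}}}'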

kubectl v1.15 now provides a rollout restart sub-command that allows you to restart Pods in a Deployment - taking into account your surge/unavailability config - and thus have them pick up changes to a referenced ConfigMap, Secret or similar. It’s worth noting that you can use this with clusters older than v1.15, as it’s implemented in the client.

Example usage: kubectl rollout restart deployment/prometheus to restart a specific deployment. Easy as that!
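Since you are running a StatefulSet rather than a Deployment, the equivalent for your case (assuming your kubectl client is v1.15 or newer) should be:

kubectl rollout restart statefulset/prometheus-alertmanager -n mynamespace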


-- cecunami
Source: StackOverflow