I read the following in a book written by the Helm creators about the `--force` option:

> Sometimes, though, Helm users want to make sure that the pods are restarted. That's where the --force flag comes in. Instead of modifying the Deployment (or similar object), it will delete and re-create it. This forces Kubernetes to delete the old pods and create new ones.
What I understand from this is that if I install a chart, change the number of replicas (= number of pods), and then upgrade the chart, it should recreate all the pods. That is not what happens in my case, and I want to understand what I am missing.
Let's take a hypothetical minimal Deployment (many required details omitted):
```yaml
spec:
  replicas: 3
  template:
    spec:
      containers:
        - image: abc:123
```
and you change it only to increase the replica count:
```yaml
spec:
  replicas: 5 # <-- this is the only change
  template:
    spec:
      containers:
        - image: abc:123
```
The Kubernetes Deployment controller looks at this change and says, "I already have 3 Pods running `abc:123`; if I leave those alone and start 2 more, I will have 5, and the system will look like what the Deployment spec requests." So, absent any change to the embedded Pod spec, the existing Pods are left alone and the Deployment simply scales up.
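That decision can be sketched as a tiny reconcile loop. This is a simplified illustration, not the real controller code; the pod-naming scheme and `template_hash` function are made up for the example:

```python
import hashlib
import itertools

def template_hash(pod_template: dict) -> str:
    """Illustrative hash of the Pod template; note that replicas is NOT part of it."""
    return hashlib.sha256(repr(sorted(pod_template.items())).encode()).hexdigest()[:10]

def reconcile(existing_pods: list, spec: dict) -> list:
    """Keep Pods whose template hash still matches; add or remove Pods to hit replicas."""
    desired_hash = template_hash(spec["template"])
    # Pods built from an older template would have to be replaced...
    kept = [p for p in existing_pods if p["hash"] == desired_hash]
    # ...but a replicas-only change leaves the hash alone, so we just scale.
    counter = itertools.count(len(kept))
    while len(kept) < spec["replicas"]:
        kept.append({"name": f"deployment-{desired_hash}-{next(counter)}",
                     "hash": desired_hash})
    return kept[: spec["replicas"]]

spec = {"replicas": 3, "template": {"image": "abc:123"}}
pods = reconcile([], spec)

# Only the replica count changes: the original 3 Pods survive, 2 new ones appear.
spec["replicas"] = 5
scaled = reconcile(pods, spec)
```

Running this, `scaled` contains the same three Pod objects plus two new ones, which is exactly the scale-up behavior described above.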
```
deployment-12345-aaaaa      deployment-12345-aaaaa
deployment-12345-bbbbb      deployment-12345-bbbbb
deployment-12345-ccccc ---> deployment-12345-ccccc
                            deployment-12345-ddddd
                            deployment-12345-eeeee
(replicas: 3)               (replicas: 5)
```
Usually this is fine, since you're running the same image version and the same code. If you do need to forcibly restart things, I'd suggest `kubectl rollout restart deployment/its-name` rather than trying to convince Helm to do it.
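For contrast, `kubectl rollout restart` works by stamping a `kubectl.kubernetes.io/restartedAt` annotation into the Pod template. Because the template itself changes, every existing Pod stops matching and gets replaced. A rough sketch of that difference (the flattened annotation key, timestamp, and hash function are illustrative, not the real controller's internals):

```python
import hashlib

def template_hash(template: dict) -> str:
    """Illustrative stand-in for the Deployment's pod-template hash."""
    return hashlib.sha256(repr(sorted(template.items())).encode()).hexdigest()[:10]

template = {"image": "abc:123"}
before = template_hash(template)

# `kubectl rollout restart` adds/updates an annotation on the Pod template
# (hypothetical timestamp value)...
template["annotations/kubectl.kubernetes.io/restartedAt"] = "2024-01-01T00:00:00Z"
after = template_hash(template)

# ...so the template hash changes and every Pod must be recreated,
# whereas changing `replicas` never touches the template at all.
assert before != after
```

This is why a replicas-only `helm upgrade` doesn't restart anything, even with `--force`'s delete-and-recreate of the Deployment object: the Pod template that the controller compares against is still identical.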