I have several Celery workers running in minikube, working on tasks passed through RabbitMQ. Recently I updated some of the code for the Celery workers and changed the image. When I run

helm upgrade release_name chart_path

all the existing worker pods are terminated and all the unfinished tasks are abandoned. Is there a way to upgrade the Helm chart without terminating the old pods?

I tried
helm install -n new_release_name chart_path
which gives me a new set of Celery workers; however, due to some limitations, I am not allowed to deploy pods in a new release. I also tried

helm upgrade release_name chart_path --set deployment.name=worker2
because I thought having a new deployment name would stop Helm from deleting the old pods, but that doesn't work either.

This is just how Kubernetes Deployments work. What you should do is fix your Celery worker image so that it waits and tries to complete whatever tasks are in progress before actually shutting down. This should probably already be the case, unless you did something funky such that the SIGTERM isn't making it to Celery. See https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods for details.
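For context: on SIGTERM Celery does a warm shutdown, i.e. it stops consuming new messages and waits for the tasks it is already running to finish. If a task could still be cut off (for example by the pod's termination grace period expiring), enabling late acknowledgements means RabbitMQ only removes a message once the task has actually completed, so anything a killed worker leaves unfinished gets redelivered to the surviving workers. A minimal sketch, with a placeholder app name, broker URL and task (not taken from your chart):

from celery import Celery

# Placeholder broker URL; in the cluster this would point at your RabbitMQ service.
app = Celery("workers", broker="amqp://guest:guest@rabbitmq:5672//")

app.conf.update(
    # Acknowledge the message only after the task finishes, so a task that is
    # interrupted mid-run is redelivered by RabbitMQ instead of being lost.
    task_acks_late=True,
    # Don't prefetch a pile of messages that would sit unacked in a dying worker.
    worker_prefetch_multiplier=1,
)

@app.task
def process(item):
    ...  # hypothetical task body

On the Kubernetes side, you'll also want to check that the celery worker process actually receives the SIGTERM (e.g. it runs as PID 1, not behind a shell script that swallows signals) and that the pod's terminationGracePeriodSeconds is longer than your longest task, otherwise the SIGKILL arrives before the warm shutdown can finish.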