Kubernetes Helm: waiting before killing the old pods during a Helm deployment

11/21/2019

I have a "big" micro-service (website) with 3 pods deployed with Helm Chart in production env, but when I deploy a new version of the Helm chart, during 40 seconds (time to start my big microservice) I have a problem with the website (503 Service Unavailable)

So I am looking for a way to tell Kubernetes not to kill the old pods before the new version has completely started.

I tried the --wait and --timeout flags, but they did not work for me.

My EKS version: "v1.14.6-eks-5047ed"

-- Inforedaster
kubernetes
kubernetes-helm
microservices

1 Answer

11/21/2019

Without more details about the Pods, I'd suggest:

Use a Deployment (if you aren't already) so that the Pods are managed by a ReplicaSet, which supports rolling updates. Combine that with a configured startup probe (if on k8s v1.16+) or a readiness probe, so that Kubernetes knows when the new Pods are ready to take on traffic; a Pod is considered ready only when all of its containers are ready.
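For concreteness, here is a minimal sketch of such a Deployment manifest, assuming the container serves HTTP on port 8080 and exposes a /healthz health endpoint; the name, image, port, path, and probe timings are illustrative, not taken from the question:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: website
spec:
  replicas: 3
  selector:
    matchLabels:
      app: website
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never remove an old Pod before a new one is Ready
      maxSurge: 1         # bring up one extra Pod at a time during the rollout
  template:
    metadata:
      labels:
        app: website
    spec:
      containers:
        - name: website
          image: example.com/website:1.2.3   # hypothetical image
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /healthz        # hypothetical health endpoint
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 5
            failureThreshold: 6     # tolerates roughly 40s of startup time
```

With maxUnavailable: 0, Kubernetes keeps the old Pods serving traffic until a replacement Pod passes its readiness probe, and the probe settings above allow for the ~40-second startup described in the question. A readiness probe is also what makes helm upgrade --wait meaningful: without one, a Pod is reported Ready as soon as its containers are running, so Helm's wait can return before the application is actually able to serve requests.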

-- apisim
Source: StackOverflow