Helm upgrade of custom microservice causes temporary downtime

3/11/2020

During deployment of a new version of the application, the 4 pods are terminated sequentially and replaced by newer ones; for those ~10 minutes, another microservice that calls the app is still hitting the old endpoints, causing 502/404 errors. Does anyone know of a way to deploy 4 new pods first, then drain traffic from the old pods to the new ones, and terminate the old pods only after all connections to the previous version have closed?
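
For reference, the "start all replacement pods before taking any old one out of rotation" behavior being asked for maps to a Deployment rolling-update strategy roughly like the sketch below; the names, image tag, and sleep duration are illustrative assumptions, not details from the question:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                # hypothetical name
spec:
  replicas: 4
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 4             # start all 4 replacement pods up front
      maxUnavailable: 0       # keep every old pod serving until a new one is Ready
  template:
    metadata:
      labels:
        app: my-app
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: app
          image: my-app:v2    # hypothetical image tag
          lifecycle:
            preStop:
              exec:
                # brief pause (requires a sleep binary in the image) so the
                # pod is removed from the Service endpoints before SIGTERM,
                # letting in-flight connections drain
                command: ["sleep", "15"]
```

With maxSurge equal to the replica count and maxUnavailable at 0, all new pods come up and pass readiness before any old pod is terminated.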

-- CptDolphin
kubernetes
kubernetes-helm

1 Answer

3/11/2020

This probably means you don't have a readiness probe set up. The default rolling update already replaces only 25% of the pods at a time, and if you have a readiness probe, the rollout waits until the new pods are actually available and Ready before continuing; otherwise it only waits until they start.
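
As a concrete illustration, a readiness probe is declared on the container spec; the /healthz path and port 8080 below are assumptions about the app, not details from the thread:

```yaml
containers:
  - name: app
    image: my-app:v2          # hypothetical image tag
    ports:
      - containerPort: 8080
    readinessProbe:
      httpGet:
        path: /healthz        # assumed health endpoint
        port: 8080
      initialDelaySeconds: 5  # wait before the first check
      periodSeconds: 10       # re-check every 10s
      failureThreshold: 3     # mark NotReady after 3 consecutive failures
```

With this in place, the rollout (and the Service's endpoints) only treats a new pod as available once the probe passes, so traffic is never routed to pods that aren't ready yet.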

-- coderanger
Source: StackOverflow