Kubernetes zero-downtime rolling update

10/13/2015

I have a replication controller running a pod (with 1 replica) that takes ~10 minutes to start. As my application grows, that duration will only increase.

My problem is that when I deploy a new version, the old pod is killed first, and only then does the new one start.

Is it possible to make Kubernetes keep the old pod alive during a rolling update until the new pod is running?

It's okay for me to run multiple replicas if necessary, but that did not fix the issue.

The replication controller has its livenessProbe and readinessProbe set correctly.
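For reference, a probe setup along these lines might look like the sketch below (the controller name, image, port, and `/healthz` endpoint are all hypothetical placeholders, not from the original question):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-app          # hypothetical name
spec:
  replicas: 1
  selector:
    app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: registry.example.com/my-app:v2   # hypothetical image
        ports:
        - containerPort: 8080
        readinessProbe:            # pod only receives traffic once this passes
          httpGet:
            path: /healthz         # hypothetical health endpoint
            port: 8080
          initialDelaySeconds: 30
          timeoutSeconds: 5
        livenessProbe:             # restarts the container if it stops responding
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 600 # a ~10 minute startup needs a generous delay
          timeoutSeconds: 5
```

A long `initialDelaySeconds` on the livenessProbe matters here: with a ~10 minute startup, an aggressive liveness check would kill the pod before it ever became ready.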

-- Azr
kubernetes

1 Answer

10/13/2015

I kept searching, and it's not possible right now (13 Oct 2015), but I opened an issue you can follow: https://github.com/kubernetes/kubernetes/issues/15557.

-- Azr
Source: StackOverflow