When I do `kubectl delete pod` or `kubectl patch`, how can I ensure that the old pod doesn't actually delete itself until the replacement pod has been running for 2 minutes? (And if the replacement pod dies before the 2-minute mark, don't delete the old pod at all.)
The reason is that my initialization takes about 2 minutes to pull some latest data and run calculations; after 2 minutes it reaches a point where it will either error out or continue running with the updated calculations.
I want to be able to occasionally delete the pod so that it restarts and picks up new versions of everything (because fetching new versions only happens at the beginning of the code).
Is there a way I can do this without an init container? I'm concerned it would be difficult to pass the calculated results from the init container to the main container.
We need to tweak two parameters:

- Set `minReadySeconds` to 2 minutes, or use a readiness probe instead of a hardcoded 2 minutes.
- Set `maxSurge` > 0 (default: 1) and `maxUnavailable: 0`. This brings up the new pod(s) first, and old pod(s) are killed only once a new pod becomes ready. The process then continues for the rest of the pods.

Note: 0 <= `maxSurge` <= replica count.
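For concreteness, here is a minimal Deployment sketch combining both settings. The name, labels, image, and probe endpoint/port below are placeholders; the readiness probe is optional if you prefer to rely on the 2-minute timer alone:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: calc-app                    # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: calc-app
  minReadySeconds: 120              # a pod must stay ready 2 minutes before it counts as available
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1                   # bring up one extra pod first
      maxUnavailable: 0             # never kill an old pod until its replacement is available
  template:
    metadata:
      labels:
        app: calc-app
    spec:
      containers:
      - name: app
        image: registry.example.com/calc-app:latest   # placeholder image
        readinessProbe:             # optional: report readiness instead of a hardcoded delay
          httpGet:
            path: /healthz          # hypothetical endpoint
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 10
```

With `maxUnavailable: 0`, a new pod that crashes before becoming available never causes the old pod to be removed, which matches the behavior you asked for.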
If you are using a Deployment, you can set `minReadySeconds` in the spec to 120 (seconds). Kubernetes will not consider a new pod actually ready and in service (and therefore will not spin down old pods) until that pod has reported ready for that long.
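With that in place, instead of deleting pods by hand you can trigger a fresh rollout, which replaces pods under the same `minReadySeconds` (and, if set, `maxUnavailable: 0`) rules. A usage sketch, assuming the hypothetical Deployment name from above:

```
kubectl rollout restart deployment/calc-app
kubectl rollout status deployment/calc-app
```

Each old pod is removed only after its replacement has been ready for the full 2 minutes, so the restart also re-runs your data pull at startup.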