Don't delete pods when rolling back a deployment

6/19/2019

I would like to roll back a deployment in my environment.

Command:

kubectl rollout undo deployment/foo

Steps which are performed:

  • create pods with old configurations
  • delete old pods

Is there a way to skip the last step? For example, a developer would like to check why the init command failed and debug it.

I didn't find information about that in documentation.

-- Mateusz
kubernetes

1 Answer

6/19/2019

Yes, it is possible. Before doing the rollout, you first need to remove the labels (the ones the ReplicaSet controlling that pod selects on) from the unhealthy pod. That way the pod no longer belongs to the deployment, and even if you do the rollout, it will still be there. Example:

$ kubectl get deployment
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
sleeper   1/1     1            1           47h
$ kubectl get pod --show-labels
NAME                      READY   STATUS    RESTARTS   AGE     LABELS
sleeper-d75b55fc9-87k5k   1/1     Running   0          5m46s   pod-template-hash=d75b55fc9,run=sleeper
$ kubectl label pod sleeper-d75b55fc9-87k5k pod-template-hash- run-
pod/sleeper-d75b55fc9-87k5k labeled
$ kubectl get pod --show-labels
NAME                      READY   STATUS    RESTARTS   AGE     LABELS
sleeper-d75b55fc9-87k5k   1/1     Running   0          6m34s   <none>
sleeper-d75b55fc9-swkj9   1/1     Running   0          3s      pod-template-hash=d75b55fc9,run=sleeper

So what happens here: we have a pod sleeper-d75b55fc9-87k5k which belongs to the sleeper deployment. We remove all labels from it, the deployment detects that the pod "has gone", and it creates a new one, sleeper-d75b55fc9-swkj9. The old pod is still there, ready for debugging. Only the pod sleeper-d75b55fc9-swkj9 will be affected by the rollout.
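To round out the workflow, here is a sketch of what the remaining steps could look like, reusing the deployment and pod names from the transcript above (adjust them to your environment). Note that once the pod is orphaned, nothing manages it anymore, so it has to be cleaned up by hand:

```shell
# After stripping the selector labels as shown above,
# roll the deployment back; the orphaned pod is untouched:
kubectl rollout undo deployment/sleeper

# Inspect the orphaned pod at leisure:
kubectl describe pod sleeper-d75b55fc9-87k5k
kubectl logs sleeper-d75b55fc9-87k5k

# When debugging is done, delete it manually -- no
# ReplicaSet owns it, so nothing else will:
kubectl delete pod sleeper-d75b55fc9-87k5k
```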

-- Adam Otto
Source: StackOverflow