kubectl rollout undo causes both current and previous ReplicaSets to persist

3/17/2021

I am using kubectl for Kubernetes-based deployments. I created a deployment that went into a CrashLoopBackOff state. To correct this, I performed a rolling update, but the new deployment also failed, this time due to OOM. If I then do a rollout undo, I see both the current and the previous ReplicaSet, with their respective pods in a failed state. I only want the previous ReplicaSet with running pods. Any pointers on what might have gone wrong? Thank you so much.
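Roughly, the sequence of commands looks like the following (the deployment name my-deployment and the label selector are placeholders for my actual resources):

    # Check the revision history of the deployment
    kubectl rollout history deployment/my-deployment

    # Roll back to the previous revision
    kubectl rollout undo deployment/my-deployment

    # Watch the rollback progress
    kubectl rollout status deployment/my-deployment

    # Inspect which ReplicaSets remain and the state of their pods
    kubectl get replicasets -l app=my-deployment
    kubectl get pods -l app=my-deployment

After the undo, the last two commands show both ReplicaSets still present, each with pods that are not running.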

-- saba_88
kubectl
kubernetes
rollout

0 Answers