Use kubectl to delete a node that has running pods on it

9/30/2015

We are using Heat + Kubernetes (v0.19) to manage our apps. During a rolling update, container start-up sometimes fails persistently on one node; the kubelet on that node keeps retrying and keeps failing, so the update hangs, which is not the behavior we expect.

I found that using "kubectl delete node" to remove the node prevents pods from being scheduled to it. But in our environment, the node to be deleted may have running pods on it.
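For reference, this is the flow described above, sketched against a live cluster; `node-1` is a hypothetical node name, so substitute whatever `kubectl get nodes` reports:

```shell
# List the nodes registered with the API server.
kubectl get nodes

# Remove the problem node from the cluster so the scheduler
# no longer places new pods on it ("node-1" is a placeholder).
kubectl delete node node-1
```

Note that deleting the node object only removes it from the API server's view; it does not, by itself, shut down the kubelet or the containers running on that machine.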

So my question is: after using "kubectl delete node" to remove the node, will the pods on that node still work correctly?

-- Chnos
kubernetes

1 Answer

9/30/2015

If you just want to cancel the rolling update, remove the failed pods, and try again later, I have found it best to stop the update loop with CTRL+C and then delete the replication controller corresponding to the new, failing version of the app:

    ^C
    kubectl delete replicationcontrollers your-app-v1.2.3
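If you are unsure of the controller's exact name (`your-app-v1.2.3` above is a placeholder), you can list the replication controllers first and pick out the one the rolling update created for the new version:

```shell
# Show all replication controllers in the current namespace;
# the one created by the rolling update carries the new version tag.
kubectl get replicationcontrollers

# Then delete the failing one (name below is a placeholder).
kubectl delete replicationcontrollers your-app-v1.2.3
```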
-- esecules
Source: StackOverflow