It seems that Kubernetes automatically deletes orphaned pods once a node has been NotReady for some time and the number of terminated pods reaches a threshold.
The component responsible might be the garbage collector or the controller manager. The documentation says:
> In general, Pods do not disappear until someone destroys them. This might be a human or a controller. The only exception to this rule is that Pods with a phase of Succeeded or Failed for more than some duration (determined by the master) will expire and be automatically destroyed.
After adding `--terminated-pod-gc-threshold=-1` to the kube-controller-manager, the pods are retained for a while after the node becomes NotReady, but they are still removed in the end.
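For reference, this is roughly how I set the flag; a sketch assuming a kubeadm-style cluster where the controller manager runs as a static pod (the manifest path and image tag below are placeholders for whatever your cluster uses):

```yaml
# /etc/kubernetes/manifests/kube-controller-manager.yaml (kubeadm-style, for illustration)
apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - name: kube-controller-manager
    image: k8s.gcr.io/kube-controller-manager:<your-version>  # placeholder tag
    command:
    - kube-controller-manager
    # values <= 0 are documented to disable the terminated-pod garbage collector
    - --terminated-pod-gc-threshold=-1
    # ... other existing flags unchanged
```

The kubelet picks up the change and restarts the controller manager automatically when the static pod manifest is edited.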
Next I tried to configure the garbage collector. There is a Deployment in my cluster whose pods carry the `ownerReferences` shown below. But how do I specify an owner for bare pods, so that they are not deleted when their node is NotReady?
```yaml
ownerReferences:
- apiVersion: extensions/v1beta1
  blockOwnerDeletion: true
  controller: true
  kind: ReplicaSet
  name: cluster-autoscaler-8253924875
  uid: 930abce3-58c8-4738-b891-8653ae71c2d1
```
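For context, this is the kind of thing I have been experimenting with: a bare pod with a manually added `ownerReferences` stanza. The owner `kind`, `name`, and `uid` below are placeholders; the `uid` must match the real uid of an existing object, and for a namespaced pod the owner must be a namespaced object in the same namespace (a cluster-scoped owner such as a Node is treated as unresolvable by the garbage collector):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-bare-pod                # hypothetical pod name
  ownerReferences:
  - apiVersion: v1
    kind: ConfigMap                # hypothetical owner in the same namespace
    name: my-owner-configmap       # placeholder name
    uid: 00000000-0000-0000-0000-000000000000  # must be the owner's actual uid
spec:
  containers:
  - name: app
    image: nginx                   # placeholder image
```

Even with this in place, I am not sure an owner reference alone is the right lever here, since the eviction of pods from NotReady nodes is driven by the node controller rather than by the garbage collector's orphan handling.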