I just saw that some of my pods were evicted by Kubernetes. What happens to them? Do they just hang around, or do I have to delete them manually?
Here is the 'official' way to configure the threshold (if you do not want to see too many evicted pods pile up): kube-controller-manager
But a known problem is figuring out how kube-controller-manager is installed and configured on your cluster...
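A quick way to check how it runs on your cluster (a small sketch; on kubeadm-style clusters it is usually a static pod in the kube-system namespace):
kubectl -n kube-system get pods | grep kube-controller-manager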
A quick workaround I use is to delete all evicted pods manually after an incident. You can use this command:
kubectl get pods --all-namespaces -o json | jq '.items[] | select(.status.reason!=null) | select(.status.reason | contains("Evicted")) | "kubectl delete pods \(.metadata.name) -n \(.metadata.namespace)"' | xargs -n 1 bash -c
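If you want to preview what would be deleted before actually deleting anything, the same jq filter can just print the commands (drop the trailing xargs and pass -r to jq for raw output):
kubectl get pods --all-namespaces -o json | jq -r '.items[] | select(.status.reason!=null) | select(.status.reason | contains("Evicted")) | "kubectl delete pods \(.metadata.name) -n \(.metadata.namespace)"'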
To delete pods in the Failed state in namespace default:
kubectl -n default delete pods --field-selector=status.phase=Failed
Evicted pods should be deleted manually. You can use the following command to delete all pods in the Failed phase (shown as Error or Evicted in the STATUS column):
kubectl get pods --all-namespaces --field-selector 'status.phase==Failed' -o json | kubectl delete -f -
Depending on whether a soft or hard eviction threshold has been met, the containers in the Pod will be terminated with or without a grace period, the PodPhase will be marked as Failed, and the Pod deleted. If your application runs as part of, e.g., a Deployment, another Pod will be created and scheduled by Kubernetes - probably on another node that is not exceeding its eviction thresholds.
Be aware that eviction does not necessarily have to be caused by thresholds: it can also be invoked via kubectl drain to empty a node, or manually via the Kubernetes API.
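For example, a drain like the following (the node name is a placeholder; on kubectl older than 1.20 the flag is --delete-local-data instead of --delete-emptydir-data) evicts all evictable pods from the node, and uncordon makes it schedulable again afterwards:
kubectl drain my-node --ignore-daemonsets --delete-emptydir-data
kubectl uncordon my-node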
In case you have pods with a Completed status that you want to keep around: the selector below only matches the Failed phase, and Completed pods are in the Succeeded phase, so they are left untouched:
kubectl get pods --all-namespaces --field-selector 'status.phase==Failed' -o json | kubectl delete -f -
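As a sanity check, you can list the Completed pods that the Failed selector does not match:
kubectl get pods --all-namespaces --field-selector 'status.phase==Succeeded'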
Just in case someone wants to automatically delete all evicted pods across all namespaces (PowerShell first, then bash):
Foreach ($x in (kubectl get po --all-namespaces --field-selector=status.phase=Failed --no-headers -o 'custom-columns=:metadata.namespace,:metadata.name')) { $ns, $name = -split $x; kubectl delete po $name -n $ns }
kubectl get po --all-namespaces --field-selector=status.phase=Failed --no-headers -o custom-columns=:metadata.namespace,:metadata.name | while read -r ns name; do kubectl delete po "$name" -n "$ns"; done
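On reasonably recent kubectl versions (assuming your kubectl delete accepts --all-namespaces together with --field-selector), the whole loop collapses into one command:
kubectl delete pods --all-namespaces --field-selector=status.phase=Failed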
OpenShift equivalent of Kalvin's command to delete all 'Evicted' pods:
eval "$(oc get pods --all-namespaces -o json | jq -r '.items[] | select(.status.phase == "Failed" and .status.reason == "Evicted") | "oc delete pod --namespace " + .metadata.namespace + " " + .metadata.name')"
kube-controller-manager exists by default in a working K8s installation. It appears that the default is a maximum of 12500 terminated pods before GC kicks in.
Directly from the K8s documentation: https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/#kube-controller-manager
--terminated-pod-gc-threshold int32 Default: 12500
Number of terminated pods that can exist before the terminated pod garbage collector starts deleting terminated pods. If <= 0, the terminated pod garbage collector is disabled.
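A minimal sketch of actually changing it, assuming a kubeadm-style cluster where kube-controller-manager runs as a static pod (the manifest path may differ on other distributions):
# on the control-plane node, edit the static pod manifest
sudo vi /etc/kubernetes/manifests/kube-controller-manager.yaml
# add the flag to the container's command list, e.g.
#   - --terminated-pod-gc-threshold=100
# the kubelet notices the manifest change and restarts the controller manager automatically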
One more bash command to delete evicted pods in the current namespace:
kubectl get pods | grep Evicted | awk '{print $1}' | xargs kubectl delete pod
The command below gets all evicted pods in the default namespace (or whichever namespace your current context points at) and deletes them:
kubectl get pods | grep Evicted | awk '{print $1}' | xargs -I {} kubectl delete pods/{}