What will happen to evicted pods in Kubernetes?

9/26/2017

I just saw some of my pods get evicted by Kubernetes. What will happen to them? Do they just hang around like that, or do I have to delete them manually?

-- reachlin
kubernetes

11 Answers

7/30/2018

Here is the 'official' guide for how to hard-code the threshold (if you do not want to see too many evicted pods): kube-controller-manager

But a known problem is how to get kube-controller-manager installed...
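
For reference, a minimal sketch of the flag in question (the value 10 here is only an illustration, not a recommended setting):

# Start garbage-collecting terminated pods once more than 10 of them
# exist; all other kube-controller-manager flags omitted for brevity.
kube-controller-manager --terminated-pod-gc-threshold=10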

-- tikael
Source: StackOverflow

3/8/2018

A quick workaround I use is to delete all evicted pods manually after an incident. You can use this command:

kubectl get pods --all-namespaces -o json | jq '.items[] | select(.status.reason!=null) | select(.status.reason | contains("Evicted")) | "kubectl delete pods \(.metadata.name) -n \(.metadata.namespace)"' | xargs -n 1 bash -c
-- Kalvin
Source: StackOverflow

3/25/2019

To delete pods in the Failed state in the default namespace:

kubectl -n default delete pods --field-selector=status.phase=Failed
-- ticapix
Source: StackOverflow

2/12/2019

Evicted pods should be deleted manually. You can use the following command to delete all pods in the Failed state (which includes evicted pods).

kubectl get pods --all-namespaces --field-selector 'status.phase==Failed' -o json | kubectl delete -f -
-- Hansika Madushan Weerasena
Source: StackOverflow

9/26/2017

Depending on whether a soft or hard eviction threshold has been met, the containers in the pod will be terminated with or without a grace period, the PodPhase will be marked as Failed, and the pod deleted. If your application runs as part of e.g. a Deployment, another pod will be created and scheduled by Kubernetes - probably on another node that is not exceeding its eviction thresholds.
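
As an illustration of the soft/hard distinction, the kubelet exposes eviction thresholds as flags; a minimal sketch, with made-up values rather than defaults:

# A hard threshold evicts as soon as it is crossed, with no grace period;
# a soft threshold must hold for its grace period before eviction begins.
kubelet \
  --eviction-hard='memory.available<100Mi' \
  --eviction-soft='memory.available<300Mi' \
  --eviction-soft-grace-period='memory.available=1m30s' \
  --eviction-max-pod-grace-period=60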

Be aware that eviction is not necessarily caused by thresholds; it can also be invoked via kubectl drain to empty a node, or manually via the Kubernetes API.
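
For example, draining a node evicts everything running on it (<node-name> is a placeholder):

# Evict all pods from the node, ignoring DaemonSet-managed pods,
# then mark the node schedulable again once maintenance is done.
kubectl drain <node-name> --ignore-daemonsets
kubectl uncordon <node-name>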

-- Simon Tesar
Source: StackOverflow

3/11/2019

In case you have pods with a Completed status that you want to keep around - the command below only matches the Failed phase (which evicted pods fall under), so Completed pods are untouched:

kubectl get pods --all-namespaces --field-selector 'status.phase==Failed' -o json | kubectl delete -f -
-- mefix
Source: StackOverflow

7/30/2019

Just in case someone wants to automatically delete all evicted pods in all namespaces:

  • PowerShell
    Foreach ($line in (kubectl get po --all-namespaces --field-selector=status.phase=Failed --no-headers -o custom-columns=:metadata.namespace,:metadata.name)) { $ns, $name = -split $line; kubectl delete po $name -n $ns }
  • Bash
    kubectl get po --all-namespaces --field-selector=status.phase=Failed --no-headers -o custom-columns=:metadata.namespace,:metadata.name | while read -r ns name; do kubectl delete po "$name" -n "$ns"; done
-- LucasPC
Source: StackOverflow

3/19/2018

OpenShift equivalent of Kalvin's command to delete all 'Evicted' pods:

eval "$(oc get pods --all-namespaces -o json | jq -r '.items[] | select(.status.phase == "Failed" and .status.reason == "Evicted") | "oc delete pod --namespace " + .metadata.namespace + " " + .metadata.name')"
-- ffghfgh
Source: StackOverflow

6/14/2019

kube-controller-manager exists by default in a working K8s installation. It appears that the default is a maximum of 12500 terminated pods before GC kicks in.

Directly from the K8s documentation: https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/#kube-controller-manager

--terminated-pod-gc-threshold int32     Default: 12500
Number of terminated pods that can exist before the terminated pod garbage collector starts deleting terminated pods. If <= 0, the terminated pod garbage collector is disabled.
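
As a sketch of where to change it, assuming a kubeadm-based cluster (other installers keep the manifest elsewhere):

# Assumption: kubeadm layout. The kubelet restarts the controller manager
# automatically whenever its static pod manifest changes.
sudo vi /etc/kubernetes/manifests/kube-controller-manager.yaml
# then add, under the container's command: list:
#     - --terminated-pod-gc-threshold=100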

-- Stefano Pirrello
Source: StackOverflow

10/11/2019

One more Bash command to delete evicted pods:

kubectl get pods | grep Evicted | awk '{print $1}' | xargs kubectl delete pod
-- Roman Marusyk
Source: StackOverflow

12/20/2019

The command below will get all evicted pods from the default namespace and delete them:

kubectl get pods | grep Evicted | awk '{print $1}' | xargs -I {} kubectl delete pods/{}

-- bhavin
Source: StackOverflow