I have a bunch of pods in Kubernetes which have completed (successfully or unsuccessfully) and I'd like to clean up the output of kubectl get pods. Here's what I see when I run kubectl get pods:
NAME                                           READY   STATUS             RESTARTS   AGE
intent-insights-aws-org-73-ingest-391c9384     0/1     ImagePullBackOff   0          8d
intent-postgres-f6dfcddcc-5qwl7                1/1     Running            0          23h
redis-scheduler-dev-master-0                   1/1     Running            0          10h
redis-scheduler-dev-metrics-85b45bbcc7-ch24g   1/1     Running            0          6d
redis-scheduler-dev-slave-74c7cbb557-dmvfg     1/1     Running            0          10h
redis-scheduler-dev-slave-74c7cbb557-jhqwx     1/1     Running            0          5d
scheduler-5f48b845b6-d5p4s                     2/2     Running            0          36m
snapshot-169-5af87b54                          0/1     Completed          0          20m
snapshot-169-8705f77c                          0/1     Completed          0          1h
snapshot-169-be6f4774                          0/1     Completed          0          1h
snapshot-169-ce9a8946                          0/1     Completed          0          1h
snapshot-169-d3099b06                          0/1     ImagePullBackOff   0          24m
snapshot-204-50714c88                          0/1     Completed          0          21m
snapshot-204-7c86df5a                          0/1     Completed          0          1h
snapshot-204-87f35e36                          0/1     ImagePullBackOff   0          26m
snapshot-204-b3a4c292                          0/1     Completed          0          1h
snapshot-204-c3d90db6                          0/1     Completed          0          1h
snapshot-245-3c9a7226                          0/1     ImagePullBackOff   0          28m
snapshot-245-45a907a0                          0/1     Completed          0          21m
snapshot-245-71911b06                          0/1     Completed          0          1h
snapshot-245-a8f5dd5e                          0/1     Completed          0          1h
snapshot-245-b9132236                          0/1     Completed          0          1h
snapshot-76-1e515338                           0/1     Completed          0          22m
snapshot-76-4a7d9a30                           0/1     Completed          0          1h
snapshot-76-9e168c9e                           0/1     Completed          0          1h
snapshot-76-ae510372                           0/1     Completed          0          1h
snapshot-76-f166eb18                           0/1     ImagePullBackOff   0          30m
train-169-65f88cec                             0/1     Error              0          20m
train-169-9c92f72a                             0/1     Error              0          1h
train-169-c935fc84                             0/1     Error              0          1h
train-169-d9593f80                             0/1     Error              0          1h
train-204-70729e42                             0/1     Error              0          20m
train-204-9203be3e                             0/1     Error              0          1h
train-204-d3f2337c                             0/1     Error              0          1h
train-204-e41a3e88                             0/1     Error              0          1h
train-245-7b65d1f2                             0/1     Error              0          19m
train-245-a7510d5a                             0/1     Error              0          1h
train-245-debf763e                             0/1     Error              0          1h
train-245-eec1908e                             0/1     Error              0          1h
train-76-86381784                              0/1     Completed          0          19m
train-76-b1fdc202                              0/1     Error              0          1h
train-76-e972af06                              0/1     Error              0          1h
train-76-f993c8d8                              0/1     Completed          0          1h
webserver-7fc9c69f4d-mnrjj                     2/2     Running            0          36m
worker-6997bf76bd-kvjx4                        2/2     Running            0          25m
worker-6997bf76bd-prxbg                        2/2     Running            0          36m

and I'd like to get rid of pods like train-204-d3f2337c. How can I do that?
You can do this a bit more easily now.
You can list all completed pods by:
kubectl get pod --field-selector=status.phase==Succeeded

And delete all completed pods by:
kubectl delete pod --field-selector=status.phase==Succeeded
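The train-* pods that show Error in your output have phase Failed, so the same field-selector approach should clean those up as well:

kubectl delete pod --field-selector=status.phase==Failed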
If these pods were created by a CronJob, you can use spec.failedJobsHistoryLimit and spec.successfulJobsHistoryLimit to control how many finished Jobs are kept. Example:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: my-cron-job
spec:
  schedule: "*/10 * * * *"
  failedJobsHistoryLimit: 1
  successfulJobsHistoryLimit: 3
  jobTemplate:
    spec:
      template:
         ...
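A quick way to try this out, assuming the manifest above is saved as my-cron-job.yaml (a hypothetical filename): apply it, and the CronJob controller will prune finished Jobs (and their pods) beyond the configured limits on each run.

kubectl apply -f my-cron-job.yaml
kubectl get jobs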
Here's a one-liner which will delete all pods which aren't in the Running or Pending state (note that if a pod name happens to contain "Running" or "Pending", it will never be deleted by this one-liner):

kubectl get pods --no-headers=true | grep -v "Running" | grep -v "Pending" | sed -E 's/([a-z0-9-]+).*/\1/g' | xargs kubectl delete pod
Here's an explanation:

- kubectl get pods --no-headers=true lists all pods without the header row
- grep -v "Running" filters out pods in the Running state
- grep -v "Pending" filters out pods in the Pending state
- sed -E 's/([a-z0-9-]+).*/\1/g' extracts the pod name (the first column) from each remaining line
- xargs kubectl delete pod deletes each of the pods by name

Note, this doesn't account for all pod states. For example, if a pod is in the state ContainerCreating, this one-liner will delete that pod too.
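If the pod-name caveat matters to you, matching on the STATUS column explicitly avoids it. A sketch of the same idea, assuming the default kubectl get pods column layout (NAME, READY, STATUS, RESTARTS, AGE):

kubectl get pods --no-headers=true | awk '$3 != "Running" && $3 != "Pending" {print $1}' | xargs kubectl delete pod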
You can do it in two ways.
$ kubectl delete pod $(kubectl get pods | grep Completed | awk '{print $1}')

or

$ kubectl get pods | grep Completed | awk '{print $1}' | xargs kubectl delete pod

Both solutions will do the job.
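Since your output also contains pods in the Error state, you may want to catch both statuses in one pass. A small variation on the same idea (the earlier caveat about pod names containing these words applies here too):

$ kubectl get pods | grep -E 'Completed|Error' | awk '{print $1}' | xargs kubectl delete pod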