Kubernetes - delete all jobs in bulk

4/28/2017

I can delete all jobs inside a cluster by running

kubectl delete jobs --all 

However, jobs are deleted one after another, which is quite slow (for ~200 jobs I had time to write this question and it was still not done).

Is there a faster approach?

-- Overdrivr
kubectl
kubernetes

10 Answers

7/6/2019

The kubectl bulk plugin (bulk-action on krew) may be useful for you: it gives you bulk operations on selected resources. This is the command for deleting jobs:

kubectl bulk jobs delete

You can find the details at https://github.com/emreodabas/kubectl-plugins/blob/master/README.md#kubectl-bulk-aka-bulk-action
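If you use krew, installation should be a one-liner along these lines (the plugin name bulk-action is taken from this answer's parenthetical, so treat it as an assumption):

kubectl krew install bulk-action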

-- Emre Odabaş
Source: StackOverflow

8/8/2019

This works really well for me:

kubectl delete jobs $(kubectl get jobs -o custom-columns=:.metadata.name)
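The empty header in custom-columns=:.metadata.name suppresses the header row, so the inner command prints only the job names. A sketch of the same pattern scoped to a single namespace (my-namespace is a placeholder):

kubectl delete jobs -n my-namespace $(kubectl get jobs -n my-namespace -o custom-columns=:.metadata.name)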

-- Cizer Pereira
Source: StackOverflow

5/5/2017

Probably there is no other way to delete all jobs at once, because even kubectl delete jobs queries one job at a time. What Norbert van Nobelen is suggesting might get a faster result, but it won't make much difference.

-- Suraj Narwade
Source: StackOverflow

5/31/2019

kubectl delete jobs --all --cascade=false is fast, but it won't delete associated resources, such as Pods.

https://github.com/kubernetes/kubernetes/issues/8598
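Note that on newer kubectl releases (v1.20+), --cascade=false is deprecated in favor of the equivalent --cascade=orphan:

kubectl delete jobs --all --cascade=orphan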

-- Maximilian
Source: StackOverflow

6/25/2018

It's a little easier to set up an alias for this bash command:

kubectl delete jobs `kubectl get jobs -o custom-columns=:.metadata.name`
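As a sketch, such an alias could look like this in your shell profile (the alias name kdeljobs is just an illustration; the single quotes defer the command substitution until the alias is used):

alias kdeljobs='kubectl delete jobs `kubectl get jobs -o custom-columns=:.metadata.name`'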
-- jaydeland
Source: StackOverflow

5/5/2017

I have a script for deleting jobs which was quite a bit faster:

$ cat deljobs.sh 
#!/usr/bin/env bash
set -x

# Delete each job in the background so the deletions run in parallel
for j in $(kubectl get jobs -o custom-columns=:.metadata.name)
do
    kubectl delete jobs "$j" &
done

# Wait for all background deletions to finish
wait

And for creating the 200 jobs, I used the following script, run with the command for i in {1..200}; do ./jobs.sh; done

$ cat jobs.sh 
kubectl run memhog-$(cat /dev/urandom | tr -dc 'a-z0-9' | fold -w 8 | head -n 1)  --restart=OnFailure --record --image=derekwaynecarr/memhog --command -- memhog -r100 20m
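Note that kubectl run only generated Jobs on older kubectl releases; the generator flags were removed around v1.18, so on current versions the rough equivalent is kubectl create job:

kubectl create job memhog-$RANDOM --image=derekwaynecarr/memhog -- memhog -r100 20m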
-- surajd
Source: StackOverflow

5/5/2017

If you are using CronJobs and the jobs are piling up quickly, you can let Kubernetes delete them automatically by configuring the job history limits described in the documentation. That is valid starting from version 1.6.

...
  spec:
    ...
    successfulJobsHistoryLimit: 3
    failedJobsHistoryLimit: 3
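For context, a minimal complete CronJob sketch using these limits (the name, schedule, and image are placeholders; the apiVersion depends on your cluster version):

apiVersion: batch/v1            # batch/v1beta1 on older clusters
kind: CronJob
metadata:
  name: example-cron            # placeholder name
spec:
  schedule: "*/5 * * * *"
  successfulJobsHistoryLimit: 3 # keep at most 3 succeeded Jobs
  failedJobsHistoryLimit: 3     # keep at most 3 failed Jobs
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: example
            image: busybox      # placeholder image
            command: ["true"]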
-- yasc
Source: StackOverflow

7/6/2017

I use this script; it's fast, but it can thrash the CPU (one process per job). You can always adjust the sleep parameter:

#!/usr/bin/env bash

echo "Deleting all jobs (in parallel - it can thrash the CPU)"

# Emit "<job-name> --namespace <namespace>" for every job, skipping the header line
kubectl get jobs --all-namespaces | sed '1d' | awk '{ print $2, "--namespace", $1 }' | while read -r line; do
  echo "Running with: ${line}"
  # ${line} is intentionally unquoted so the namespace flag splits into separate arguments
  kubectl delete jobs ${line} &
  sleep 0.05
done
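A variant with bounded parallelism, assuming GNU xargs (for the -P flag), which avoids spawning one process per job:

kubectl get jobs --all-namespaces --no-headers \
  | awk '{ print $2, "--namespace", $1 }' \
  | xargs -L1 -P8 kubectl delete jobs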
-- Paweł Prażak
Source: StackOverflow

12/17/2019

The best way for me (for completed jobs older than a day) is:

kubectl get jobs | grep 1/1 | gawk 'match($0, / ([0-9]*)h/, ary) { if(ary[1]>24) print $1}' | parallel -r --bar -P 32 kubectl delete jobs

grep 1/1 selects completed jobs

gawk 'match($0, / ([0-9]*)h/, ary) { if(ary[1]>24) print $1}' selects jobs older than a day

-P sets the number of parallel processes

It is faster than kubectl delete jobs --all, shows a progress bar, and can be used while some jobs are still running.
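If your cluster version supports field selectors on Job status, completed jobs can also be selected without the grep/gawk pipeline (this matches only on completion, not on age):

kubectl delete jobs --field-selector status.successful=1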

-- FullMoon
Source: StackOverflow

3/9/2019

Parallelize using GNU parallel

parallel --jobs=5 "echo {}; kubectl delete jobs {} -n core-services;" ::: $(kubectl get job -o=jsonpath='{.items[?(@.status.succeeded==1)].metadata.name}'  -n core-services)
-- Vikas Kumar
Source: StackOverflow