I have a CronJob that runs every 10 minutes, so every 10 minutes a new pod is created. After a day I have a lot of completed pods (not jobs — only the one CronJob exists). Is there a way to automatically get rid of them?
Specifically in my situation, the pods were not fully terminating: each pod ran one container with the actual job and another with the Cloud SQL Proxy, and the proxy was preventing the pod from completing successfully.
The fix was to kill the proxy process after 30 seconds (my jobs typically take a couple of seconds). Once the job completes, successfulJobsHistoryLimit on the CronJob kicks in and keeps (by default) only the last 3 pods.
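For reference, the history limits can also be set explicitly on the CronJob spec. A sketch (the name and schedule are illustrative; 3 and 1 are the Kubernetes defaults):

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-cron
spec:
  schedule: "*/10 * * * *"
  successfulJobsHistoryLimit: 3  # keep only the last 3 completed Jobs (and their pods)
  failedJobsHistoryLimit: 1      # keep only the last failed Job
  jobTemplate:
    # ... job spec as usual ...
```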
- name: cloudsql-proxy
  image: gcr.io/cloudsql-docker/gce-proxy:1.11
  command: ["sh", "-c"]
  args:
    - /cloud_sql_proxy -instances=myinstance=tcp:5432 -credential_file=/secrets/cloudsql/credentials.json & pid=$! && (sleep 30 && kill -9 $pid 2>/dev/null)
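The background-and-reap pattern in that args line can be sketched in plain sh, with a `sleep` standing in for the proxy and the timeout shortened (names and timings here are illustrative only):

```shell
# Start the long-lived sidecar in the background and remember its PID.
sleep 60 &                                # stand-in for /cloud_sql_proxy
pid=$!

# Foreground reaper: after a grace period, force-kill the sidecar.
(sleep 1 && kill -9 "$pid" 2>/dev/null)

# wait returns as soon as the sidecar is gone, so the shell (and the
# container) can exit instead of hanging for the full 60 seconds.
wait "$pid" 2>/dev/null
echo "sidecar stopped before its 60s timer"
```

Once the sidecar exits, the pod's containers have all terminated, so the Job can complete and the history limit can prune old pods.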
That's a job for labels.
Put them on your CronJob's pod template and delete completed pods using a label selector (the -l flag). For example:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-cron
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: periodic-batch-job
            is-cron: "true"
        spec:
          containers:
          - name: cron
            image: your_image
            imagePullPolicy: IfNotPresent
          restartPolicy: OnFailure
Delete all cron-labeled pods with:
kubectl delete pod -l is-cron