EDIT: Question is solved, it was my mistake: I simply used the wrong cron schedule. I assumed "* 2 * * *" would run only once per day at 2, but in fact it runs every minute during hour 2. So Kubernetes behaves correctly.
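For reference, a minimal sketch of the corrected line (the five cron fields are minute, hour, day of month, month, day of week):

schedule: "0 2 * * *"  # minute 0 of hour 2: exactly once per day, at 02:00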
I keep getting multiple jobs running at a single cron execution point, but seemingly only when those jobs have a very short runtime. Any idea why this happens and how I can prevent it? I use concurrencyPolicy: Forbid, backoffLimit: 0, and restartPolicy: Never.
Example of a CronJob that is supposed to run once per day, but runs multiple times just after its scheduled run time:
NAME             COMPLETIONS   DURATION   AGE
job-1554346620   1/1           11s        4h42m
job-1554346680   1/1           11s        4h41m
job-1554346740   1/1           10s        4h40m
Relevant config:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: job
spec:
  schedule: "* 2 * * *"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: job
            image: job_image:latest
            command: ["rake", "run_job"]
          restartPolicy: Never
          imagePullSecrets:
          - name: regcred
      backoffLimit: 0
Hi, it's not entirely clear what you expected. If I understand correctly, you want to avoid running all of these jobs at the same time:
1. First option: change their schedule times.
2. Second option: use other settings in your spec template, such as parallel jobs, described at https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/:
"For a work queue Job, you must leave .spec.completions unset, and set .spec.parallelism to a non-negative integer."
jobTemplate:
  spec:
    parallelism: 1
    template:
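To make the placement concrete, here is a minimal sketch of a complete jobTemplate with parallelism set, reusing the container from the question's config (the image and command are assumptions carried over from above):

jobTemplate:
  spec:
    parallelism: 1                    # run at most one pod for this Job at a time
    template:
      spec:
        containers:
        - name: job
          image: job_image:latest     # assumed: same image as in the question
          command: ["rake", "run_job"]
        restartPolicy: Never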
To reproduce this behavior, please provide more details.
In addition, for the jobs history, successfulJobsHistoryLimit and failedJobsHistoryLimit are set to 3 and 1 by default. Please take a look at https://kubernetes.io/docs/tasks/job/. If you are interested, you can set these limits in the spec section:
successfulJobsHistoryLimit: 1
failedJobsHistoryLimit: 1
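A minimal sketch of where these fields sit in the CronJob spec, at the same level as schedule and concurrencyPolicy (the limit values here are illustrative):

spec:
  schedule: "0 2 * * *"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 1   # keep only the most recent successful Job
  failedJobsHistoryLimit: 1       # keep only the most recent failed Job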
Hope this helps.