I have a Kubernetes CronJob that runs a scheduled task every 5 minutes. I want to make sure that when a new pod is created at the next scheduled time, the earlier pod has already been terminated; that is, the earlier pod should be terminated before the new one is created. Can Kubernetes do this?
My YAML is:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-scheduled
spec:
  schedule: "*/5 * * * *"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: cmm-callout
            env:
            - name: SCHEDULED
              value: "true"
            livenessProbe:
              httpGet:
                path: /myapp/status
                port: 7070
                scheme: HTTPS
              initialDelaySeconds: 120
              timeoutSeconds: 30
              periodSeconds: 120
            image: gcr.io/projectid/folder/my-app:9.0.8000.34
          restartPolicy: Never
How can I make sure the earlier pod is terminated before the new one is created?
Have you tried setting concurrencyPolicy to Replace? Forbid means the new job run is skipped if the previous one hasn't finished yet.
https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#concurrency-policy
- Allow (default): The cron job allows concurrently running jobs.
- Forbid: The cron job does not allow concurrent runs; if it is time for a new job run and the previous job run hasn’t finished yet, the cron job skips the new job run.
- Replace: If it is time for a new job run and the previous job run hasn’t finished yet, the cron job replaces the currently running job run with a new job run.
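If you just want to try this out, you could also patch the existing CronJob in place instead of re-applying the whole manifest (a minimal sketch, assuming the CronJob is named my-scheduled as in your YAML):

# Switch the concurrency policy of the existing CronJob to Replace
kubectl patch cronjob my-scheduled --type merge -p '{"spec":{"concurrencyPolicy":"Replace"}}'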
If I understood your case correctly (the earlier pod should be terminated before the new one is created), there are two options.
1. Use spec.jobTemplate.spec.activeDeadlineSeconds instead.
Once a Job reaches its activeDeadlineSeconds, all of its running Pods are terminated and the Job status becomes type: Failed with reason: DeadlineExceeded.
example:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      activeDeadlineSeconds: 60
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster && sleep 420
          restartPolicy: Never
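You can watch the deadline kick in (a sketch; the generated Job name will look like hello-<timestamp> and will differ in your cluster):

# Jobs should turn Failed about 60s after they start
kubectl get jobs --watch

# The Job's conditions should show reason: DeadlineExceeded
kubectl describe job <job-name>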
2. The second solution is to set concurrencyPolicy: Replace, so the currently running job is replaced by the new job.
example:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/2 * * * *"
  concurrencyPolicy: Replace
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster && sleep 420
          restartPolicy: Never
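With Replace, every 2 minutes the controller deletes the still-running Job, which starts terminating its pod before the replacement Job's pod is created. Note that pod deletion respects terminationGracePeriodSeconds, so the old pod may still be shutting down when the new one starts. You can observe this directly (sketch):

# The old pod should go into Terminating when the next schedule fires
kubectl get pods --watch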
Resources:
https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#concurrency-policy
https://kubernetes.io/docs/concepts/workloads/controllers/job/