Running a cronjob every minute in k8s not working

5/16/2019
NAME    SCHEDULE    SUSPEND   ACTIVE   LAST SCHEDULE   AGE
hello   * * * * *   False     2        2m42s           5m6s
hello   * * * * *   False     3        6s              5m30s
hello   * * * * *   False     4        6s              6m30s
hello   * * * * *   False     3        46s             7m10s
hello   * * * * *   False     1        56s             7m20s
hello   * * * * *   False     2        6s              7m30s
hello   * * * * *   False     0        26s             7m50s
hello   * * * * *   False     1        7s              8m31s
hello   * * * * *   False     0        16s             8m40s
hello   * * * * *   False     1        7s              9m31s
hello   * * * * *   False     0        17s             9m41s
hello   * * * * *   False     1        7s              10m

I'm running a K8s CronJob and I'm using the following command to watch it:

kubectl get cronjobs --watch -n ns1

When watching the output I notice that for each minute there are two entries,

e.g. see 2m1s and 2m11s and so on …

Why? I want it to run exactly once every minute. How can I do that?

NAME    SCHEDULE    SUSPEND   ACTIVE   LAST SCHEDULE   AGE
hello   * * * * *   False     0        <none>          4s
hello   * * * * *   False     1        7s              61s
hello   * * * * *   False     0        17s             71s
hello   * * * * *   False     1        7s              2m1s
hello   * * * * *   False     0        17s             2m11s
hello   * * * * *   False     1        7s              3m1s
hello   * * * * *   False     0        17s             3m11s

This is the CronJob manifest:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
  namespace: monitoring
spec:
  schedule: "* * * * *" # run every minute
  startingDeadlineSeconds: 10 # if a job hasn't started within this many seconds, skip this run
  concurrencyPolicy: Forbid # one of Allow|Forbid|Replace
  successfulJobsHistoryLimit: 3 # how many completed jobs should be kept
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: hello
              image: busybox
              args:
                - /bin/sh
                - -c
                - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
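
For reference, a minimal way to apply this manifest and follow the Jobs it spawns (assuming it is saved as hello-cronjob.yaml; note the namespace here is monitoring, so the -n flag must match):

kubectl apply -f hello-cronjob.yaml
kubectl get jobs -n monitoring --watch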

I also tried changing the schedule to "*/1 * * * *", which doesn't help.

Update

It seems that for each run of the cronjob there is an entry like this:

NAME    SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
hello   */1 * * * *   False     1        0s              7s

and after 10 seconds I see

hello   */1 * * * *   False     0        10s             17s

and so on... one entry shows it active and the next one not.

-- Rayn D
amazon-web-services
cron
kubernetes

1 Answer

5/16/2019

I think you are looking at the wrong thing.

A CronJob spawns a Job for each run, so you should be looking at the Jobs:

$ kubectl get jobs
NAME               DESIRED   SUCCESSFUL   AGE
hello-1558019160   1         1            2m
hello-1558019220   1         1            1m
hello-1558019280   1         1            14s
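
As a side note, the numeric suffix on each Job name appears to be the scheduled run time as a Unix timestamp, so you can check the one-minute spacing directly, e.g. with GNU date:

$ date -u -d @1558019160
Thu May 16 15:06:00 UTC 2019
$ date -u -d @1558019220
Thu May 16 15:07:00 UTC 2019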

As you can see, there is only one spawned per minute. It's possible that a job will take longer than a minute to complete; this is when concurrencyPolicy comes into play:

The .spec.concurrencyPolicy field is also optional. It specifies how to treat concurrent executions of a job that is created by this cron job. The spec may specify only one of the following concurrency policies:

  • Allow (default): The cron job allows concurrently running jobs
  • Forbid: The cron job does not allow concurrent runs; if it is time for a new job run and the previous job run hasn’t finished yet, the cron job skips the new job run
  • Replace: If it is time for a new job run and the previous job run hasn’t finished yet, the cron job replaces the currently running job run with a new job run

Note that concurrency policy only applies to the jobs created by the same cron job. If there are multiple cron jobs, their respective jobs are always allowed to run concurrently.
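
To illustrate (a hypothetical variant, not the manifest above): under Forbid, a container that sleeps for 90 seconds will still be active when the next minute fires, so the controller skips every other scheduled run:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: slow-hello
spec:
  schedule: "* * * * *"
  concurrencyPolicy: Forbid # skip a new run while the previous Job is still active
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: slow-hello
              image: busybox
              args:
                - /bin/sh
                - -c
                - sleep 90; date # deliberately outlives the one-minute schedule
          restartPolicy: OnFailure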

You can also do kubectl describe jobs hello-1558019160, in which you will see the events:

Events:
  Type    Reason            Age   From            Message
  ----    ------            ----  ----            -------
  Normal  SuccessfulCreate  2m    job-controller  Created pod: hello-1558019160-fld74
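
To see what a given run actually printed, you can also read the log of the pod named in that event; with the container args above, the output should be the date followed by the echoed greeting:

kubectl logs hello-1558019160-fld74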

I ran your .yaml and did not see Active jobs go higher than 1.

Hope this helps.

-- Crou
Source: StackOverflow