What is the default .spec.activeDeadlineSeconds in a Kubernetes Job if you don't explicitly set it?

8/19/2021

A Kubernetes Job has a .spec.activeDeadlineSeconds field. If you don't explicitly set it, what will the default value be? 600 seconds?

Here is the example from the Kubernetes docs:

apiVersion: batch/v1
kind: Job
metadata:
  name: pi-with-timeout
spec:
  backoffLimit: 5
  activeDeadlineSeconds: 100
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl",  "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never

Assume I remove the line:

activeDeadlineSeconds: 100
-- xsqian
kubernetes

2 Answers

8/19/2021

It is not set by default. Here is a note from the changelog:

ActiveDeadlineSeconds is validated in workload controllers now, make sure it's not set anywhere (it shouldn't be set by default and having it set means your controller will restart the Pods at some point) (#38741)

-- P....
Source: StackOverflow

9/1/2021

By default, a Job will run uninterrupted. If you don't set activeDeadlineSeconds, the Job has no active deadline limit; in other words, activeDeadlineSeconds has no default value.
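
For example, the manifest from the question is perfectly valid with the activeDeadlineSeconds line removed; here is a minimal sketch (the name pi-no-timeout is just a placeholder) in which the Job simply runs until its Pod succeeds or the backoff limit is exhausted:

apiVersion: batch/v1
kind: Job
metadata:
  name: pi-no-timeout        # placeholder name for this sketch
spec:
  backoffLimit: 5
  # no activeDeadlineSeconds: the Job has no time limit
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never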

By the way, there are several ways to terminate a Job. (Of course, when a Job completes, no more Pods are created.)

  • Pod backoff failure policy (.spec.backoffLimit): You can set .spec.backoffLimit to specify the number of retries before considering a Job as failed. The back-off limit defaults to 6. Failed Pods associated with the Job are recreated by the Job controller with an exponential back-off delay (10s, 20s, 40s, ...) capped at six minutes. The back-off count is reset when a Job's Pod is deleted or succeeds without any other Pods for the Job failing around that time.

  • Setting an active deadline (.spec.activeDeadlineSeconds): The activeDeadlineSeconds applies to the duration of the Job, no matter how many Pods are created. Once a Job reaches activeDeadlineSeconds, all of its running Pods are terminated and the Job status becomes type: Failed with reason: DeadlineExceeded (see the status sketch after this list).
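
When the deadline is hit, the Job's status conditions look roughly like this (a sketch of the documented behavior, not captured output):

status:
  conditions:
  - type: Failed
    status: "True"
    reason: DeadlineExceeded
    message: Job was active longer than specified deadline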

Note that a Job's .spec.activeDeadlineSeconds takes precedence over its .spec.backoffLimit. Therefore, a Job that is retrying one or more failed Pods will not deploy additional Pods once it reaches the time limit specified by activeDeadlineSeconds, even if the backoffLimit is not yet reached.
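
To see the precedence in action, here is a sketch of a Job whose container always fails, with the default-sized backoffLimit of 6 but only a 30-second deadline. Even though the retry budget is not exhausted, the Job is marked Failed with reason DeadlineExceeded once 30 seconds pass (the name and values are illustrative, not from the original post):

apiVersion: batch/v1
kind: Job
metadata:
  name: deadline-before-backoff   # illustrative name
spec:
  backoffLimit: 6                 # up to 6 retries allowed...
  activeDeadlineSeconds: 30       # ...but the whole Job may only be active for 30s
  template:
    spec:
      containers:
      - name: always-fails
        image: busybox
        command: ["sh", "-c", "exit 1"]   # always fails, forcing retries
      restartPolicy: Never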

-- James Wang
Source: StackOverflow