Kubernetes Job is not getting terminated even after specifying "activeDeadlineSeconds"

9/25/2019

My YAML file:

apiVersion: batch/v1
kind: Job
metadata:
  name: auto
  labels:
    app: auto
spec:
  backoffLimit: 5
  activeDeadlineSeconds: 100
  template:
    metadata:
      labels:
        app: auto
    spec:
      containers:
      - name: auto
        image: busybox
        imagePullPolicy: Always
        ports:
        - containerPort: 9080
      imagePullSecrets: 
      - name: imageregistery
      restartPolicy: Never

The pods are killed as expected, but the job itself does not terminate after 100 seconds.

Is there anything we can do to delete the job once the container/pod has finished its work?

kubectl version --short

Client Version: v1.6.1
Server Version: v1.13.10+IKS


kubectl get jobs --namespace abc
NAME   DESIRED   SUCCESSFUL   AGE
auto   1         1            26m

Thank you,

-- anish anil
kubectl
kubernetes
kubernetes-pod

1 Answer

9/30/2019

The default way to delete jobs after they are done is to use the kubectl delete command.
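For example, with the job name and namespace from the question:

kubectl delete job auto --namespace abc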

As mentioned by @Erez:

Kubernetes is keeping pods around so you can get the logs, configuration, etc. from it.

If you don't want to do that manually, you could write a script running in your cluster that checks for jobs with a Completed status and then deletes them.
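As a sketch of that check-and-delete logic, assuming the abc namespace from the question and GNU xargs (for the -r flag), a one-liner along these lines could be run on a schedule, for example from a CronJob:

kubectl get jobs --namespace abc \
  -o jsonpath='{.items[?(@.status.succeeded==1)].metadata.name}' \
  | xargs -r kubectl delete job --namespace abc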

Another way would be to use the TTL feature (ttlSecondsAfterFinished), which deletes a job automatically a specified number of seconds after it finishes; setting it to zero cleans the job up immediately after completion. For details on how to set it up, see the Kubernetes documentation on TTL for finished resources.
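For illustration, here is a minimal sketch of the question's job with ttlSecondsAfterFinished added. Note that on a v1.13 server this is still an alpha feature behind the TTLAfterFinished feature gate, so it may not be available on a managed cluster without enabling that gate:

apiVersion: batch/v1
kind: Job
metadata:
  name: auto
spec:
  ttlSecondsAfterFinished: 100   # delete the Job object 100s after it finishes
  backoffLimit: 5
  activeDeadlineSeconds: 100
  template:
    spec:
      containers:
      - name: auto
        image: busybox
      restartPolicy: Never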

Please let me know if that helped.

-- OhHiMark
Source: StackOverflow