Kubernetes activeDeadlineSeconds not killing process

11/27/2018

I'm using activeDeadlineSeconds in my Job definition, but it doesn't appear to have any effect. I have a CronJob that kicks off a job every minute, and I'd like that job to automatically kill off all of its pods before the next one is created (so 50 seconds seems reasonable). I know there are other ways to do this, but this one is ideal for our circumstances.

I'm noticing that the pods aren't being killed off, however. Are there any limitations with activeDeadlineSeconds? I don't see anything in the JobSpec documentation for K8s 1.7 (https://v1-7.docs.kubernetes.io/docs/api-reference/v1.7/#jobspec-v1-batch), and I've also checked more recent versions.

Here is a condensed version of my CronJob definition -

apiVersion: batch/v2alpha1
kind: CronJob
metadata:
  name: kafka-consumer-cron
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:  # JobSpec
      activeDeadlineSeconds: 50   # This needs to be shorter than the cron interval  ## TODO - NOT WORKING!
      parallelism: 1
      ...
-- s g
kubernetes

2 Answers

12/4/2018

It turns out this is actually a known bug in Kubernetes 1.7. It was fixed in version 1.8:

https://github.com/openshift/origin/issues/10755
https://github.com/kubernetes/kubernetes/issues/32149
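If upgrading past 1.7 isn't an option right away, one possible workaround to sketch (assuming the same condensed CronJob from the question) is to also set activeDeadlineSeconds on the pod template's spec; that pod-level field is applied to each pod individually, counted from the pod's start time:

apiVersion: batch/v2alpha1
kind: CronJob
metadata:
  name: kafka-consumer-cron
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      activeDeadlineSeconds: 50       # Job-level deadline (the field affected by the 1.7 bug)
      template:
        spec:
          activeDeadlineSeconds: 50   # Pod-level deadline, enforced per pod from its start time
          ...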

-- s g
Source: StackOverflow

11/27/2018

You can use concurrencyPolicy: "Replace". This will terminate the currently running job and its pods, then start a new one.

Check the comments here: ConcurrencyPolicy
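As a rough sketch against the condensed CronJob definition from the question (the surrounding field values are assumptions carried over from there), concurrencyPolicy sits at the CronJob spec level, next to schedule:

apiVersion: batch/v2alpha1
kind: CronJob
metadata:
  name: kafka-consumer-cron
spec:
  schedule: "*/1 * * * *"
  concurrencyPolicy: "Replace"  # replace the still-running job (and its pods) when the next run starts
  jobTemplate:
    spec:
      parallelism: 1
      ...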

-- Emruz Hossain
Source: StackOverflow