Why does "kubectl describe job xxx" report TooManyActivePods?

6/8/2017

I'm running the job example from https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    metadata:
      name: pi
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl",  "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never

I got some warnings:

  23m           23m             1       {job-controller }                       Normal          SuccessfulCreate        Created pod: pi-5n0vn
  23m           23m             1       {job-controller }                       Normal          SuccessfulDelete        Deleted pod: pi-5n0vn
  23m           23m             1       {job-controller }                       Normal          SuccessfulDelete        Deleted pod: pi-qlp5f
  23m           23m             1       {job-controller }                       Normal          SuccessfulCreate        Created pod: pi-j9z6s
  23m           23m             1       {job-controller }                       Normal          SuccessfulCreate        Created pod: pi-qlp5f
  23m           23m             1       {job-controller }                       Normal          SuccessfulCreate        Created pod: pi-mf1j9
  23m           23m             1       {job-controller }                       Normal          SuccessfulDelete        Deleted pod: pi-mf1j9
  23m           23m             1       {job-controller }                       Normal          SuccessfulDelete        Deleted pod: pi-j9z6s
  23m           23m             1       {job-controller }                       Normal          SuccessfulDelete        Deleted pod: pi-qlp5f
  23m           23m             1       {job-controller }                       Normal          SuccessfulCreate        Created pod: pi-w3m2m
  23m           23m             1       {job-controller }                       Normal          SuccessfulDelete        Deleted pod: pi-qlp5f
  23m           23m             1       {job-controller }                       Normal          SuccessfulDelete        Deleted pod: pi-5n0vn
  23m           23m             1       {job-controller }                       Normal          SuccessfulCreate        Created pod: pi-nww4h
  23m           23m             2       {job-controller }                       Normal          SuccessfulDelete        Deleted pod: pi-p8pt9
  23m           23m             1       {job-controller }                       Warning         FailedDelete            Error deleting: pods "pi-mf1j9" not found
  23m           23m             1       {job-controller }                       Normal          SuccessfulDelete        Deleted pod: pi-w3m2m
  23m           23m             1       {job-controller }                       Normal          SuccessfulCreate        Created pod: pi-69l9r
  23m           23m             1       {job-controller }                       Normal          SuccessfulCreate        Created pod: pi-p8pt9
  23m           23m             1       {job-controller }                       Normal          SuccessfulDelete        Deleted pod: pi-69l9r
  23m           23m             1       {job-controller }                       Normal          SuccessfulDelete        Deleted pod: pi-p8pt9
  23m           23m             1       {job-controller }                       Warning         TooManyActivePods       Too many active pods running after completion count reached
  23m           23m             1       {job-controller }                       Warning         TooManyActivePods       Too many active pods running after completion count reached
  23m           23m             1       {job-controller }                       Warning         TooManyActivePods       Too many active pods running after completion count reached

Why does it start so many pods and then delete them?

Also, I can't delete the job:

[root@c3-sa-i2-20151229-buf023 ~]# kubectl delete job pi
error: timed out waiting for "pi" to be synced
-- x1957
kubernetes

1 Answer

6/8/2017

Try adding --grace-period=0 --force to your delete command.

https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods

The kubectl delete command supports the --grace-period= option which allows a user to override the default and specify their own value. The value 0 force deletes the pod. In kubectl version >= 1.5, you must specify an additional flag --force along with --grace-period=0 in order to perform force deletions.
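Putting the two flags together, the force-delete of the stuck Job from the question would look like the sketch below (the job name pi and pod names come from the question's events; exact behavior of force deletion depends on your kubectl/cluster version):

```shell
# Force-delete the stuck Job immediately, skipping the grace period.
# In kubectl >= 1.5, --force is required alongside --grace-period=0.
kubectl delete job pi --grace-period=0 --force

# If any of the Job's pods linger afterwards, they can be force-deleted
# the same way, e.g. one of the pod names from the events above:
kubectl delete pod pi-w3m2m --grace-period=0 --force
```

Note that a force delete only removes the object from the API server immediately; the kubelet may still be cleaning up the containers in the background.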

-- Janos Lenart
Source: StackOverflow