K8 Pod Lifetime: Is Cleanup Necessary?

3/7/2018

The Kubernetes Docs say the following:

In general, Pods do not disappear until someone destroys them. This might be a human or a controller. The only exception to this rule is that Pods with a phase of Succeeded or Failed for more than some duration (determined by the master) will expire and be automatically destroyed.

What is the default value for this duration and how do I set it? My pods also never seem to enter the Succeeded or Failed phase; instead they show Completed or Error, respectively. Is this to be expected, or are the docs out of date?

I check the pod phases using kubectl get pods --show-all, where information about them seems to persist. Is there any additional cleanup necessary? Running kubectl get pods without --show-all does not show any pods after they are destroyed.
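For reference, a minimal manual cleanup against a live cluster might look like the following (the pod name dummy.3 comes from the manifest below; the field-selector form assumes a kubectl version that supports it):

```shell
# Delete a single finished pod by name (dummy.3 is the pod defined below).
kubectl delete pod dummy.3

# Or, on kubectl versions that support field selectors, bulk-delete all
# pods that reached the Succeeded phase.
kubectl delete pods --field-selector=status.phase=Succeeded
```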

I am creating pods with kubectl apply -f k8/dummy-pod.yaml and the following yaml file:

apiVersion: v1
kind: Pod
metadata:
  name: dummy.3
  labels:
    vara: a
    role: idk
spec:
  hostNetwork: true
  restartPolicy: Never
  containers:
  - image: gcr.io/gv-test-196801/dummy:v2
    name: dummy-1
-- Geige V
kubernetes

1 Answer

3/7/2018

I believe this documentation is out of date.
Pod garbage collection based on a TTL was abandoned in favor of a threshold on the number of terminated pods, set with the --terminated-pod-gc-threshold flag on the kube-controller-manager (see the kube-controller-manager reference docs).
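As a sketch, on a kubeadm-style cluster the flag is typically added to the kube-controller-manager static pod manifest; the path and threshold value here are illustrative, so check your own distribution:

```yaml
# /etc/kubernetes/manifests/kube-controller-manager.yaml (excerpt, illustrative)
spec:
  containers:
  - command:
    - kube-controller-manager
    # Keep at most this many terminated pods before the garbage collector
    # starts deleting the oldest ones. The upstream default is 12500;
    # a value <= 0 disables this garbage collection entirely.
    - --terminated-pod-gc-threshold=100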

Currently, deleting a DaemonSet, Deployment, ReplicaSet, or StatefulSet will orphan its pods by default.
You can work around this by enabling cascading deletes.
This behavior will change in 1.10:

Prior to apps/v1 the default garbage collection policy for Pods in a DaemonSet, Deployment, ReplicaSet, or StatefulSet, was to orphan the Pods. That is, if you deleted one of these kinds, the Pods that they owned would not be deleted automatically unless cascading deletion was explicitly specified.

See the Kubernetes blog.
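To request cascading deletion explicitly at the API level, the DELETE call can carry a DeleteOptions body with a propagationPolicy; a hedged sketch, where my-rs is a hypothetical ReplicaSet:

```yaml
# Body for a DELETE request against e.g.
# /apis/apps/v1beta2/namespaces/default/replicasets/my-rs
# propagationPolicy may be one of:
#   Orphan     - leave the pods behind (the pre-apps/v1 default quoted above)
#   Background - delete the owner, then garbage-collect the pods asynchronously
#   Foreground - delete the dependent pods first, then the owner
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Foreground
```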

-- stacksonstacks
Source: StackOverflow