Jenkins via Helm on GKE creates and does not remove slave pod for every build

10/3/2018

I'm using a Jenkins setup on GKE installed via the standard Helm chart. My builds are consistently failing, which I'm trying to troubleshoot, but in addition a new slave pod is created on every build attempt (with a pod name like jenkins-slave-3wsb7). Almost all of them go to a Completed state after the build fails, and then the pod lingers in my GKE dashboard and in the list from kubectl get pods. I currently have 80+ pods showing as a result.

Is this expected behavior? Is there a work around to clean up old Completed pods?

Thanks.

-- Murcielago
google-cloud-platform
google-kubernetes-engine
jenkins
kubernetes
kubernetes-helm

2 Answers

10/4/2018

If you are using Kubernetes 1.12 or later, the ttlSecondsAfterFinished field was conveniently introduced in the Job spec: a finished Job (and its pods) is cleaned up automatically once the TTL expires. Note that it's alpha in 1.12, behind the TTLAfterFinished feature gate.

apiVersion: batch/v1
kind: Job
metadata:
  name: job-with-ttl
spec:
  ttlSecondsAfterFinished: 100  # <== cleaned up 100s after the Job finishes
  template:
    spec:
      containers:
      - name: myjob
        image: myimage
        command: ["run_some_batch_job"]
      restartPolicy: Never
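Before cleaning anything up, you can confirm the lingering pods are the finished ones. This is a sketch assuming the build agents end up in the Succeeded phase (shown as Completed by kubectl), as described in the question:

```shell
# List only pods whose phase is Succeeded (displayed as "Completed"),
# i.e. the finished jenkins-slave-* pods left behind by old builds.
kubectl get pods --field-selector=status.phase=Succeeded
```

The field selector filters server-side, so this works even with hundreds of pods in the namespace.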
-- Rico
Source: StackOverflow

10/3/2018

As a workaround, to clean up a completed pod:

kubectl delete pod NAME --grace-period=0 --force

(--grace-period=0 together with --force deletes the pod immediately, skipping graceful termination.)
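Deleting 80+ pods one at a time is tedious; they can also be removed in bulk with a field selector. A sketch, assuming all the stale pods are in the Succeeded phase and in the current namespace:

```shell
# Delete all Succeeded ("Completed") pods in one shot instead of
# running `kubectl delete pod NAME` for each one.
kubectl delete pods --field-selector=status.phase=Succeeded
```

Pods stuck in Failed can be removed the same way with status.phase=Failed.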
--
Source: StackOverflow