Kubernetes delete pod job

10/11/2018

I wanted to know: is it possible to have a job in Kubernetes that runs every hour and deletes certain pods? I need this as a temporary stopgap to fix an issue.

-- user1555190
kubernetes
kubernetes-helm

3 Answers

10/11/2018

Use a CronJob to run the Job every hour.
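A minimal hourly skeleton could look like the following. This is my sketch, not from the answer: the name, label selector, and image are placeholders, the ServiceAccount is the one set up in the RBAC steps below, and the API version is batch/v1beta1 as was current in 2018 (it is batch/v1 on modern clusters):

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hourly-pod-cleanup
spec:
  schedule: "0 * * * *"              # top of every hour
  jobTemplate:
    spec:
      template:
        spec:
          # ServiceAccount with delete permissions; see the RBAC steps below
          serviceAccountName: pod-cleaner
          containers:
          - name: cleanup
            image: bitnami/kubectl   # placeholder: any image that ships kubectl
            command: ["kubectl", "delete", "pod", "-l", "app=broken-app"]
          restartPolicy: Never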

The K8S API can be accessed from a Pod with the proper permissions. When a Pod is created, the namespace's default ServiceAccount is assigned to it. That default ServiceAccount has no RoleBinding, so neither it nor the Pod has permission to invoke the API.

If a Role (with permissions) were created and bound to the default ServiceAccount, every Pod in the namespace would get those permissions by default. So it's better to create a new ServiceAccount instead of modifying the default one.

So, here are the steps for RBAC (a sketch of the manifests follows the list):

  • Create a ServiceAccount
  • Create a Role with proper permissions (deleting pods)
  • Map the ServiceAccount with the Role using RoleBinding
  • Use the above ServiceAccount in the Pod definition
  • Create a pod/container with the code/commands to delete the pods
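
A hedged sketch of those objects, assuming everything lives in the default namespace and using pod-cleaner as a placeholder name throughout:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: pod-cleaner
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-cleaner
  namespace: default
rules:
- apiGroups: [""]                   # "" is the core API group, where pods live
  resources: ["pods"]
  verbs: ["get", "list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-cleaner
  namespace: default
subjects:
- kind: ServiceAccount
  name: pod-cleaner
  namespace: default
roleRef:
  kind: Role
  name: pod-cleaner
  apiGroup: rbac.authorization.k8s.io

Referencing serviceAccountName: pod-cleaner in the Pod spec (step 4) then gives the container a token that is allowed to delete Pods in that namespace.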

I know it's a bit confusing, but that's the way K8S works.

-- Praveen Sripati
Source: StackOverflow

10/11/2018

Yes, it's possible.

I think the easiest way is to call the Kubernetes API directly from the Job. Assuming RBAC is configured, something like this:

apiVersion: batch/v1
kind: Job
metadata:
  name: cleanup
spec:
  template:
    spec:
      # serviceAccountName belongs in the Pod template spec, not the Job spec
      serviceAccountName: service-account-that-has-access-to-api
      containers:
      - name: cleanup
        image: image-that-has-curl
        # run through a shell so $(cat ...) is evaluated; as plain container
        # args the token expression would be sent as a literal string
        command:
        - sh
        - -c
        - >
          curl -ik -X DELETE
          -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"
          https://kubernetes.default.svc.cluster.local/api/v1/namespaces/{namespace}/pods/{name}
      restartPolicy: Never
  backoffLimit: 4

You can also run a kubectl proxy sidecar and connect to the cluster over localhost; the Kubernetes documentation on accessing the API from a Pod has more information.
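
Roughly, the sidecar approach looks like this (my sketch; the container names and the kubectl image are placeholders). Since kubectl proxy keeps running, the Pod never completes on its own, so this pattern fits a long-running Pod better than a Job:

      containers:
      - name: cleanup
        image: image-that-has-curl
        command:
        - sh
        - -c
        # no Authorization header needed: the proxy authenticates for us
        - curl -X DELETE http://localhost:8001/api/v1/namespaces/{namespace}/pods/{name}
      - name: proxy
        image: bitnami/kubectl       # placeholder: any image that ships kubectl
        command: ["kubectl", "proxy", "--port=8001"]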

Running plain kubectl in a pod is also an option: Kubernetes - How to run kubectl commands inside a container?
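
In that case the container only needs a kubectl binary; the in-cluster config (the ServiceAccount token and CA certificate mounted into the Pod) is picked up automatically. A minimal container spec, with the image and target as placeholders:

      containers:
      - name: cleanup
        image: bitnami/kubectl       # placeholder: any image that ships kubectl
        command: ["kubectl", "delete", "pod", "{name}", "-n", "{namespace}"]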

-- Rico
Source: StackOverflow

10/11/2018

There is possibly another workaround.

You could add a liveness probe (super easy if you have none already) that does not start until an hour has passed and then always fails:

livenessProbe:
  tcpSocket:
    port: 1234              # assumes nothing listens on this port
  initialDelaySeconds: 3600 # wait one hour before the first probe

This waits 3600 seconds (1 hour) and then tries to connect to port 1234; since nothing listens there, the probe fails and the kubelet kills and restarts the container (not the pod!).

-- Andreas Wederbrand
Source: StackOverflow