I can delete a deployment with the kubectl CLI, but is there a way to make my deployment auto-destroy itself once it has finished? For my situation, we are kicking off a long-running process in a Docker container on AWS EKS. When I check the status, it is 'Running', and then some time later the status is 'Completed'. So is there any way to get the Kubernetes pod to auto-destroy once it has finished?
kubectl run some-deployment-name --image=path_to_image
kubectl get pods
# the above command returns...
some-deployment-name1212-75bfdbb99b-vt622 0/1 Running 2 23s
# and then some time later...
some-deployment-name1212-75bfdbb99b-vt622 0/1 Completed 2 15m
Once it is complete, I would like it to be destroyed, without me having to call another command.
This question is really about Jobs rather than Deployments: the Kubernetes Deployment abstraction creates a ReplicaSet that keeps its pods running indefinitely, whereas a Kubernetes Job runs pods to completion.
A Job is created with kubectl run when you specify the --restart=OnFailure option (on the kubectl versions this answer targets; the run generators were later removed in 1.18, where kubectl create job is the replacement). These Jobs are not cleaned up by the cluster unless you delete them manually with kubectl delete job <job-name>. More info in the Kubernetes Jobs documentation.
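A minimal sketch of this, with the name and image as placeholders taken from the question:

# --restart=OnFailure makes kubectl run create a Job instead of a Deployment
kubectl run some-job-name --image=path_to_image --restart=OnFailure

# The Job and its pod remain after completion until deleted manually:
kubectl get jobs
kubectl delete job some-job-name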
If you are using Kubernetes 1.12 or later, a new field was introduced in the Job spec: ttlSecondsAfterFinished. You can use it to have the cluster clean up finished Jobs automatically (note that in 1.12 it was an alpha feature behind the TTLAfterFinished feature gate). Another, more time-consuming option would be to write your own Kubernetes controller that cleans up finished Jobs.
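A minimal sketch of such a Job manifest, assuming a placeholder name and image and an arbitrary TTL of 100 seconds:

apiVersion: batch/v1
kind: Job
metadata:
  name: long-running-process        # placeholder name
spec:
  ttlSecondsAfterFinished: 100      # delete the Job 100s after it finishes
  template:
    spec:
      containers:
      - name: long-running-process
        image: path_to_image        # placeholder image from the question
      restartPolicy: OnFailure

Once the TTL expires after the Job finishes (whether Complete or Failed), the TTL controller deletes the Job cascadingly, including its pods.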
A CronJob is created if you specify both the --restart=OnFailure and --schedule="..." options. The Jobs it creates get cleaned up automatically: because they run on a regular schedule, the CronJob controller only keeps a limited history of finished Jobs (by default the last 3 successful and 1 failed).
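For example, a sketch with a placeholder name, image, and a schedule that runs every five minutes (on the older kubectl versions where run still accepts --schedule):

kubectl run some-cron-name --image=path_to_image --restart=OnFailure --schedule="*/5 * * * *"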
More info on kubectl run in the kubectl reference documentation.