I have encountered a problem where a pod does not stop immediately even after I delete it.
What should be fixed so that it terminates normally?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cmd-example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cmd-example
  template:
    metadata:
      labels:
        app: cmd-example
    spec:
      terminationGracePeriodSeconds: 30
      containers:
      - name: cmd-container
        image: alpine:3.8
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        command: ["/bin/sh"]
        args: ["-c", "while true; do exec sleep 100;done"]
$ kubectl apply -f deployments.yaml
$ kubectl delete -f 020-deployments.yaml
The output of kubectl get po -w is:
cmd-example-5cccf79598-zpvmz   1/1   Running       0   2s
cmd-example-5cccf79598-zpvmz   1/1   Terminating   0   6s
cmd-example-5cccf79598-zpvmz   0/1   Terminating   0   37s
cmd-example-5cccf79598-zpvmz   0/1   Terminating   0   38s
cmd-example-5cccf79598-zpvmz   0/1   Terminating   0   38s
This should finish faster. It took about 30 seconds to complete, probably because SIGKILL is sent when terminationGracePeriodSeconds (30s) expires.
Why is the pod not cleaned up immediately by SIGTERM?
What should be fixed?
I confirmed this behavior in the following environment.
Your pod literally does nothing. If you just want something you can use for occasional interactive debugging "inside the cluster", consider kubectl run to get a one-off interactive container:
kubectl run debug --rm -it --image=alpine:3.8 -- /bin/sh
In terms of the command your pod spec is trying to run, rewriting it in shell script form:
#!/bin/sh
# Forever:
while true
do
  # Replace this shell with a process that sleeps for
  # 100 seconds, then exits
  exec sleep 100
  # The shell no longer exists and you'll never get here
done
I'm not clear on what the pod is trying to do, but at least it won't exit if you remove the exec. (It will still sit in an idle loop forever.)
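To see the effect of exec, here is a minimal local sketch (run on any Linux box, no cluster needed): on the first loop iteration the shell replaces itself with sleep, so the process you end up with is sleep, not sh, and the loop can never repeat.

```shell
#!/bin/sh
# Run the pod's original command locally: exec replaces the shell with
# sleep on the first loop iteration.
sh -c 'while true; do exec sleep 100; done' &
pid=$!
sleep 1
# The process under $pid is now "sleep", not "sh" -- the shell is gone.
ps -o comm= -p "$pid"   # → sleep
kill "$pid"             # clean up (outside a container, sleep obeys SIGTERM)
```

Inside the container the situation is worse: that sleep runs as PID 1, and a PID 1 with no handler installed ignores signals with default dispositions such as SIGTERM, which is why the pod sits out the whole grace period.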
The problem is that this shell does not stop even when it receives a SIGTERM signal.
This can be fixed by using the trap command:
command: ["/bin/sh"]
args: ["-c", "trap 'exit 0' 15; while true; do exec sleep 100 & wait $!; done"]
After deleting, the pod was cleaned up right away!
img-example-d68954677-mwsqp   1/1   Running       0   2s
img-example-d68954677-mwsqp   1/1   Terminating   0   8s
img-example-d68954677-mwsqp   0/1   Terminating   0   10s
img-example-d68954677-mwsqp   0/1   Terminating   0   11s
img-example-d68954677-mwsqp   0/1   Terminating   0   11s
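You can verify the trap behavior locally with a plain shell (a sketch of the same loop, no cluster needed). Because sleep is backgrounded with & and waited on via wait $!, the shell stays alive to run the trap handler the moment signal 15 arrives:

```shell
#!/bin/sh
# The fixed loop, run locally. Output is discarded; only the exit status
# of the trapped shell matters here.
sh -c 'trap "exit 0" 15; while true; do sleep 100 & wait $!; done' >/dev/null 2>&1 &
pid=$!
sleep 1                  # give the loop time to install the trap
kill -TERM "$pid"        # same signal the kubelet sends on pod deletion
wait "$pid"
echo "exit status: $?"   # → exit status: 0
```

The key point is that wait is interruptible: while the shell blocks in wait, SIGTERM is delivered immediately and the trap runs exit 0, instead of the signal being postponed until the current sleep finishes.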
Hiroki Matsumoto, the pod termination is behaving just as it was designed to. As you can find in the documentation section on Pods:
Because pods represent running processes on nodes in the cluster, it is important to allow those processes to gracefully terminate when they are no longer needed (vs being violently killed with a KILL signal and having no chance to clean up).
Long story short (based on the official documentation):

1) When you run kubectl delete -f deployments.yaml, you send a delete command with a grace period (30 seconds by default).

2) When you run kubectl get pods, you can see the pod in the Terminating state.

3) The kubelet sees this state and the pod starts to shut down: the main process in each container is sent SIGTERM.

4) After the grace period is over, any processes still running are killed with SIGKILL.
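The steps above can be sketched locally with a plain shell. This is only a rough model of the kubelet's TERM → grace → KILL sequence (it skips preStop hooks and API bookkeeping), using a hypothetical 2-second grace period instead of 30 and a loop that ignores SIGTERM standing in for the stuck container:

```shell
#!/bin/sh
GRACE=2  # stand-in for terminationGracePeriodSeconds (30 in the pod spec)

# A process that ignores SIGTERM, like the original unresponsive pod
sh -c 'trap "" 15; while true; do sleep 1; done' >/dev/null 2>&1 &
pid=$!

kill -TERM "$pid"            # step 3: the polite request
sleep "$GRACE"               # ...the grace period passes...
if kill -0 "$pid" 2>/dev/null; then
  kill -KILL "$pid"          # step 4: SIGKILL, no chance to clean up
  echo "grace period expired: sent SIGKILL"   # → printed, TERM was ignored
fi
wait "$pid" 2>/dev/null || true
```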
So to delete a pod immediately, you have to lower the grace period to 0 and run a forced/immediate deletion:
kubectl delete -f deployments.yaml --grace-period=0 --force
This results in an instant deletion.