I am testing lifecycle hooks. postStart works fine, but as far as I can tell preStop never gets executed. There is another answer, but it does not work, and if it did work it would actually contradict the k8s documentation. From the docs:
PreStop
This hook is called immediately before a container is terminated due to an API request or management event such as liveness probe failure, preemption, resource contention and others. A call to the preStop hook fails if the container is already in terminated or completed state.
So, the API request part makes me think I can simply do kubectl delete pod POD, and I am good.
More from the docs (pod shutdown process):
1.- User sends command to delete Pod, with default grace period (30s)
2.- The Pod in the API server is updated with the time beyond which the Pod is considered “dead” along with the grace period.
3.- Pod shows up as “Terminating” when listed in client commands
4.- (simultaneous with 3) When the Kubelet sees that a Pod has been marked as terminating because the time in 2 has been set, it begins the pod shutdown process.
4.1.- If one of the Pod’s containers has defined a preStop hook, it is invoked inside of the container. If the preStop hook is still running after the grace period expires, step 2 is then invoked with a small (2 second) extended grace period.
4.2.- The container is sent the TERM signal. Note that not all containers in the Pod will receive the TERM signal at the same time and may each require a preStop hook if the order in which they shut down matters.
...
So, since the pod goes into Terminating when you do kubectl delete pod POD, I assume I can do it.
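Concretely, what I run to trigger it is just this (POD being the pod's name; the default grace period applies):

kubectl delete pod POD   # step 1 of the shutdown process above, default 30s grace period
kubectl get pods -w      # the pod shows as Terminating while the kubelet runs the shutdown steps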
According to the other answer, I can't do this; the way is to do a rolling update instead. Well, I tried that in every possible way and it didn't work either.
My tests:
I have a deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-deploy
spec:
  replicas: 1
  template:
    metadata:
      name: lifecycle-demo
      labels:
        lifecycle: demo
    spec:
      containers:
      - name: nginx
        image: nginx
        lifecycle:
          postStart:
            exec:
              command:
              - /bin/sh
              - -c
              - echo "Hello at" `date` > /usr/share/post-start
          preStop:
            exec:
              command:
              - /bin/sh"
              - -c
              - echo "Goodbye at" `date` > /usr/share/pre-stop
        volumeMounts:
        - name: hooks
          mountPath: /usr/share/
      volumes:
      - name: hooks
        hostPath:
          path: /usr/hooks/
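For completeness, this is roughly how I create it and check the postStart side (POD stands for whatever pod name kubectl get pods shows for the deployment):

kubectl apply -f deploy.yaml
kubectl get pods -l lifecycle=demo              # pod created by the deployment
kubectl exec POD -- cat /usr/share/post-start   # post-start shows up here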
I expect the pre-stop and post-start files to be created in /usr/hooks/ on the host (the node where the pod is running). post-start is there, but pre-stop never shows up.
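This is how I check on the node (assuming I can get a shell on it; the node name comes from the pod spec):

kubectl get pod POD -o jsonpath='{.spec.nodeName}'   # find the node the pod runs on
# then, from a shell on that node:
ls -l /usr/hooks/                                    # post-start is present, pre-stop is not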
- kubectl delete pod POD, and it didn't work.
- kubectl replace -f deploy.yaml with a different image; when I do kubectl get rs, I can see the new replicaSet created, but the file isn't there.
- kubectl set image ..., and again I can see the new replicaSet created, but the file isn't there.

What I have not tried is to bomb the container and break it by setting a low CPU limit, but that's not what I need.
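Also, is checking the pod's events the right way to see whether the hook is even being attempted? Something along these lines (POD being the terminating pod; I believe the kubelet reports hook failures as warning events):

kubectl describe pod POD                                      # Events section at the bottom
kubectl get events --field-selector involvedObject.name=POD   # look for FailedPreStopHook / FailedPostStartHook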
Any idea under what circumstances the preStop hook actually gets triggered?