We are using a Helm chart to deploy our application in a Kubernetes cluster.
We have a StatefulSet and a headless Service. To initialize mTLS, we have created a 'Job' kind, and in 'command' we pass shell & Python scripts as arguments. We have also created a 'CronJob' kind to renew the certificate.
We have written a 'docker-entrypoint.sh' inside the Docker image for some initialization work and to generate TLS certificates.
Questions to ask:
What are the other steps taken by Kubernetes? Would you also share container insights?
Kubernetes, not Helm, will restart a failed container by default, unless you set restartPolicy: Never in the pod spec.
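As a rough sketch (the pod name, image, and script path below are placeholders, not taken from your chart), disabling restarts looks like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mtls-init                # placeholder name
spec:
  restartPolicy: Never           # default is Always; Never tells the kubelet not to restart a failed container
  containers:
    - name: init
      image: registry.example.com/mtls-init:1.0   # placeholder image
      command: ["/docker-entrypoint.sh"]           # placeholder script path
```

Note that for pods created by a Job or CronJob, only Never and OnFailure are valid values for restartPolicy.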
Restarting a container is exactly the same as starting it for the first time, so on a restart you can expect things to happen the same way they did when the container started for the first time.
Internally, the kubelet agent running on each Kubernetes node delegates the task of starting a container to an OCI-compliant container runtime such as Docker, containerd, etc., which then spins up the Docker image as a container on the node.
I would expect the entrypoint script to be executed on both a start and a restart of a container.
Does it deploy a new Docker image if the pod fails/restarts?
It creates a new container with the same image as specified in the pod spec.
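Whether that image is pulled from the registry again or reused from the node's local cache is governed by imagePullPolicy. A minimal sketch of the relevant part of the pod spec (the image name is a placeholder):

```yaml
containers:
  - name: app
    image: registry.example.com/my-app:1.4.2   # placeholder; the same image reference is reused on restart
    imagePullPolicy: IfNotPresent              # default for a fixed tag: reuse the cached image; Always re-pulls on every container creation
```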
Does 'job' & 'cronjob' execute if container restarts ?
If a container that is part of a CronJob fails, Kubernetes will keep restarting it (unless restartPolicy: Never is set in the pod spec) until the Job is considered failed. Check this for how to make a CronJob not restart a container on failure. You can specify backoffLimit to control the number of times it will retry before the Job is considered failed.
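A minimal sketch of what that could look like for your certificate-renewal CronJob (the name, schedule, image, and script path are all placeholders):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cert-renewal                   # placeholder name
spec:
  schedule: "0 3 * * *"                # placeholder schedule
  jobTemplate:
    spec:
      backoffLimit: 3                  # retry at most 3 times before the Job is marked failed
      template:
        spec:
          restartPolicy: Never         # a failed attempt is not restarted in place; each retry gets a new pod
          containers:
            - name: renew-cert
              image: registry.example.com/cert-tools:1.0   # placeholder image
              command: ["/scripts/renew-cert.sh"]          # placeholder script
```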
Scaling up is the equivalent of scheduling and starting yet another instance of the same container on the same or an altogether different Kubernetes node.
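For your StatefulSet that simply means raising spec.replicas. A minimal sketch (all names, labels, and the image are assumed, not taken from your chart):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app
spec:
  serviceName: my-app-headless   # the headless Service from the question (name assumed)
  replicas: 3                    # raising this schedules additional pods, possibly on other nodes
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: registry.example.com/my-app:1.4.2   # placeholder image
```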
As a side note, you should use a higher-level abstraction such as a Deployment instead of a bare Pod, because when a bare Pod fails Kubernetes tries to restart it on the same node, but with a Deployment Kubernetes will also try to start the replacement pod on other nodes if it is not able to start it on its currently scheduled node.
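A minimal Deployment sketch (again, names, labels, and the image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2                    # the Deployment's ReplicaSet keeps this many pods running
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: registry.example.com/my-app:1.4.2   # placeholder image; a replacement pod can land on any suitable node
```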