I have 6 replicas of a pod running which I would like to restart/recreate every 5 minutes.
This needs to be a rolling update, so that not all pods are terminated at once and there is no downtime. How do I achieve this?
I tried using a CronJob, but it does not seem to be working:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: scheduled-pods-recreate
spec:
  schedule: "*/5 * * * *"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: ja-engine
            image: app-image
            imagePullPolicy: IfNotPresent
          restartPolicy: OnFailure
Although the CronJob was created successfully and scheduled as per the description below, it seems to never have run:
Name:                          scheduled-pods-recreate
Namespace:                     jk-test
Labels:                        <none>
Annotations:                   <none>
Schedule:                      */5 * * * *
Concurrency Policy:            Forbid
Suspend:                       False
Starting Deadline Seconds:     <unset>
Selector:                      <unset>
Parallelism:                   <unset>
Completions:                   <unset>
Pod Template:
  Labels:  <none>
  Containers:
   ja-engine:
    Image:        image_url
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Last Schedule Time:            Tue, 19 Feb 2019 10:10:00 +0100
Active Jobs:                   scheduled-pods-recreate-1550567400
Events:
  Type    Reason            Age   From                Message
  ----    ------            ----  ----                -------
  Normal  SuccessfulCreate  23m   cronjob-controller  Created job scheduled-pods-recreate-1550567400
So, first of all, how do I ensure that it is actually running so that the pods are recreated?
Also, how can I ensure there is no downtime?
The updated version of the CronJob:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - kubectl patch deployment runners -p '{"spec":{"template":{"spec":{"containers":[{"name":"jp-runner","env":[{"name":"START_TIME","value":"'$(date +%s)'"}]}]}}}}' -n jp-test
          restartPolicy: OnFailure
The pods are not starting, with the message Back-off restarting failed container and the error given below:
State:          Terminated
  Reason:       Error
  Exit Code:    127
There is no rolling-restart functionality in Kubernetes at the moment, but you can use the following command as a workaround to restart all pods in a specific deployment. Updating an environment variable changes the pod template, which triggers a new rollout according to the deployment's update strategy.
(replace the deployment name and container name with the real ones)
kubectl patch deployment mydeployment -p '{"spec":{"template":{"spec":{"containers":[{"name":"my-pod-name","env":[{"name":"START_TIME","value":"'$(date +%s)'"}]}]}}}}'
To schedule it, you can create a cron task on the master node to run this command periodically. The user owning the task needs a working kubectl configuration (~/.kube/config) with permissions to patch the mentioned Deployment object.
Default cluster admin configuration (it is usually created by kubeadm init) can be copied from /etc/kubernetes/admin.conf:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
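If copying the cluster admin configuration is broader than you want, a namespaced Role and RoleBinding along the following lines should be enough for the patch. This is only a sketch: the role name and the cron-restarter user are hypothetical, so adjust them to match the user in your kubeconfig and the namespace of your deployment.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-patcher   # hypothetical name
  namespace: jp-test         # namespace of the deployment
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployment-patcher   # hypothetical name
  namespace: jp-test
subjects:
- kind: User
  name: cron-restarter       # hypothetical user from the kubeconfig
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: deployment-patcher
  apiGroup: rbac.authorization.k8s.io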
Two types of deployment update strategy can be specified: Recreate (.spec.strategy.type==Recreate) and RollingUpdate (.spec.strategy.type==RollingUpdate).
Only the RollingUpdate strategy lets you avoid service downtime. You can specify the maxUnavailable and maxSurge parameters in the deployment YAML to control the rolling update process.
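For example, a minimal sketch of the relevant part of a Deployment spec that keeps all 6 replicas serving while they are replaced one at a time (the values are just an example):
spec:
  replicas: 6
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never take an existing pod down before its replacement is ready
      maxSurge: 1         # allow one extra pod above the desired count during the rollout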