What I would like to do is run some backup scripts on each Kubernetes node periodically. I want it to run inside the Kubernetes cluster, as opposed to just adding the script to each node's crontab, because the backup will be stored on a volume mounted to the node by Kubernetes. The exact volume differs depending on the configuration, but it could be a CIFS filesystem mounted by a Flex plugin or an awsElasticBlockStore.
It would be perfect if a CronJob were able to template a DaemonSet (instead of being fixed to a jobTemplate) and if it were possible to set the DaemonSet restart policy to OnFailure.
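For context, the CronJob API today can only template a Job via jobTemplate, and a DaemonSet's pod template must use restartPolicy: Always, so the combination described above cannot be expressed. A minimal sketch of what is currently supported (the schedule, image, and command are placeholders):

apiVersion: batch/v1                  # batch/v1beta1 on older clusters
kind: CronJob
metadata:
  name: node-backup
spec:
  schedule: "0 23 * * *"              # placeholder schedule
  jobTemplate:                        # only a Job can be templated here, not a DaemonSet
    spec:
      template:
        spec:
          restartPolicy: OnFailure    # allowed for Jobs; the Job still runs on a single node
          containers:
            - name: backup
              image: example/backup:latest   # hypothetical image
              command: ["/bin/sh", "-c", "do_backup"]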
I would like to avoid defining n different CronJobs, one for each of the n nodes, and then tying them to their nodes with nodeSelectors, since that would be inconvenient to maintain in an environment where the node count changes dynamically.
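To illustrate the approach I want to avoid: each of the n CronJobs would look roughly like the sketch above, differing only in its name and a nodeSelector pinning it to one particular node (the hostname is hypothetical):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: node-backup-worker-1                       # one CronJob per node
spec:
  schedule: "0 23 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          nodeSelector:
            kubernetes.io/hostname: worker-1       # ties this CronJob to a single node
          containers:
            - name: backup
              image: example/backup:latest         # hypothetical image
              command: ["/bin/sh", "-c", "do_backup"]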
As far as I can see, the problem was discussed here without any clear conclusion: https://github.com/kubernetes/kubernetes/issues/36601
Do you have any hacks or tricks to achieve this?
Is this still the best way to mimic a CronJob with a DaemonSet template, i.e. run a cron schedule on all the nodes (based on a node selector)?
I'm not against it, but it's a bummer that I need to keep these pods running the whole time. It would be nice to have the k8s scheduler handle all of this for me.
You can use a DaemonSet with the following bash script:
# Loop forever; run the backup once per day inside the 23:00-23:05 window.
while :; do
    currenttime=$(date +%H:%M)
    # Lexicographic string comparison works because of the fixed %H:%M format.
    if [[ "$currenttime" > "23:00" ]] && [[ "$currenttime" < "23:05" ]]; then
        do_something
        # Notify if the backup exited with a non-zero status.
        test "$?" -gt 0 && notify_failed_job
        sleep 300   # move past the window so the backup does not rerun within it
    else
        sleep 60
    fi
done
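A minimal sketch of the DaemonSet that would keep this loop running on every node; the image, script path, and hostPath backup volume are assumptions and would be replaced by whatever volume (CIFS via Flex, EBS, etc.) your environment actually uses:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-backup
spec:
  selector:
    matchLabels:
      app: node-backup
  template:
    metadata:
      labels:
        app: node-backup
    spec:
      # DaemonSet pods must use restartPolicy: Always, hence the endless sleep loop above.
      containers:
        - name: backup
          image: example/backup:latest           # hypothetical image containing the loop script
          command: ["bash", "/scripts/backup-loop.sh"]
          volumeMounts:
            - name: backup-target
              mountPath: /backup                 # where the backup destination is mounted
      volumes:
        - name: backup-target
          hostPath:                              # placeholder; swap for the real backup volume
            path: /var/backups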