I have a Kubernetes cluster where the same application runs multiple times, each in a different namespace. Imagine
ns=app1 name=app1
ns=app2 name=app2
ns=app3 name=app3
[...]
ns=app99 name=app99
Now I need to execute a script every 10 minutes in all of those pods. The path to the script is the same every time.
Is there a 'best way' to achieve this?
I was thinking of running a kubectl image as a CronJob with something like this:
kubectl get pods -A --no-headers -o=custom-columns='NS:.metadata.namespace,NAME:.metadata.name,IMG:.spec.containers[*].image' \
  | grep 'registry.local/app-v1' | xargs -n3 sh -c 'kubectl exec -n "$0" "$1" -- /usr/bin/scrub.sh'
but I am pretty sure this is not the right way to go about this.
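For what it's worth, here is a sketch of a script such a CronJob container could run. The helper name `pick_app_pods` is my own; only the image name and script path come from the question, and the CronJob's service account would need RBAC permissions to list pods and exec cluster-wide.

```shell
#!/bin/sh
# pick_app_pods reads "namespace name image" lines (as produced by the
# custom-columns output below) and prints "namespace name" for every pod
# whose image list contains the target image.
pick_app_pods() {
  target="$1"
  while read -r ns name img; do
    case "$img" in
      *"$target"*) printf '%s %s\n' "$ns" "$name" ;;
    esac
  done
}

# Inside the CronJob pod this would be driven by kubectl, e.g.:
# kubectl get pods -A --no-headers \
#   -o custom-columns='NS:.metadata.namespace,NAME:.metadata.name,IMG:.spec.containers[*].image' \
#   | pick_app_pods registry.local/app-v1 \
#   | while read -r ns name; do kubectl exec -n "$ns" "$name" -- /usr/bin/scrub.sh; done
```

Keeping the filtering in a plain stdin/stdout function makes the selection logic easy to check without touching the cluster.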
As @Argha Sadhu and I mentioned, one option would be to create CronJobs for all the pods, but that would generate 100 pods every 10 minutes. As @LucidEx noted, that would be fine with cloud storage he didn't have to manage, but not in his environment:
Concerning the storage: it would be fine if it were some cloud storage I didn't have to care about. But since it's a shared Ceph storage, with all its overheads (especially RAM and CPU) when you claim a volume, and the need to have volumes zeroed on delete, creating and deleting 100 storage claims every 10 minutes just isn't viable in my environment. – LucidEx
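For completeness, the "one CronJob per namespace" option could be scripted roughly as below. Only the namespace pattern, image, schedule, and script path come from the question; the `emit_cmd` helper and the CronJob name `scrub` are my assumptions, and each run spawns a fresh pod from the app image rather than exec'ing into the running one.

```shell
# One CronJob per app namespace (app1..app99). emit_cmd only prints each
# kubectl command so the loop can be reviewed first; pipe the loop into sh
# to actually create the CronJobs.
emit_cmd() {
  printf 'kubectl create cronjob scrub -n %s --image=registry.local/app-v1 --schedule="*/10 * * * *" -- /usr/bin/scrub.sh\n' "$1"
}
for i in $(seq 1 99); do
  emit_cmd "app$i"
done
```

This is the variant that generates ~100 pods (and, with per-pod volume claims, ~100 PVCs) every 10 minutes, which is exactly the cost discussed above.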
Other options can be found at this older Stack Overflow question; a similar question was asked here.
As @LucidEx mentioned:
I'll probably roll with a bash loop/routine instead of that python code snippet but will go with that approach.
That Python code snippet is here.