I was wondering if it is possible to run a specific command (for example: echo "foo") at a specific time in all existing pods (including pods outside the default namespace). It would be like a CronJob, the only difference being that I want to specify/deploy it in one place only. Is that even possible?
There you go:
for ns in $(kubectl get ns -oname | awk -F "/" '{print $2}'); do for pod in $(kubectl get po -n $ns -oname | awk -F "/" '{print $2}'); do kubectl exec $pod -n $ns -- echo foo; done; done
It will return an error if echo (or the command) is not available in the container. Other than that, it should work.
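If you would rather have the loop keep going quietly when a container does not have the command, a small variation of the same idea (just discarding the error output; the "skipped" message is my own addition) could look like this:
for ns in $(kubectl get ns -o name | awk -F "/" '{print $2}'); do
  for pod in $(kubectl get po -n "$ns" -o name | awk -F "/" '{print $2}'); do
    # Ignore pods whose container cannot run the command
    kubectl exec "$pod" -n "$ns" -- echo foo 2>/dev/null || echo "skipped $ns/$pod"
  done
done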
It is possible. Please find the steps I followed; hope it helps you.
First, create a simple script that reads each pod's namespace and name, execs into the pod, and runs the command.
import os, sys
import logging
from datetime import datetime

logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)

# Timestamp used in the file name created inside each pod
dt = datetime.now()
ts = dt.strftime("%d-%m-%Y-%H-%M-%S-%f")

# List every pod in every namespace; [1:] skips kubectl's header row
pods = os.popen("kubectl get po --all-namespaces").readlines()[1:]
for pod in pods:
    ns = pod.split()[0]
    po = pod.split()[1]
    try:
        h = os.popen("kubectl -n %s exec -i %s -- hostname" % (ns, po)).read()
        os.popen("kubectl -n %s exec -i %s -- touch /tmp/foo-%s.txt" % (ns, po, ts))
        logging.debug("Executed on %s" % h)
    except Exception as e:
        logging.error(e)
Next, Dockerize the above script, then build and push the image.
FROM python:3.8-alpine
ENV KUBECTL_VERSION=v1.18.0
WORKDIR /foo
ADD https://storage.googleapis.com/kubernetes-release/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl .
RUN chmod +x kubectl &&\
mv kubectl /usr/local/bin
COPY foo.py .
CMD ["python", "foo.py"]
Later we'll use this image in the CronJob. You can see I have installed kubectl in the Dockerfile to run the kubectl commands. But that alone is not sufficient; we also need a ClusterRole and ClusterRoleBinding for the service account that runs the CronJob.
I have created a namespace foo and bound foo's default service account to the ClusterRole I created, as shown below.
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: foo
rules:
- apiGroups: [""]
  resources: ["pods", "pods/exec"]
  verbs: ["get", "list", "create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: foo
subjects:
- kind: ServiceAccount
  name: default
  namespace: foo
roleRef:
  kind: ClusterRole
  name: foo
  apiGroup: rbac.authorization.k8s.io
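Assuming the manifest above is saved as rbac.yaml (the file name is just an example), it can be applied like this:
kubectl create namespace foo
kubectl apply -f rbac.yaml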
Now the default service account of the foo namespace has permission to get, list, and exec into all pods in the cluster.
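You can sanity-check the binding before scheduling anything, for example with kubectl auth can-i while impersonating the service account:
kubectl auth can-i list pods --all-namespaces --as=system:serviceaccount:foo:default
kubectl auth can-i create pods --subresource=exec --as=system:serviceaccount:foo:default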
Finally, create a CronJob in the foo namespace (so it runs with that namespace's default service account) to run the task.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: foo
spec:
  schedule: "15 9 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: foo
            image: harik8/sof:62177831
            imagePullPolicy: Always
          restartPolicy: OnFailure
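Apply it to the foo namespace, and if you don't want to wait for the schedule you can trigger a run by hand (cronjob.yaml and foo-manual are example names):
kubectl apply -f cronjob.yaml -n foo
kubectl create job foo-manual --from=cronjob/foo -n foo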
Log in to the pods and check: the job should have created a file with a timestamp in the /tmp directory of each pod.
$ kubectl exec -it app-59666bb5bc-v6p2h -- sh
# ls -lah /tmp
-rw-r--r-- 1 root root 0 Jun 4 09:15 foo-04-06-2020-09-15-06-792614.txt
Logs from the CronJob pod:
error: cannot exec into a container in a completed pod; current phase is Failed
error: cannot exec into a container in a completed pod; current phase is Succeeded
DEBUG:root:Executed on foo-1591262100-798ng
DEBUG:root:Executed on grafana-5f6f8cbf75-jtksp
DEBUG:root:Executed on istio-egressgateway-557dcf8d8-npfnd
DEBUG:root:Executed on istio-ingressgateway-6489d9556d-2dp7j
command terminated with exit code 126
DEBUG:root:Executed on OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: \"hostname\": executable file not found in $PATH": unknown
DEBUG:root:Executed on istiod-774777b79-mvmqm
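The exec errors above come from pods that have already completed (Succeeded/Failed). If you want to avoid them, one optional tweak (not part of the original script) is to list only running pods:
kubectl get po --all-namespaces --field-selector=status.phase=Running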
It is possible but a bit complicated, and you would need to write everything yourself, as there are no automatic tools to do that as far as I'm aware.
You could use the Kubernetes API to collect all pod names and then loop over them, running kubectl exec <pod_name> <command> against each one.
To list all pods in a cluster, use GET /api/v1/pods; this will also list the system ones.
Such a script could then be run by a Kubernetes CronJob at your specified time; a rough sketch is shown below.
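For illustration only, here is a minimal, untested sketch of that approach, assuming kubectl and jq are available in the image the CronJob runs (the jq filter, the echo command, and the "skipped" fallback are my own assumptions):
#!/bin/sh
# List every pod in the cluster through the API, then run the command in each one.
# Requires a service account with get/list on pods and create on pods/exec.
kubectl get --raw /api/v1/pods \
  | jq -r '.items[] | "\(.metadata.namespace) \(.metadata.name)"' \
  | while read -r ns pod; do
      kubectl exec -n "$ns" "$pod" -- echo foo 2>/dev/null || echo "skipped $ns/$pod"
    done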