I'm trying to schedule a CronJob that runs a kubectl command, but the CronJob never starts a pod. This is my CronJob:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: mariadump
  namespace: my-namespace
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: mariadbdumpsa
          containers:
          - name: kubectl
            image: garland/kubectl:1.10.4
            command:
            - /bin/sh
            - -c
            - kubectl get pods;echo 'DDD'
          restartPolicy: OnFailure
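As a sanity check, the manifest can be validated without creating anything on the cluster; the dry-run flag below assumes a reasonably recent oc/kubectl client (older clients use plain --dry-run):

# Client-side validation only; nothing is persisted
oc create -f cron.yaml --dry-run=client -o yaml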
I create the CronJob on OpenShift with:
oc create -f .\cron.yaml
and get the following results:
PS C:\Users\mymachine> oc create -f .\cron.yaml
cronjob.batch/mariadump created
PS C:\Users\mymachine> oc get cronjob -w
NAME        SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
mariadump   */1 * * * *   False     0        <none>          22s
mariadump   */1 * * * *   False     1        10s             40s
mariadump   */1 * * * *   False     0        20s             50s
PS C:\Users\mymachine> oc get pods -w
NAME   READY   STATUS   RESTARTS   AGE
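When the CronJob shows an active Job but no pod ever appears, the events attached to the Job usually say why pod creation failed. A quick way to check (the job name below is illustrative; take the real one from oc get jobs):

# List the Jobs spawned by the CronJob
oc get jobs -n my-namespace

# Read the Events section at the bottom of the describe output
oc describe job mariadump-1616089440 -n my-namespace

# Or scan recent events for the whole namespace
oc get events -n my-namespace --sort-by=.metadata.creationTimestamp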
So the CronJob with the service account does not start a pod. However, if I change to this CronJob (removing the serviceAccountName):
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: mariadump
  namespace: my-namespace
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: kubectl
            image: garland/kubectl:1.10.4
            command:
            - /bin/sh
            - -c
            - kubectl get pod;echo 'DDD'
          restartPolicy: OnFailure
the pod is created as expected, although it then fails because the default service account has no permissions:
PS C:\Users\myuser> oc get cronjob -w
NAME        SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
mariadump   */1 * * * *   False     0        <none>          8s
mariadump   */1 * * * *   False     1        3s              61s
PS C:\Users\myuser> oc get pods -w
NAME                         READY   STATUS             RESTARTS   AGE
mariadump-1616089500-mnfxs   0/1     CrashLoopBackOff   1          8s
PS C:\Users\myuser> oc logs mariadump-1616089500-mnfxs
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:my-namespace:default" cannot list resource "pods" in API group "" in the namespace "my-namespace"
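That error comes from the pod's default service account, which has no RBAC grants. A quick way to check what a given service account is allowed to do, assuming an oc client that includes the auth can-i subcommand:

# Expect "no" for the default service account
oc auth can-i list pods -n my-namespace --as=system:serviceaccount:my-namespace:default

# The same check for the dedicated service account once the Role/RoleBinding are in place
oc auth can-i list pods -n my-namespace --as=system:serviceaccount:my-namespace:mariadbdumpsa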
To give the CronJob the proper permissions, I used this template to create the Role, the RoleBinding, and the ServiceAccount:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: my_namespace
  name: mariadbdump
rules:
- apiGroups:
  - extensions
  - apps
  resources:
  - deployments
  - replicasets
  verbs:
  - 'patch'
  - 'get'
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: mariadbdump
  namespace: my_namespace
subjects:
- kind: ServiceAccount
  name: mariadbdumpsa
  namespace: my_namespace
roleRef:
  kind: Role
  name: mariadbdump
  apiGroup: ""
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: mariadbdumpsa
  namespace: my_namespace
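Since the CronJob itself runs in my-namespace, a quick diagnostic is to check whether the service account and binding actually exist in that same namespace:

# The service account must live in the CronJob's namespace
oc get serviceaccount mariadbdumpsa -n my-namespace

# Confirm the Role and RoleBinding are there and wired to the right subject
oc get role,rolebinding mariadbdump -n my-namespace
oc describe rolebinding mariadbdump -n my-namespace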
Can anyone help me understand why the CronJob with the ServiceAccount is not working?
Thanks
With this YAML it is actually working. The key differences from the template above are that the RBAC objects are now created in my-namespace (the earlier template used my_namespace, so the mariadbdumpsa service account did not exist in the CronJob's namespace) and the Role grants the pod verbs that kubectl needs:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: my-namespace
  name: mariadbdump
rules:
- apiGroups:
  - ""
  resources:
  - deployments
  - replicasets
  - pods
  - pods/exec
  verbs:
  - 'watch'
  - 'get'
  - 'create'
  - 'list'
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: mariadbdump
  namespace: my-namespace
subjects:
- kind: ServiceAccount
  name: mariadbdumpsa
  namespace: my-namespace
roleRef:
  kind: Role
  name: mariadbdump
  apiGroup: ""
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: mariadbdumpsa
  namespace: my-namespace
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: mariadump
  namespace: my-namespace
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: mariadbdumpsa
          containers:
          - name: kubectl
            image: garland/kubectl:1.10.4
            command:
            - /bin/sh
            - -c
            - kubectl exec $(kubectl get pods | grep Running | grep 'mariadb' | awk '{print $1}') -- /opt/rh/rh-mariadb102/root/usr/bin/mysqldump --skip-lock-tables -h 127.0.0.1 -P 3306 -u userdb --password=userdbpass databasename >/tmp/backup.sql;kubectl cp my-namespace/$(kubectl get pods | grep Running | grep 'mariadbdump' | awk '{print $1}'):/tmp/backup.sql my-namespace/$(kubectl get pods | grep Running | grep 'mariadb' | awk '{print $1}'):/tmp/backup.sql;echo 'Backup done'
          restartPolicy: OnFailure
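To verify the whole setup end to end, a minimal check could look like this (the generated job name is illustrative; the real one carries the schedule timestamp):

# Create or update everything in the file and watch the schedule fire
oc apply -f cron.yaml
oc get cronjob mariadump -n my-namespace -w

# After the first run, list the generated Jobs and read the logs of one of them
oc get jobs -n my-namespace
oc logs job/mariadump-1616089500 -n my-namespace    # "Backup done" signals success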