Set up RBAC in Kubernetes for a CronJob running kubectl

10/9/2019
➜  ~ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:54Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.10", GitCommit:"e3c134023df5dea457638b614ee17ef234dc34a6", GitTreeState:"clean", BuildDate:"2019-07-08T03:40:54Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}

I'm trying to run kubectl from a CronJob to change the number of replicas in a Deployment.

I've created the CronJob and its Role like this, following the advice in https://stackoverflow.com/a/54908449/3477266:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: scheduled-autoscaler-service-account
  namespace: default

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: scheduler-autoscaler-role
rules:
- apiGroups:
  - extensions
  - apps
  resources:
  - deployments
  verbs:
  - patch
  - get
  - list

---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: schedule-autoscaler-role-binding
subjects:
- kind: ServiceAccount
  name: scheduled-autoscaler-service-account
  namespace: default
roleRef:
  kind: Role
  name: schedule-autoscaler-role
  apiGroup: ""

---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: adwords-api-scale-up-cron-job
spec:
  schedule: "*/2 * * * *"
  jobTemplate:
    spec:
      activeDeadlineSeconds: 100
      template:
        spec:
          serviceAccountName: scheduled-autoscaler-service-account
          containers:
          - name: adwords-api-scale-up-container
            image: bitnami/kubectl:1.15-debian-9
            command:
              - bash
            args:
              - "-xc"
              - |
                kubectl scale --replicas=2 --v=7 deployment/adwords-api-deployment
          restartPolicy: OnFailure

However I am getting the following error in the pods running this job:

Error from server (Forbidden): deployments.extensions "adwords-api-deployment" is forbidden: User "system:serviceaccount:default:scheduled-autoscaler-service-account" cannot get resource "deployments" in API group "extensions" in the namespace "default"

How can I debug this? It seems to me that I've granted all the permissions the error message complains about, but it's still not working.

Thanks in advance

UPDATE: I solved my problem. It was just a typo in the Role name referenced by the RoleBinding: the roleRef points at schedule-autoscaler-role, but the Role is actually named scheduler-autoscaler-role.

But I was only able to spot this after learning that I could check permissions with this command:

kubectl auth can-i list deployment --as=system:serviceaccount:default:scheduled-autoscaler-service-account -n default
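
Another quick way to catch this kind of mismatch (using the names from the manifests above) is to compare the binding's roleRef against the Roles that actually exist:

kubectl describe rolebinding schedule-autoscaler-role-binding -n default
kubectl get roles -n default

The describe output lists the Role the binding points at, which in this case didn't match any existing Role.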

I thought it was something more complicated, probably because of my lack of experience with Kubernetes.

-- luislhl
kubectl
kubernetes

2 Answers

10/9/2019

You might have to supply the kubectl binary inside the Job template with a kubeconfig file taken from the source cluster's host machine, so that kubectl running inside the Pod can connect to the target cluster and pick up the cluster details and authentication/authorization settings.

I made some adjustments to the original CronJob config, adding a hostPath volume mount that maps the host machine's kubeconfig directory ($HOME/.kube) into each Pod launched by the job:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: adwords-api-scale-up-cron-job
spec:
  schedule: "*/2 * * * *"
  jobTemplate:
    spec:
      activeDeadlineSeconds: 100
      template:
        spec:
          serviceAccountName: scheduled-autoscaler-service-account
          containers:
          - name: adwords-api-scale-up-container
            image: bitnami/kubectl:1.15-debian-9
            command:
              - bash
            args:
              - "-xc"
              - |
                kubectl scale --replicas=2 --v=7 deployment/adwords-api-deployment
            volumeMounts:
            - name: kubectl-config
              mountPath: /.kube/
              readOnly: true
          volumes:
          - name: kubectl-config
            hostPath:
              path: $HOME/.kube # Replace $HOME with the actual absolute path; hostPath does not expand variables
          restartPolicy: OnFailure
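
One caveat, which is my assumption rather than part of the original config: the kubeconfig is mounted at /.kube/, so kubectl will only pick it up automatically if the container user's home directory resolves to /. If it doesn't, you can point kubectl at the file explicitly through the standard KUBECONFIG environment variable, added to the container spec above:

            env:
            - name: KUBECONFIG    # kubectl honors this variable; the path matches the mountPath above
              value: /.kube/config

Also note that hostPath resolves on whichever node the Pod is scheduled to, so the kubeconfig must exist at that path on every node that can run the job.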

I checked the RBAC rules you've granted by reproducing your scenario in my environment, and they are fine.

-- mk_sta
Source: StackOverflow

10/9/2019

I don't think you can leave apiGroup blank in the binding. Try apiGroup: rbac.authorization.k8s.io?
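
For reference, a roleRef that combines this fix with the role-name typo noted in the question's update would look like this:

roleRef:
  kind: Role
  name: scheduler-autoscaler-role
  apiGroup: rbac.authorization.k8s.io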

-- coderanger
Source: StackOverflow