How to run kubectl within a job in a namespace?

3/27/2020

Hi, I saw documentation showing that kubectl can run inside a pod in the default namespace. Is it possible to run kubectl inside a Job resource in a specified namespace? I did not see any documentation or examples for this.

When I tried adding a service account to the container, I got the error:

Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:my-namespace:internal-kubectl" cannot list resource "pods" in API group "" in the namespace "my-namespace"

This was when I SSH'd into the container and ran kubectl.
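
To double-check the permissions, kubectl's built-in authorization check can be run from outside the pod with an admin kubeconfig; it answers yes or no per verb and resource (names below are the ones from my setup):

$ kubectl auth can-i list pods --as=system:serviceaccount:my-namespace:internal-kubectl -n my-namespace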

Edit:

As I mentioned earlier, I had added the service account based on the documentation. Below is the YAML:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: internal-kubectl  
  namespace: my-namespace   
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: modify-pods
  namespace: my-namespace
rules:
  - apiGroups: [""]
    resources:
      - pods
    verbs:
      - get
      - list
      - delete      
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: modify-pods-to-sa
  namespace: my-namespace
subjects:
  - kind: ServiceAccount
    name: internal-kubectl
roleRef:
  kind: Role
  name: modify-pods
  apiGroup: rbac.authorization.k8s.io      
---
apiVersion: batch/v1
kind: Job
metadata:
  name: testing-stuff
  namespace: my-namespace
spec:
  template:
    metadata:
      name: testing-stuff
    spec:
      serviceAccountName: internal-kubectl
      containers:
      - name: tester
        image: bitnami/kubectl
        command:
         - "bin/bash"
         - "-c"
         - "kubectl get pods"
      restartPolicy: Never 

On running the job, I get the error:

Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:my-namespace:internal-kubectl" cannot list resource "pods" in API group "" in the namespace "my-namespace"
-- Vipin Menon
Tags: containers, jobs, kubernetes, rbac

3 Answers

3/27/2020

Create a service account like this:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: internal-kubectl

Create a ClusterRoleBinding like this:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: modify-pods-to-sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: internal-kubectl
  namespace: default   # required: the namespace the ServiceAccount was created in

Now create the pod with the same config that is given in the documentation.
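
For example, a minimal pod using this service account could look like this (the pod name is hypothetical; the image is reused from the question):

apiVersion: v1
kind: Pod
metadata:
  name: kubectl-test   # hypothetical name
spec:
  serviceAccountName: internal-kubectl
  containers:
  - name: tester
    image: bitnami/kubectl
    command: ["kubectl", "get", "pods"]
  restartPolicy: Never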

-- Sachin Arote
Source: StackOverflow

3/27/2020

When you use kubectl from the pod for any operation, such as getting pods or creating roles and role bindings, it will use the default service account. This service account doesn't have permission to perform those operations by default. So you need to:

  1. Create a service account, role and role binding using a more privileged account. You should have a kubeconfig file with admin (or admin-like) privileges; use that kubeconfig with kubectl from outside the pod to create the service account, role, role binding etc. (see the command after this list).

  2. After that is done, create the pod specifying that service account, and you should be able to perform the operations defined in the role from within this pod using kubectl.
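
For step 1, a sketch of applying the RBAC objects from outside the pod (the kubeconfig path is hypothetical, and rbac.yaml stands for a file containing the ServiceAccount, Role and RoleBinding):

$ kubectl --kubeconfig ~/.kube/admin-config apply -f rbac.yaml

The pod for step 2 then just needs to reference the service account: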


apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: internal-kubectl
  containers:          # a container is required in a pod spec; image reused from the question
  - name: tester
    image: bitnami/kubectl
    command: ["kubectl", "get", "pods"]
-- Arghya Sadhu
Source: StackOverflow

3/30/2020

Is it possible to run kubectl inside a Job resource in a specified namespace? I did not see any documentation or examples for this.

A Job creates one or more Pods and ensures that a specified number of them successfully terminate. That means the permission aspect is the same as for a normal pod, so yes, it is possible to run kubectl inside a Job resource.

TL;DR:

  • Your yaml file is correct; maybe there was something else in your cluster. I recommend deleting and recreating these resources and trying again.
  • Also check the version of your Kubernetes installation and the job image's kubectl version; if they are more than one minor version apart, you may hit unexpected incompatibilities (see the check below).
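
You can compare the client and server versions with kubectl itself; on my 1.17.3 test cluster this prints:

$ kubectl version --short
Client Version: v1.17.3
Server Version: v1.17.3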

Security Considerations:

  • Your job role's scope follows the best practice according to the documentation (a specific role, for a specific user, on a specific namespace).
  • If you use a ClusterRoleBinding with the cluster-admin role it will work, but it's over-permissioned and not recommended, since it gives full admin control over the entire cluster.

Test Environment:

  • I deployed your config on Kubernetes 1.17.3 and ran the job with bitnami/kubectl and bitnami/kubectl:1.17.3. It worked in both cases.
  • To avoid incompatibility, use a kubectl image whose version matches your server's.

Reproduction:

$ cat job-kubectl.yaml 
apiVersion: batch/v1
kind: Job
metadata:
  name: testing-stuff
  namespace: my-namespace
spec:
  template:
    metadata:
      name: testing-stuff
    spec:
      serviceAccountName: internal-kubectl
      containers:
      - name: tester
        image: bitnami/kubectl:1.17.3
        command:
         - "bin/bash"
         - "-c"
         - "kubectl get pods -n my-namespace"
      restartPolicy: Never 

$ cat job-svc-account.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: internal-kubectl  
  namespace: my-namespace   
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: modify-pods
  namespace: my-namespace
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "delete"]      
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: modify-pods-to-sa
  namespace: my-namespace
subjects:
  - kind: ServiceAccount
    name: internal-kubectl
roleRef:
  kind: Role
  name: modify-pods
  apiGroup: rbac.authorization.k8s.io

  • I created two pods just to add some output to the get pods log.
$ kubectl run curl --image=radial/busyboxplus:curl -i --tty --namespace my-namespace
the pod is running
$ kubectl run ubuntu --generator=run-pod/v1 --image=ubuntu -n my-namespace
pod/ubuntu created

  • Then I applied the Job, ServiceAccount, Role and RoleBinding:
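Applying the files shown above, RBAC objects first so the job's pod finds its service account:

$ kubectl apply -f job-svc-account.yaml
$ kubectl apply -f job-kubectl.yaml
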
$ kubectl get pods -n my-namespace
NAME                    READY   STATUS      RESTARTS   AGE
curl-69c656fd45-l5x2s   1/1     Running     1          88s
testing-stuff-ddpvf     0/1     Completed   0          13s
ubuntu                  0/1     Completed   3          63s

  • Now let's check the testing-stuff pod log to see if it logged the command output:
$ kubectl logs testing-stuff-ddpvf -n my-namespace
NAME                    READY   STATUS    RESTARTS   AGE
curl-69c656fd45-l5x2s   1/1     Running   1          76s
testing-stuff-ddpvf     1/1     Running   0          1s
ubuntu                  1/1     Running   3          51s

As you can see, the job ran successfully with the custom ServiceAccount.
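
If you want to block until the job finishes before fetching its log, kubectl wait works as well:

$ kubectl wait --for=condition=complete job/testing-stuff -n my-namespace --timeout=60s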

Let me know if you have further questions about this case.

-- willrof
Source: StackOverflow