I would like to grant a Kubernetes service account the privileges required to execute kubectl --token $token get pod --all-namespaces. I'm familiar with doing this for a single namespace, but I don't know how to do it for all namespaces, including ones that may be created in the future, and without granting the service account full admin privileges.
Currently I receive this error message:
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:kube-system:test" cannot list resource "pods" in API group "" at the cluster scope
What (cluster) roles and role bindings are required?
UPDATE: Assigning the role view to the service account with the following ClusterRoleBinding works and is a step forward. However, I'd like to confine the service account's privileges further, to the minimum required.
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test
subjects:
- kind: ServiceAccount
  name: test
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
The service account's token can be extracted as follows:
secret=$(kubectl get serviceaccount test -n kube-system -o=jsonpath='{.secrets[0].name}')
token=$(kubectl get secret $secret -n kube-system -o=jsonpath='{.data.token}' | base64 --decode -)
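Note that the commands above rely on an auto-created token Secret, which clusters prior to v1.24 provision for every service account; on newer clusters you can mint a token with kubectl create token test -n kube-system instead. Either way, the token is a JWT, and its payload (the identity the API server sees in the Forbidden message above) can be inspected. A minimal sketch, using a synthetic token rather than a real one:

```python
import base64
import json

def jwt_payload(token: str) -> dict:
    """Decode the payload (second segment) of a JWT without verifying it."""
    payload_b64 = token.split(".")[1]
    # JWT segments are base64url-encoded with padding stripped; restore it.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a synthetic service-account token for illustration (header.payload.signature).
claims = {"sub": "system:serviceaccount:kube-system:test"}
def seg(obj):
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).rstrip(b"=").decode()
fake_token = f'{seg({"alg": "none"})}.{seg(claims)}.sig'

print(jwt_payload(fake_token)["sub"])  # → system:serviceaccount:kube-system:test
```

The sub claim is exactly the user name that appeared in the Forbidden error, which is what the ClusterRoleBinding's subject must match.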
A ClusterRole & ClusterRoleBinding are correct when you need all namespaces; just shrink down the permissions:
kind: ServiceAccount
apiVersion: v1
metadata:
  name: all-ns-pod-get
  namespace: your-ns
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: all-ns-pod-get
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: all-ns-pod-get
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: all-ns-pod-get
subjects:
- kind: ServiceAccount
  name: all-ns-pod-get
  namespace: your-ns
Then any pod in the namespace your-ns that runs with serviceAccountName: all-ns-pod-get will have the service account's token mounted automatically. You can use bare kubectl or a Kubernetes SDK inside such a pod without passing any secrets. Note that you don't need to pass --token; just run the command in a pod that uses that ServiceAccount.
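Inside the pod, the mounted credentials live under /var/run/secrets/kubernetes.io/serviceaccount/. As a sketch of what kubectl does under the hood, here is how the raw API request for pods across all namespaces could be built with the Python standard library. The helper name is illustrative; inside a real pod you would read the token from the mounted path and verify TLS against the mounted ca.crt:

```python
import urllib.request

# Path where the service account token is mounted inside a pod.
TOKEN_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"

def build_pods_request(token: str,
                       api_server: str = "https://kubernetes.default.svc") -> urllib.request.Request:
    """Build (but do not send) a GET for pods in all namespaces."""
    return urllib.request.Request(
        f"{api_server}/api/v1/pods",
        headers={"Authorization": f"Bearer {token}"},
    )

# Inside a pod you would use: token = open(TOKEN_PATH).read()
req = build_pods_request("dummy-token")
print(req.full_url)  # https://kubernetes.default.svc/api/v1/pods
```

The /api/v1/pods path is the cluster-scoped pod list, which is exactly the request the ClusterRole above must permit (verb list, resource pods, core API group).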
Here's a good article explaining the concepts https://medium.com/@ishagirdhar/rbac-in-kubernetes-demystified-72424901fcb3
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test
subjects:
- kind: ServiceAccount
  name: test
  namespace: default
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
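The effect of the pod-reader rules can be modeled in a few lines: a request is allowed if any rule matches its API group, resource, and verb. This is a simplified sketch of the evaluation the API server performs (the function name is illustrative, and real RBAC additionally handles wildcards, resourceNames, and non-resource URLs):

```python
# Rules equivalent to the pod-reader ClusterRole above.
RULES = [
    {"apiGroups": [""], "resources": ["pods"], "verbs": ["get", "list", "watch"]},
]

def allowed(verb: str, resource: str, api_group: str = "") -> bool:
    """Return True if any rule grants the verb on the resource (simplified model)."""
    return any(
        api_group in rule["apiGroups"]
        and resource in rule["resources"]
        and verb in rule["verbs"]
        for rule in RULES
    )

print(allowed("list", "pods"))     # True: what `kubectl get pods --all-namespaces` needs
print(allowed("delete", "pods"))   # False: not granted
print(allowed("list", "secrets"))  # False: other resources stay off-limits
```

Because RBAC is purely additive, anything not matched by a rule is denied, which is why this role is safe to bind cluster-wide.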
Deploy a test pod from the sample below:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: test
  name: test
spec:
  serviceAccountName: test
  containers:
  - args:
    - sleep
    - "10000"
    image: alpine
    imagePullPolicy: IfNotPresent
    name: test
    resources:
      requests:
        memory: 100Mi
kubectl exec test -- apk add curl
kubectl exec test -- curl -o /bin/kubectl https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl
kubectl exec test -- sh -c 'chmod +x /bin/kubectl'
master $ kubectl exec test -- sh -c 'kubectl get pods --all-namespaces'
NAMESPACE NAME READY STATUS RESTARTS AGE
app1 nginx-6f858d4d45-m2w6f 1/1 Running 0 19m
app1 nginx-6f858d4d45-rdvht 1/1 Running 0 19m
app1 nginx-6f858d4d45-sqs58 1/1 Running 0 19m
app1 test 1/1 Running 0 18m
app2 nginx-6f858d4d45-6rrfl 1/1 Running 0 19m
app2 nginx-6f858d4d45-djz4b 1/1 Running 0 19m
app2 nginx-6f858d4d45-mvscr 1/1 Running 0 19m
app3 nginx-6f858d4d45-88rdt 1/1 Running 0 19m
app3 nginx-6f858d4d45-lfjx2 1/1 Running 0 19m
app3 nginx-6f858d4d45-szfdd 1/1 Running 0 19m
default test 1/1 Running 0 6m
kube-system coredns-78fcdf6894-g7l6n 1/1 Running 0 33m
kube-system coredns-78fcdf6894-r87mx 1/1 Running 0 33m
kube-system etcd-master 1/1 Running 0 32m
kube-system kube-apiserver-master 1/1 Running 0 32m
kube-system kube-controller-manager-master 1/1 Running 0 32m
kube-system kube-proxy-vnxb7 1/1 Running 0 33m
kube-system kube-proxy-vwt6z 1/1 Running 0 33m
kube-system kube-scheduler-master 1/1 Running 0 32m
kube-system weave-net-d5dk8 2/2 Running 1 33m
kube-system weave-net-qjt76 2/2 Running 1 33m