I have a daemon running on my kubernetes cluster whose purpose is to accept gRPC requests and turn those into commands for creating, deleting, and viewing pods in the k8s cluster. It runs as a service in the cluster and is deployed through helm.
The helm chart creates a service account for the daemon, "tass-daemon", and gives it a cluster role that is supposed to allow it to manipulate pods in a specific namespace, "tass-arrays".
However, I'm finding that the service account does not appear to be working, with my daemon reporting a permissions error when it tries to contact the K8S API server:
2021/03/04 21:17:48 pods is forbidden: User "system:serviceaccount:default:tass-daemon" cannot list resource "pods" in API group "" in the namespace "tass-arrays"
I confirmed that the code works if I use the default service account with a manually added clusterrole, but attempting to do the setup through the helm chart appears to not work.
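For reference, the denial is reproducible outside the daemon by impersonating the service account with kubectl auth can-i (using the account name from the error above):

    kubectl auth can-i list pods --as=system:serviceaccount:default:tass-daemon -n tass-arrays

which answers "no" under this setup.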
However, if I compare the tass-daemon clusterrole to that of admin (which clearly has the permissions to manipulate pods in all namespaces), they appear to be identical:
[maintainer@headnode helm]$ kubectl describe clusterrole admin | grep -i pods
pods [] [] [create delete deletecollection patch update get list watch]
pods/attach [] [] [get list watch create delete deletecollection patch update]
pods/exec [] [] [get list watch create delete deletecollection patch update]
pods/portforward [] [] [get list watch create delete deletecollection patch update]
pods/proxy [] [] [get list watch create delete deletecollection patch update]
pods/log [] [] [get list watch]
pods/status [] [] [get list watch]
[maintainer@headnode helm]$ kubectl describe clusterrole tass-daemon | grep -i pods
pods/attach [] [] [create delete deletecollection patch update get list watch]
pods [] [] [create delete deletecollection patch update get list watch]
pods.apps [] [] [create delete deletecollection patch update get list watch]
pods/status [] [] [get list watch]
Based on this setup, I would expect the tass-daemon service account to have the appropriate permissions for pod management.
The following is my clusterrole.yaml from my helm chart:
{{- if .Values.rbac.create -}}
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  labels:
    app: {{ template "tass-daemon.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
  name: {{ template "tass-daemon.fullname" . }}
  namespace: "tass-arrays"
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - create delete deletecollection patch update get list watch
- apiGroups:
  - ""
  resources:
  - pods/attach
  verbs:
  - create delete deletecollection patch update get list watch
- apiGroups:
  - ""
  resources:
  - pods/status
  verbs:
  - get list watch
- apiGroups:
  - apps
  resources:
  - pods
  verbs:
  - create delete deletecollection patch update get list watch
{{- end -}}
And my clusterrolebinding.yaml:
{{- if .Values.rbac.create -}}
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  labels:
    app: {{ template "tass-daemon.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
  name: {{ template "tass-daemon.fullname" . }}
  namespace: "tass-arrays"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: {{ template "tass-daemon.fullname" . }}
subjects:
- kind: ServiceAccount
  name: {{ template "tass-daemon.fullname" . }}
  namespace: {{ .Release.Namespace }}
{{- end -}}
If I change the roleRef name to "admin", it works, but admin is more permissive than we'd prefer.
And finally here's my serviceaccount.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: {{ template "tass-daemon.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
  name: {{ template "tass-daemon.fullname" . }}
Clearly I'm doing something wrong, so what is the proper way to configure the clusterrole so that my daemon can manipulate pods in the "tass-arrays" namespace?
As I mentioned in the comment section, the apiVersion rbac.authorization.k8s.io/v1beta1 is deprecated; use rbac.authorization.k8s.io/v1 instead. The v1 API is stable, and you should use the stable version whenever possible. Read more: rbac-kubernetes.
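In this chart that is a one-line change at the top of clusterrole.yaml and clusterrolebinding.yaml (sketch of the updated header):

    {{- if .Values.rbac.create -}}
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole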
As for the RBAC problem itself: in your ClusterRole, each verb must be its own list item. Written as a single space-separated string, the whole string is treated as one (invalid) verb, so nothing matches "list" and the requests are forbidden even though kubectl describe looks right. The rules section of your ClusterRole should look like this:
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
See: pod-rbac-forbidden.
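Applying the same one-verb-per-item syntax to the full verb set from your chart would look like this (a sketch; trim it to the verbs the daemon actually needs):

    rules:
    - apiGroups: [""]
      resources: ["pods", "pods/attach"]
      verbs: ["create", "delete", "deletecollection", "patch", "update", "get", "list", "watch"]
    - apiGroups: [""]
      resources: ["pods/status"]
      verbs: ["get", "list", "watch"]

Note that kubectl describe clusterrole prints each rule's verbs as one bracketed list, so eight separate verbs and a single space-separated string look identical there; kubectl get clusterrole tass-daemon -o yaml shows the rules as they are actually stored.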