I need to create ServiceAccounts that can access a GKE cluster. Internally I do this with the following commands:
kubectl create serviceaccount onboarding --namespace kube-system
kubectl apply -f onboarding.clusterrole.yaml
kubectl create clusterrolebinding onboarding --clusterrole=onboarding --serviceaccount=kube-system:onboarding
Where the contents of the file onboarding.clusterrole.yaml are something like this:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: onboarding
rules:
- apiGroups:
  - '*'
  resources:
  - 'namespace,role,rolebinding,resourcequota'
  verbs:
  - '*'
The ServiceAccount resource is created as expected, and the ClusterRole and ClusterRoleBinding also look right, but when I attempt to access the API with the new ServiceAccount's token, I get an authorization failure:
curl -k -X GET -H "Authorization: Bearer [REDACTED]" https://36.195.83.167/api/v1/namespaces
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "namespaces is forbidden: User \"system:serviceaccount:kube-system:onboarding\" cannot list namespaces at the cluster scope: Unknown user \"system:serviceaccount:kube-system:onboarding\"",
  "reason": "Forbidden",
  "details": {
    "kind": "namespaces"
  },
  "code": 403
}
The response suggests an unknown user, but I confirmed the ServiceAccount exists and is listed in the subjects of the ClusterRoleBinding. Is it possible to define a ServiceAccount in this way for GKE? I am using the exact same process successfully on Kubernetes clusters we run in our own datacenters.
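(For reference, a bearer token for the account can be pulled from its auto-created token secret roughly like this; on newer clusters kubectl create token onboarding -n kube-system does the same job:)

# Name of the token secret the token controller created for the ServiceAccount
SECRET=$(kubectl -n kube-system get serviceaccount onboarding -o jsonpath='{.secrets[0].name}')
# The token is stored base64-encoded in the secret data
TOKEN=$(kubectl -n kube-system get secret "$SECRET" -o jsonpath='{.data.token}' | base64 --decode)
curl -k -H "Authorization: Bearer $TOKEN" https://36.195.83.167/api/v1/namespaces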
Can you show the output of kubectl get clusterrolebinding onboarding -o yaml?
This might be a version mismatch: you created a rbac.authorization.k8s.io/v1beta1 ClusterRole, while kubectl create clusterrolebinding creates a rbac.authorization.k8s.io/v1 ClusterRoleBinding. You should upgrade your ClusterRole to rbac.authorization.k8s.io/v1.
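A sketch of the same manifest with only the apiVersion bumped (rules left exactly as in your file):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: onboarding
rules:
- apiGroups:
  - '*'
  resources:
  - 'namespace,role,rolebinding,resourcequota'
  verbs:
  - '*'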
GKE should have the same process. Does your kubectl version match that of the GKE cluster? Not sure if this is the issue, but the ClusterRole needs plural resource names, and the resources have to be given as separate list items:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: onboarding
rules:
- apiGroups:
  - '*'
  resources:
  - namespaces
  - roles
  - rolebindings
  - resourcequotas
  verbs:
  - '*'
Works for me on K8s 1.11.x:
curl -k -X GET -H "Authorization: Bearer [REDACTED]" https://127.0.0.1:6443/api/v1/namespaces
{
  "kind": "NamespaceList",
  "apiVersion": "v1",
  "metadata": {
    "selfLink": "/api/v1/namespaces",
    "resourceVersion": "12345678"
  },
  ...
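If you want to check the authorization path without juggling tokens, impersonating the ServiceAccount with kubectl auth can-i is a quick sanity check (run it against the same cluster context):

# Should print "yes" once the ClusterRole and ClusterRoleBinding are in place
kubectl auth can-i list namespaces --as=system:serviceaccount:kube-system:onboarding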
I see that you are creating the service account, role, and role binding to get API access to your Kubernetes cluster; the only hiccup is that the resources are not configured correctly. Check the Kubernetes documentation on configuring RBAC roles, resources, and verbs, which also provides definitions and examples.
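For example, instead of wildcarding apiGroups you could pin each resource to its API group; a sketch (namespaces and resourcequotas live in the core group, roles and rolebindings in rbac.authorization.k8s.io; trim the verbs to what onboarding actually needs):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: onboarding
rules:
- apiGroups: [""]   # core API group
  resources: [namespaces, resourcequotas]
  verbs: [get, list, watch, create, update, patch, delete]
- apiGroups: [rbac.authorization.k8s.io]
  resources: [roles, rolebindings]
  verbs: [get, list, watch, create, update, patch, delete]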