RBAC rules not working in a cluster created with kubeadm

1/7/2022

In one of our customers' Kubernetes clusters (v1.16.8, set up with kubeadm), RBAC does not work at all. We create a ServiceAccount, a read-only ClusterRole, and a ClusterRoleBinding with the following YAMLs, but when we log in through the dashboard or kubectl, the user can do almost anything in the cluster. What can cause this problem?

kind: ServiceAccount
apiVersion: v1
metadata:
  name: read-only-user
  namespace: permission-manager
secrets:
  - name: read-only-user-token-7cdx2
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-only-user___template-namespaced-resources___read-only___all_namespaces
  labels:
    generated_for_user: ''
subjects:
  - kind: ServiceAccount
    name: read-only-user
    namespace: permission-manager
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: template-namespaced-resources___read-only
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: template-namespaced-resources___read-only
rules:
  - verbs:
      - get
      - list
      - watch
    apiGroups:
      - '*'
    resources:
      - configmaps
      - endpoints
      - persistentvolumeclaims
      - pods
      - pods/log
      - pods/portforward
      - podtemplates
      - replicationcontrollers
      - resourcequotas
      - secrets
      - services
      - events
      - daemonsets
      - deployments
      - replicasets
      - ingresses
      - networkpolicies
      - poddisruptionbudgets
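
A quick way to check what these rules actually grant, independent of how we log in, is to impersonate the ServiceAccount with kubectl auth can-i (the names below come from the manifests above):

# Should be allowed by the ClusterRole:
kubectl auth can-i list pods --all-namespaces --as=system:serviceaccount:permission-manager:read-only-user
# Should be denied (no write verbs are granted):
kubectl auth can-i delete pods -n default --as=system:serviceaccount:permission-manager:read-only-user
# List everything this identity can do in one namespace:
kubectl auth can-i --list -n default --as=system:serviceaccount:permission-manager:read-only-user

If these answer yes/no as expected but a dashboard or kubectl session can still do anything, that session is authenticating as some other identity, not as this ServiceAccount.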

Here is the cluster's kube-apiserver.yaml file content:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.1.42
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --insecure-port=0
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-cluster-ip-range=10.96.0.0/12
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    image: k8s.gcr.io/kube-apiserver:v1.16.8
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 192.168.1.42
        path: /healthz
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-apiserver
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/ca-certificates
      name: etc-ca-certificates
      readOnly: true
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /usr/local/share/ca-certificates
      name: usr-local-share-ca-certificates
      readOnly: true
    - mountPath: /usr/share/ca-certificates
      name: usr-share-ca-certificates
      readOnly: true
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/ca-certificates
      type: DirectoryOrCreate
    name: etc-ca-certificates
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /usr/local/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-local-share-ca-certificates
  - hostPath:
      path: /usr/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-share-ca-certificates
status: {}
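
Note that the flags above include --authorization-mode=Node,RBAC, so RBAC is enabled on this API server. When RBAC is enabled but every login can still do everything, one common cause is an overly broad binding, for example cluster-admin bound to a group such as system:authenticated. A rough way to list which subjects are bound to cluster-admin (a sketch using kubectl's jsonpath filter syntax):

kubectl get clusterrolebindings -o jsonpath='{range .items[?(@.roleRef.name=="cluster-admin")]}{.metadata.name}{"\t"}{.subjects}{"\n"}{end}'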
-- Zekeriya Akgül
kubectl
kubernetes
rbac
roles

1 Answer

1/7/2022

What you have defined only controls what the ServiceAccount is allowed to do; it does not restrict any other identity, so a dashboard or kubectl session that authenticates with different (e.g. admin) credentials is unaffected. Here's a tested spec; create a YAML file with:

apiVersion: v1
kind: Namespace
metadata:
  name: test
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-sa
  namespace: test
---
kind: ClusterRoleBinding  # REMINDER: cluster-wide, not namespace-specific; use RoleBinding for namespace-scoped access.
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test-role-binding
subjects:
- kind: ServiceAccount
  name: test-sa
  namespace: test
- kind: User
  name: someone
  apiGroup: rbac.authorization.k8s.io
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: test-cluster-role
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test-cluster-role
rules:
- verbs:
  - get
  - list
  - watch
  apiGroups:
  - '*'
  resources:
  - configmaps
  - endpoints
  - persistentvolumeclaims
  - pods
  - pods/log
  - pods/portforward
  - podtemplates
  - replicationcontrollers
  - resourcequotas
  - secrets
  - services
  - events
  - daemonsets
  - deployments
  - replicasets
  - ingresses
  - networkpolicies
  - poddisruptionbudgets

Apply the above spec: kubectl apply -f <filename>.yaml
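
On a v1.16 cluster the ServiceAccount still gets an auto-created token Secret, so you can fetch a token and use it to log in as test-sa through the dashboard or a kubeconfig (a sketch; the secret lookup assumes the pre-1.24 auto-token behavior):

SECRET=$(kubectl -n test get sa test-sa -o jsonpath='{.secrets[0].name}')
kubectl -n test get secret "$SECRET" -o jsonpath='{.data.token}' | base64 --decode

Logging in with this token confirms the restrictions apply to the session you actually use.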

It works as expected.

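You can also confirm it from the command line by impersonating the subjects bound above (a sketch using kubectl's --as impersonation):

kubectl auth can-i list pods --as=system:serviceaccount:test:test-sa           # yes
kubectl auth can-i create deployments --as=system:serviceaccount:test:test-sa  # no
kubectl auth can-i get secrets -n test --as=someone                            # yes, via the User subject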

Delete the test resources: kubectl delete -f <filename>.yaml

-- gohm'c
Source: StackOverflow