Service account not respecting its cluster role

11/28/2018

I created a context with a user that has only READ access, but when I log in as this user I can still do whatever I want, like deploying and killing pods. Why is that?


I followed this tutorial.

1) First I created a service account:
kubectl create sa myserviceaccount

2) Now I want a role with minimal permissions (read-only), so I'll use one of the default ClusterRoles, named "view":

 $ kubectl describe clusterrole view
  Resources                                Non-Resource URLs  Resource Names  Verbs
  ---------                                -----------------  --------------  -----
  bindings                                 []                 []              [get list watch]
  configmaps                               []                 []              [get list watch]
  [...]

3) Now I must create a ClusterRoleBinding to bind the service account to the "view" role:

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: crbmyserviceaccount
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: ServiceAccount
  name: myserviceaccount
  namespace: default
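The same binding can also be created imperatively (a sketch, not part of the original tutorial; `kubectl create clusterrolebinding` takes `--clusterrole` and `--serviceaccount=<namespace>:<name>`). The output capture just keeps the sketch harmless on a machine without a reachable cluster:

```shell
# Imperative equivalent of the ClusterRoleBinding manifest above.
# --serviceaccount takes the form <namespace>:<name>.
OUT=$(kubectl create clusterrolebinding crbmyserviceaccount \
        --clusterrole=view \
        --serviceaccount=default:myserviceaccount 2>&1 || true)
echo "$OUT"
```

As a side note, the manifest above uses the `v1beta1` RBAC API; `rbac.authorization.k8s.io/v1` has been GA since Kubernetes 1.8 and is preferred.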

4) Now we must find the name of the associated secret:

kubectl get secrets -> myserviceaccount-token-bmwwd

5) Run the following and save the displayed token somewhere (to be used later):

kubectl describe secret myserviceaccount-token-xxxxx
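The token shown by `kubectl describe secret` is a JWT whose payload names the service account it identifies, so you can sanity-check a saved token by base64url-decoding its middle segment. A sketch that fabricates a payload and round-trips it (a real token can't be reproduced here, but it carries the same `sub` claim):

```shell
# Fabricated JWT-style payload; a real service-account token's middle segment
# decodes to JSON with the same "sub" claim. Round-trip: encode, then decode.
PAYLOAD='{"sub":"system:serviceaccount:default:myserviceaccount"}'
ENCODED=$(printf '%s' "$PAYLOAD" | base64 | tr -- '+/' '-_' | tr -d '=\n')
# JWT segments drop base64 padding; restore it before decoding.
case $(( ${#ENCODED} % 4 )) in
  2) ENCODED="${ENCODED}==" ;;
  3) ENCODED="${ENCODED}=" ;;
esac
DECODED=$(printf '%s' "$ENCODED" | tr -- '-_' '+/' | base64 -d)
echo "$DECODED"
```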

Now that we have everything we need, we can go to a Kubernetes client machine and create the context.

6) Configuring the cluster in the kubeconfig:

kubectl config set-cluster myawesomecluster --server=IP-OF-MY-CLUSTER

7) Creating the credentials:

kubectl config set-credentials myawesomecluster-myserviceaccount --token=TOKEN-FROM-STEP-5

8) Creating the context

kubectl config set-context myawesomecluster --cluster=myawesomecluster --user=myawesomecluster-myserviceaccount --namespace=default
kubectl config use-context myawesomecluster
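The quickest way to interrogate RBAC without actually mutating anything is `kubectl auth can-i`, which can impersonate the service account via `--as` (run it as an admin user). A sketch, assuming the account name from the steps above:

```shell
# With only the "view" binding in place, the expected answer is "no".
# `can-i` exits nonzero when the answer is "no", so don't treat that as an error.
OUT=$(kubectl auth can-i create deployments \
        --as=system:serviceaccount:default:myserviceaccount 2>&1 || true)
echo "$OUT"
```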

Taaddaaaa!

Now that the context is set, I should be able to READ every resource but not create any. Unfortunately, I can still create deployments with kubectl, and even delete pods, etc.

This should return an access denied error: kubectl create -f someFileWithDeployment

What am I doing wrong?
Thanks!


Edit - Adding output of service accounts and config view for debugging purposes:

$kubectl get sa
NAME               SECRETS   AGE
api-explorer       1         39h
default            1         5d22h
myserviceaccount   1         17h


$kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/xxxx/rbac/accountTest/api-explorer/context/team-a-decoded.crt
    server: http://127.0.0.1:8080
  name: cfc
- cluster:
    server: http://127.0.0.1:8080
  name: myawesomecluster
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: http://localhost:8080
  name: test-cluster
contexts:
- context:
    cluster: cfc
    user: user
  name: cfc
- context:
    cluster: ""
    user: ""
  name: default
- context:
    cluster: myawesomecluster
    namespace: default
    user: myawesomecluster-myserviceaccount
  name: myawesomecluster
current-context: myawesomecluster
kind: Config
preferences: {}
users:
- name: api-explorer
  user:
    token: ZXlKaGJHY2l[...]
- name: myawesomecluster-myserviceaccount
  user:
    token: eyJhbGci [...]
- name: user
  user:
    token: ZXlKaGJH

Edit 2: Showing the output of get pod kube-apiserver-nodemaster1

$kubectl get pod kube-apiserver-nodemaster1 -n kube-system -o yaml

apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/config.hash: 034c3[...]
    kubernetes.io/config.mirror: 034b3[...]
    kubernetes.io/config.seen: 2018-11-23T09:48:59.766423346Z
    kubernetes.io/config.source: file
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: 2018-11-23T09:50:29Z
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver-nodemaster1
  namespace: kube-system
  resourceVersion: "804213"
  selfLink: /api/v1/namespaces/kube-system/pods/kube-apiserver-nodemaster1
  uid: 36340f[...]
spec:
  containers:
  - command:
    - kube-apiserver
    - --allow-privileged=true
    - --apiserver-count=3
    - --authorization-mode=Node,RBAC
    - --bind-address=0.0.0.0
    - --endpoint-reconciler-type=lease
    - --insecure-bind-address=127.0.0.1
    - --insecure-port=8080
    - --kubelet-preferred-address-types=InternalDNS,InternalIP,Hostname,ExternalDNS,ExternalIP
    - --runtime-config=admissionregistration.k8s.io/v1alpha1
    - --service-node-port-range=30000-32767
    - --storage-backend=etcd3
    - --advertise-address=10.10.10.101
    - --client-ca-file=/etc/kubernetes/ssl/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/ssl/etcd/ca.pem
    - --etcd-certfile=/etc/kubernetes/ssl/etcd/node-nodemaster1.pem
    - --etcd-keyfile=/etc/kubernetes/ssl/etcd/node-nodemaster1-key.pem
    - --etcd-servers=https://10.10.10.101:2379,https://10.10.10.102:2379,https://10.10.10.103:2379
    - --kubelet-client-certificate=/etc/kubernetes/ssl/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/ssl/apiserver-kubelet-client.key
    - --proxy-client-cert-file=/etc/kubernetes/ssl/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/ssl/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/ssl/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-key-file=/etc/kubernetes/ssl/sa.pub
    - --service-cluster-ip-range=10.233.0.0/18
    - --tls-cert-file=/etc/kubernetes/ssl/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/ssl/apiserver.key
    image: gcr.io/google-containers/kube-apiserver:v1.12.2
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 10.10.10.101
        path: /healthz
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 15
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 15
    name: kube-apiserver
    resources:
      requests:
        cpu: 250m
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /etc/kubernetes/ssl
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/pki
      name: etc-pki
      readOnly: true
  dnsPolicy: ClusterFirst
  hostNetwork: true
  nodeName: nodemaster1
  priority: 2000000000
  priorityClassName: system-cluster-critical
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    operator: Exists
  volumes:
  - hostPath:
      path: /etc/kubernetes/ssl
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: etc-pki
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2018-11-23T09:55:05Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2018-11-23T09:55:05Z
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2018-11-23T09:55:05Z
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: 2018-11-23T09:55:05Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://8287[...]
    image: gcr.io/google-containers/kube-apiserver:v1.12.2
    imageID: docker-pullable://gcr.io/google-containers/kube-apiserver@sha256:0949[...]
    lastState:
      terminated:
        containerID: docker://e97[...]
        exitCode: 0
        finishedAt: 2018-11-27T14:18:24Z
        reason: Completed
        startedAt: 2018-11-23T09:49:00Z
    name: kube-apiserver
    ready: true
    restartCount: 1
    state:
      running:
        startedAt: 2018-11-27T14:18:24Z
  hostIP: 10.10.10.101
  phase: Running
  podIP: 10.10.10.101
  qosClass: Burstable
  startTime: 2018-11-23T09:55:05Z
-- Doctor
kube-apiserver
kubectl
kubernetes
rbac

2 Answers

12/4/2018

As @mk_sta explained, I had to use the HTTPS endpoint and not the default HTTP one. The insecure HTTP endpoint is deprecated and will probably be removed, as explained here.

To go through the HTTPS endpoint, replace step 6 with:

kubectl config set-cluster myawesomecluster --server=https://127.0.0.1:6443

You will now probably get an SSL error.
What I did was edit my kubeconfig file (~/.kube/config) and add the key "certificate-authority" with the path where Kubernetes stores its master CA certificate.
The final cluster config now looks like this:

- cluster:
    certificate-authority: /etc/kubernetes/ssl/ca.crt
    server: https://127.0.0.1:6443
  name: secureRemote
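Hand-editing the file works, but `kubectl config set-cluster` can write the same field via its `--certificate-authority` flag (add `--embed-certs=true` to inline the certificate data instead of a path). A sketch using the paths from above; the scratch kubeconfig is only there so the example doesn't touch a real ~/.kube/config:

```shell
# Write the server URL and CA path into a cluster entry; no API server
# connection is needed for this step.
KCFG=$(mktemp)
OUT=$(kubectl config set-cluster secureRemote \
        --kubeconfig="$KCFG" \
        --server=https://127.0.0.1:6443 \
        --certificate-authority=/etc/kubernetes/ssl/ca.crt 2>&1 || true)
echo "$OUT"
rm -f "$KCFG"
```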
-- Doctor
Source: StackOverflow

12/3/2018

I assume that when you use the --insecure-bind-address=127.0.0.1 and --insecure-port=8080 flags in the kube-apiserver configuration, all requests to the Kubernetes API server bypass the RBAC authorization module, as described in the official Kubernetes documentation:
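This is easy to see directly: a plain, unauthenticated HTTP request to the insecure port succeeds, while the same request against the secure port (6443) is rejected without credentials. A sketch against the cluster from the question (`-sS` so connection errors are still printed):

```shell
# No token, no TLS: on the insecure port this request is treated as fully
# authorized and bypasses authentication and RBAC entirely.
OUT=$(curl -sS --max-time 5 \
        http://127.0.0.1:8080/api/v1/namespaces/default/pods 2>&1 || true)
echo "$OUT"
```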

Localhost Port:

  • is intended for testing and bootstrap, and for other components of the master node (scheduler, controller-manager) to talk to the API
  • no TLS
  • default is port 8080, change with --insecure-port flag.
  • default IP is localhost, change with --insecure-bind-address flag.
  • request bypasses authentication and authorization modules.
  • request handled by admission control module(s).
  • protected by need to have host access
-- mk_sta
Source: StackOverflow