Kubernetes: My PodSecurityPolicy is not working or misconfigured

9/23/2019

I'm trying to restrict all pods except a few from running in privileged mode.

So I created two PodSecurityPolicies: one that allows privileged containers and one that restricts them.

[root@master01 vagrant]# kubectl get psp
NAME         PRIV    CAPS   SELINUX    RUNASUSER   FSGROUP    SUPGROUP   READONLYROOTFS   VOLUMES
privileged   true           RunAsAny   RunAsAny    RunAsAny   RunAsAny   false            *
restricted   false          RunAsAny   RunAsAny    RunAsAny   RunAsAny   false            *
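The policies themselves look roughly like this (a sketch reconstructed from the kubectl get psp output above; the "privileged" policy is identical except for privileged: true):

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - '*'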

I then created a ClusterRole that can use the "restricted" pod security policy and bound that role to all serviceaccounts in the cluster:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-restricted
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs:     ['use']
  resourceNames:
  - restricted
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: psp-restricted
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-restricted
subjects:
- kind: Group
  name: system:serviceaccounts
  apiGroup: rbac.authorization.k8s.io

Now I deploy a pod in "privileged" mode, and it still gets created. The annotation on the created pod indicates that the "privileged" PSP was validated at pod creation time. Why is that? The restricted PSP should have been validated, right?

apiVersion: v1
kind: Pod
metadata:
  name: psp-test-pod
  namespace: default
spec:
  serviceAccountName: default
  containers:
    - name: pause
      image: k8s.gcr.io/pause
      securityContext:
        privileged: true
[root@master01 vagrant]# kubectl create -f pod.yaml
pod/psp-test-pod created
[root@master01 vagrant]# kubectl get pod psp-test-pod -o yaml |grep kubernetes.io/psp
    kubernetes.io/psp: privileged

Kubernetes version: v1.14.5

Am I missing something here? Any help is appreciated.

-- Jyothish Kumar S
kubernetes
kubernetes-pod

1 Answer

9/25/2019

Posting the answer to my own question. Hope it will help someone.

All my PodSecurityPolicy configurations are correct. The issue was that I tried to deploy the pod on its own, not via a controller like a Deployment, ReplicaSet, or DaemonSet. Most Kubernetes pods are not created directly by users; they are typically created indirectly as part of a Deployment, ReplicaSet, or other templated controller via the controller manager.

When a pod is deployed on its own, it is created by kubectl (that is, by the user running kubectl), not by the controller manager.

In Kubernetes there is a superuser role named "cluster-admin", and in my case kubectl runs with that role. This "cluster-admin" role has access to all pod security policies, because associating a pod security policy with a role only requires the 'use' verb on the 'podsecuritypolicies' resource in the 'policy' API group.

In the cluster-admin role, the wildcard 'resources' includes 'podsecuritypolicies' and the wildcard 'verbs' includes 'use'. So the cluster-admin role is allowed to use every policy in the cluster, including the privileged one.

[root@master01 vagrant]# kubectl get clusterrole cluster-admin -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: cluster-admin
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
- nonResourceURLs:
  - '*'
  verbs:
  - '*'
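This can be confirmed with kubectl auth can-i. Run as the admin kubeconfig user, both checks below should answer "yes", while impersonating an ordinary serviceaccount (here the default one in the default namespace, matching the setup above) should answer "yes" only for the restricted policy:

# as the cluster-admin user: allowed to use both policies
kubectl auth can-i use podsecuritypolicies/privileged
kubectl auth can-i use podsecuritypolicies/restricted

# as a plain serviceaccount: only the restricted policy is allowed
kubectl auth can-i use podsecuritypolicies/privileged --as=system:serviceaccount:default:default
kubectl auth can-i use podsecuritypolicies/restricted --as=system:serviceaccount:default:default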

pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: psp-test-pod
  namespace: default
spec:
  serviceAccountName: default
  containers:
    - name: pause
      image: k8s.gcr.io/pause
      securityContext:
        privileged: true

I deployed the above pod.yaml with the command kubectl create -f pod.yaml. Since I had created two pod security policies, one restricted and one privileged, the cluster-admin role has access to both. So the pod above launches fine through kubectl, because the cluster-admin role can use the privileged policy (privileged: false also works, because the cluster-admin role can use the restricted policy as well). This situation only occurs when a pod is created directly by kubectl rather than by a controller, or when a pod's serviceaccount has the "cluster-admin" role.

When a pod is created by a Deployment, ReplicaSet, etc., kubectl only creates the controller object; the controller manager then creates the pod, and the pod is admitted only after its permissions (serviceaccount, pod security policies) have been validated.

In the Deployment below, the pod tries to run in privileged mode. In my case this Deployment will fail, because I already made the "restricted" policy the default for all serviceaccounts in the cluster, so no controller-created pod is able to run in privileged mode. If a pod needs to run in privileged mode, allow that pod's serviceaccount to use the "privileged" policy (see the sketch after the Deployment).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pause-deploy-privileged
  namespace: kube-system
  labels:
    app: pause
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pause
  template:
    metadata:
      labels:
        app: pause
    spec:
      serviceAccountName: default
      containers:
      - name: pause
        image: k8s.gcr.io/pause
        securityContext:
          privileged: true
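The rejection from a Deployment like this typically surfaces as an event on its ReplicaSet (for example via kubectl -n kube-system describe rs -l app=pause), not as an error from the kubectl command that created the Deployment.

To let a specific workload run in privileged mode, grant 'use' on the "privileged" policy to that workload's serviceaccount. A minimal sketch, assuming a dedicated serviceaccount named privileged-sa in kube-system (these names are placeholders, not part of the original setup):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: privileged-sa
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-privileged
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs:     ['use']
  resourceNames:
  - privileged
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: psp-privileged
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-privileged
subjects:
- kind: ServiceAccount
  name: privileged-sa
  namespace: kube-system

With that in place, setting serviceAccountName: privileged-sa in the pod template (instead of default) lets the Deployment above create its privileged pod.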
-- Jyothish Kumar S
Source: StackOverflow