Minikube: Restricted PodSecurityPolicy is not restricting when trying to create a privileged container

8/14/2020

I have enabled PodSecurityPolicy in minikube. By default it has created two PSPs, privileged and restricted (as shown by kubectl get psp):

NAME         PRIV    CAPS   SELINUX    RUNASUSER          FSGROUP     SUPGROUP    READONLYROOTFS   VOLUMES
privileged   true    *      RunAsAny   RunAsAny           RunAsAny    RunAsAny    false            *
restricted   false          RunAsAny   MustRunAsNonRoot   MustRunAs   MustRunAs   false            configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim

I have also created a Linux user, kubexz, for which I have created a ClusterRole and RoleBinding that restrict it to managing pods only in the kubexz namespace, using the restricted PSP.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: only-edit
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create", "delete", "deletecollection", "patch", "update", "get", "list", "watch"]
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["restricted"]
  verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubexz-rolebinding
  namespace: kubexz
subjects:
- kind: User
  name: kubexz
  apiGroup: rbac.authorization.k8s.io
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: only-edit
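
After applying both manifests (I saved them together in one file, rbac.yaml, my naming), a kubectl auth can-i check run as the cluster admin (since --as requires impersonation rights) confirms the grant:

kubectl apply -f rbac.yaml
kubectl auth can-i use podsecuritypolicy/restricted --as=kubexz -n kubexz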

I have set up the kubeconfig file in the kubexz user's $HOME/.kube. The RBAC is working fine: as the kubexz user I am only able to create and manage pod resources in the kubexz namespace, as expected.
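
For example, a quick check as the kubexz user confirms the namespace boundary:

kubectl auth can-i create pods -n kubexz     # yes
kubectl auth can-i create pods -n default    # no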

But when I post a pod manifest with securityContext.privileged: true, the restricted PodSecurityPolicy is not stopping me from creating that pod. I should not be able to create a pod with a privileged container, but the pod is getting created. Not sure what I am missing.

apiVersion: v1
kind: Pod
metadata:
  name: new-pod
spec:
  hostPID: true
  containers:
  - name: justsleep
    image: alpine
    command: ["/bin/sleep", "999999"]
    securityContext:
      privileged: true
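
For reference, the PSP admission controller records the policy that admitted a pod in the kubernetes.io/psp annotation, so checking it on the created pod shows which policy actually let it through (I would expect restricted here):

kubectl get pod new-pod -n kubexz -o yaml | grep kubernetes.io/psp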
-- hkboss
kubernetes
kubernetes-rbac
minikube
podsecuritypolicy
rbac

1 Answer

8/18/2020

I have followed PodSecurityPolicy using minikube. This works by default only when using minikube 1.11.1 with Kubernetes 1.16.x or higher.

Note for older versions of minikube:

Older versions of minikube do not ship with the pod-security-policy addon, so the policies that addon enables must be separately applied to the cluster.

What I did:

1. Start minikube with the PodSecurityPolicy admission controller and the pod-security-policy addon enabled.

minikube start --extra-config=apiserver.enable-admission-plugins=PodSecurityPolicy --addons=pod-security-policy

The pod-security-policy addon must be enabled along with the admission controller to prevent issues during bootstrap.
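
You can verify that both pieces are active before testing (the addon name below is the one shipped with minikube):

minikube addons list | grep pod-security-policy
kubectl get podsecuritypolicies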

2. Create an authenticated user:

Note that Kubernetes does not have objects which represent normal user accounts; normal users cannot be added to a cluster through an API call.

However, any user that presents a valid certificate signed by the cluster's certificate authority (CA) is considered authenticated. In this configuration, Kubernetes determines the username from the common name field in the 'subject' of the cert (e.g., "/CN=bob"). From there, the role-based access control (RBAC) sub-system determines whether the user is authorized to perform a specific operation on a resource.

Here you can find an example of how to properly prepare X509 client certs and configure the kubeconfig file accordingly.

The most important part is to properly define the common name (CN) and the organization field (O):

openssl req -new -key DevUser.key -out DevUser.csr -subj "/CN=DevUser/O=development"

The common name (CN) of the subject will be used as the username for the authentication request. The organization field (O) will be used to indicate group membership of the user.
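
To make this concrete, here is a minimal sketch of the full flow against minikube's default CA files (~/.minikube/ca.crt and ~/.minikube/ca.key), reusing the DevUser/development names from the CSR above:

# generate the user's private key (consumed by the CSR command above)
openssl genrsa -out DevUser.key 2048
# sign the CSR with the cluster CA
openssl x509 -req -in DevUser.csr -CA ~/.minikube/ca.crt -CAkey ~/.minikube/ca.key -CAcreateserial -out DevUser.crt -days 365
# register the credentials and a context in kubeconfig
kubectl config set-credentials DevUser --client-certificate=DevUser.crt --client-key=DevUser.key
kubectl config set-context DevUser-context --cluster=minikube --namespace=development --user=DevUser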

Finally, I created your configuration based on a standard minikube setup and can't recreate your issue: the pod was rejected both due to hostPID: true and due to securityContext.privileged: true.

To consider:

a). Verify that your client certificate used for authentication and your kubeconfig file were created/set up properly, especially the common name (CN) and the organization field (O); a quick way to inspect this is shown after this list.

b). Make sure you are switching to the proper context when performing requests on behalf of different users, e.g.:

   kubectl get pods --context=NewUser-context
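
For a), inspecting the subject of the signed certificate and the contexts known to kubectl is usually enough (filenames follow the DevUser example above):

openssl x509 -in DevUser.crt -noout -subject
kubectl config get-contexts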
-- Mark
Source: StackOverflow