I have successfully implemented PodSecurityPolicies (PSP) in my local minikube cluster and am having trouble porting the setup to GKE. My aim for now is simple: don't allow pods that run as UID 0 or request privileged access.
My PSP is simple:
apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
  name: default-psp
spec:
  privileged: false
  runAsUser:
    rule: MustRunAsNonRoot
  # remaining required PSP fields (shown as RunAsAny / * in the kubectl get psp output below)
  seLinux: {rule: RunAsAny}
  supplementalGroups: {rule: RunAsAny}
  fsGroup: {rule: RunAsAny}
  volumes: ['*']
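For reference, a pod that this policy should admit has to declare a non-root user, either through its securityContext or through a numeric non-root USER in the image. A minimal sketch (the pod name and UID here are arbitrary):

apiVersion: v1
kind: Pod
metadata:
  name: nonroot-pod
spec:
  securityContext:
    runAsUser: 1000   # any non-zero UID satisfies MustRunAsNonRoot
  containers:
  - name: app
    image: nginx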
And I've set up an RBAC ClusterRole and ClusterRoleBinding allowing ALL service accounts to use the PSP:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: restrict-root-clusterRole
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames:
  - default-psp
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: restrict-root-clusterRoleBinding
roleRef:
  kind: ClusterRole
  name: restrict-root-clusterRole
  apiGroup: rbac.authorization.k8s.io
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts
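To sanity-check the binding, I impersonate a service account together with the system:serviceaccounts group (impersonation does not pick up group memberships on its own, so the group has to be passed explicitly; the default/default account here is just an example):

$ kubectl auth can-i use podsecuritypolicy/default-psp \
    --as=system:serviceaccount:default:default \
    --as-group=system:serviceaccounts

If the binding is in effect, this should return yes.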
Now I enable PSP in GKE:

$ gcloud beta container clusters update psp-demo --enable-pod-security-policy

And then I notice that GKE creates the following PSPs:
$ k get psp
NAME                           PRIV    CAPS   SELINUX    RUNASUSER   FSGROUP    SUPGROUP   READONLYROOTFS   VOLUMES
gce.event-exporter             false          RunAsAny   RunAsAny    RunAsAny   RunAsAny   false            hostPath,secret
gce.fluentd-gcp                false          RunAsAny   RunAsAny    RunAsAny   RunAsAny   false            configMap,hostPath,secret
gce.persistent-volume-binder   false          RunAsAny   RunAsAny    RunAsAny   RunAsAny   false            nfs,secret
gce.privileged                 true    *      RunAsAny   RunAsAny    RunAsAny   RunAsAny   false            *
gce.unprivileged-addon         false          RunAsAny   RunAsAny    RunAsAny   RunAsAny   false            emptyDir,configMap,secret
I then create my PSP and RBAC rules.
k get psp
NAME                           PRIV    CAPS   SELINUX    RUNASUSER          FSGROUP    SUPGROUP   READONLYROOTFS   VOLUMES
default-psp                    false          RunAsAny   MustRunAsNonRoot   RunAsAny   RunAsAny   false            *
gce.event-exporter             false          RunAsAny   RunAsAny           RunAsAny   RunAsAny   false            hostPath,secret
gce.fluentd-gcp                false          RunAsAny   RunAsAny           RunAsAny   RunAsAny   false            configMap,hostPath,secret
gce.persistent-volume-binder   false          RunAsAny   RunAsAny           RunAsAny   RunAsAny   false            nfs,secret
gce.privileged                 true    *      RunAsAny   RunAsAny           RunAsAny   RunAsAny   false            *
gce.unprivileged-addon         false          RunAsAny   RunAsAny           RunAsAny   RunAsAny   false            emptyDir,configMap,secret
I then spin up a pod that runs as root (the stock nginx image):
apiVersion: v1
kind: Pod
metadata:
  name: root-user-pod
spec:
  containers:
  - name: root-user-pod
    image: nginx
    ports:
    - containerPort: 80
It goes into the Running state, and looking at its annotations, I see:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/limit-ranger: 'LimitRanger plugin set: cpu request for container
      root-user-pod'
    kubernetes.io/psp: gce.privileged
So clearly my default-psp is not being used. I tried editing the gce.privileged PSP, but GKE automatically reverts it to its default privileged state.
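For what it's worth, the admitting policy can be read straight off the annotation (the backslash escapes the dots in the annotation key for jsonpath):

$ kubectl get pod root-user-pod \
    -o jsonpath='{.metadata.annotations.kubernetes\.io/psp}'
gce.privileged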
Next, I tried creating the pod in a dedicated namespace under a dedicated ServiceAccount. My new RBAC rules are:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-user
  namespace: test
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: test
  name: test-psp-role
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames:
  - default-psp
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test-psp-roleBinding
  namespace: test
subjects:
- kind: ServiceAccount
  name: test-user
  namespace: test
roleRef:
  kind: Role
  name: test-psp-role
  apiGroup: rbac.authorization.k8s.io
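A quick impersonation check that the binding grants what I think it does (since the RoleBinding names the service account directly, no group flags are needed this time):

$ kubectl -n test auth can-i use podsecuritypolicy/default-psp \
    --as=system:serviceaccount:test:test-user

If this returns yes, the RBAC side is working and the problem is elsewhere.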
I add serviceAccountName: test-user to my pod manifest, deploy the pod in the test namespace, and it too goes into the Running state:
k get po -n test
NAME            READY   STATUS    RESTARTS   AGE
root-user-pod   1/1     Running   0          7s
With the annotation:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/psp: gce.privileged
  creationTimestamp: "2019-03-12T15:48:11Z"
  name: root-user-pod
  namespace: test
So I'm not sure what to do next. How can I override the default PSPs that GKE creates?
TL;DR - The privileged policy is required for running privileged system pods that provide critical cluster functionality. Your user account has access to use all PodSecurityPolicies.
Use of a specific PodSecurityPolicy can be authorized in 2 different ways:

1. The user creating the pod has use permission on the specific PodSecurityPolicy.
2. The pod's service account has use permission on the specific PodSecurityPolicy.

You can test if your user account has access to use the policy with:
kubectl auth can-i use podsecuritypolicy/${PSP_NAME}
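For instance, checking the privileged policy from the question (a hypothetical session; kubectl auth can-i prints yes or no):

$ kubectl auth can-i use podsecuritypolicy/gce.privileged
yes

A yes here means the admission controller may validate your pods against gce.privileged, which accepts the root pod unchanged, regardless of what the pod's service account is allowed to use.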
You can test whether the pod's service account has access to a PSP with:
kubectl --as=system:serviceaccount:${NAMESPACE:-default}:${SERVICE_ACCOUNT:-default} auth can-i use podsecuritypolicy/${PSP_NAME}
If you create the pod through a Deployment, ReplicaSet, or some other indirect mechanism, then it is the controller that creates the pod, not your user. In those cases the controller should not have access to any privileged PodSecurityPolicies, and you should see the behavior you want.
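A minimal way to try this, assuming the test namespace from the question (the deployment name nginx-test is arbitrary):

$ kubectl -n test create deployment nginx-test --image=nginx
$ kubectl -n test get pod -l app=nginx-test \
    -o jsonpath='{.items[0].metadata.annotations.kubernetes\.io/psp}'

Since the ReplicaSet controller, not your user, creates the pod, the annotation should now name a policy the pod's service account can use (default-psp in the question's setup), and the root nginx container should then fail MustRunAsNonRoot validation at container start instead of running under gce.privileged.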
Alternatively, make sure the users creating unprivileged pods are not subjects of the cluster-admin ClusterRoleBinding, since cluster-admin grants use of every PodSecurityPolicy.
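One way to audit who is covered by that binding (by default it is bound to the system:masters group; note that on GKE some access also comes from Cloud IAM and will not show up in RBAC output):

$ kubectl describe clusterrolebinding cluster-admin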
There are known issues with this approach, and the Kubernetes community is working on resolving these before PodSecurityPolicy is promoted from beta to general availability.