I'm testing the PodSecurityPolicy resource against resources in non-kube-system namespaces.
First, I ensured the PodSecurityPolicy admission plugin is enabled by checking the kube-apiserver process:
kube-apiserver --advertise-address=192.168.56.4 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
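On a kubeadm cluster the API server runs as a static pod, so (assuming the default kubeadm manifest path /etc/kubernetes/manifests) the same flag can also be confirmed straight from the manifest rather than the process list:

grep enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml

This should echo the same --enable-admission-plugins=NodeRestriction,PodSecurityPolicy setting shown above.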
I created a PodSecurityPolicy resource with the policies below:
[root@master manifests]# kubectl get psp -o wide
NAME         PRIV    CAPS   SELINUX    RUNASUSER   FSGROUP    SUPGROUP   READONLYROOTFS   VOLUMES
podsecplcy   false          RunAsAny   RunAsAny    RunAsAny   RunAsAny   true             *
I created a ClusterRole and a ClusterRoleBinding as shown below:
[root@master manifests]# kubectl describe clusterrole non-priv-role
Name:         non-priv-role
Labels:       <none>
Annotations:  <none>
PolicyRule:
  Resources                   Non-Resource URLs  Resource Names  Verbs
  ---------                   -----------------  --------------  -----
  podsecuritypolicies.policy  []                 [podsecplcy]    [list get watch]
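For reference, a ClusterRole with exactly these rules could have been created imperatively like this (a sketch; the post doesn't show the command or manifest actually used):

kubectl create clusterrole non-priv-role --verb=list,get,watch --resource=podsecuritypolicies.policy --resource-name=podsecplcy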
[root@master ~]# kubectl describe clusterrolebinding psprb
Name:         psprb
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  ClusterRole
  Name:  non-priv-role
Subjects:
  Kind            Name     Namespace
  ----            ----     ---------
  ServiceAccount  default  default
[root@master ~]#
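Likewise, the binding above is equivalent to this imperative command (again a sketch, since the command originally used isn't shown):

kubectl create clusterrolebinding psprb --clusterrole=non-priv-role --serviceaccount=default:default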
Below is the pod manifest I used to create the pod:
apiVersion: v1
kind: Pod
metadata:
  name: pod-privileged
spec:
  containers:
  - name: main
    image: alpine
    command: ["/bin/sleep", "999999"]
    securityContext:
      privileged: true
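Assuming this manifest is the kubia-priv-pod.yml file referenced later in this post, it was submitted with something like:

kubectl create -f kubia-priv-pod.yml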
I expected that it would not allow creating a privileged pod in the default namespace.
However, the pod was created and is running fine:
[root@master ~]# kubectl get po
NAME             READY   STATUS    RESTARTS   AGE
pod-privileged   1/1     Running   0          17s
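As a quick sanity check (not part of the original output), the admitted pod can be confirmed to really be privileged by reading its security context back from the API:

kubectl get pod pod-privileged -o jsonpath='{.spec.containers[0].securityContext.privileged}'

which should print true.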
Do I need to create users or groups and assign this ClusterRoleBinding to them in order to test this, or should it already work, given that the ClusterRoleBinding is assigned to the default service account in the default namespace?
Also, how can I check which role and privileges I currently have?
Please find the Kubernetes version and the podsecplcy YAML details below:
[root@master ~]# kubectl get no
NAME         STATUS   ROLES    AGE     VERSION
master.k8s   Ready    master   5d1h    v1.16.2
node1.k8s    Ready    <none>   5d      v1.16.3
node2.k8s    Ready    <none>   4d22h   v1.16.3
[root@master ~]#
[root@master ~]# kubectl cluster-info
Kubernetes master is running at https://192.168.56.4:6443
KubeDNS is running at https://192.168.56.4:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@master ~]#
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"policy/v1beta1","kind":"PodSecurityPolicy","metadata":{"annotations":{},"name":"podsecplcy"},"spec":{"allowPrivilegeEscalation":false,"fsGroup":{"rule":"RunAsAny"},"hostIPC":false,"hostNetwork":false,"hostPID":false,"hostPorts":[{"max":30000,"min":10000}],"privileged":false,"readOnlyRootFilesystem":true,"runAsUser":{"rule":"RunAsAny"},"seLinux":{"rule":"RunAsAny"},"supplementalGroups":{"rule":"RunAsAny"},"volumes":["*"]}}
  creationTimestamp: "2019-11-23T21:31:36Z"
  name: podsecplcy
  resourceVersion: "626637"
  selfLink: /apis/policy/v1beta1/podsecuritypolicies/podsecplcy
  uid: f3316992-0dc7-4c19-852b-57e5babc451f
spec:
  allowPrivilegeEscalation: false
  fsGroup:
    rule: RunAsAny
  hostPorts:
  - max: 30000
    min: 10000
  readOnlyRootFilesystem: true
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - '*'
You haven't prevented privilege escalation, so I would suggest that you set the directive below:
allowPrivilegeEscalation: false
Here is how I validated the PodSecurityPolicy podsecplcy:
[root@master ~]# kubectl get psp
NAME         PRIV    CAPS   SELINUX    RUNASUSER   FSGROUP    SUPGROUP   READONLYROOTFS   VOLUMES
podsecplcy   false          RunAsAny   RunAsAny    RunAsAny   RunAsAny   true             *
Question: Even though we created the PodSecurityPolicy podsecplcy, added it to the ClusterRole non-priv-role, and bound that role via the ClusterRoleBinding psprb, we were able to create a privileged pod without any error, although we expected it to be rejected.
Solution: Whenever we submit a privileged pod manifest, we are not saying as which user, group, or service account the pod should be created. Since I installed the Kubernetes cluster using kubeadm as root, whenever I log in as root on the master node my role is cluster-admin, and I can submit the privileged pod manifest because cluster-admin has full privileges.
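This also answers the earlier question about checking the current role and privileges: kubectl auth can-i evaluates a permission for the current (or an impersonated) identity, so on this cluster, running as cluster-admin, both checks below should come back positive:

kubectl auth can-i '*' '*'
kubectl auth can-i --list

The first should print yes, and the second should list every verb/resource combination the current user is allowed.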
So how do we test it as another user, group, or service account that we want to restrict from creating privileged pods?
If we are on the master node as cluster-admin, we have to submit the kubectl create command with impersonation, as shown below, to test the PodSecurityPolicy.
To check whether we are able to create a privileged pod as a particular service account:
[root@master ~]# kubectl create -f kubia-priv-pod.yml --as=system:serviceaccount:foo:default -n foo
Error from server (Forbidden): error when creating "kubia-priv-pod.yml": pods "pod-privileged" is forbidden: unable to validate against any pod security policy: [spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]
[root@master ~]#
[root@master ~]# kubectl create -f kubia-priv-pod.yml --as=system:serviceaccount:default:default
Error from server (Forbidden): error when creating "kubia-priv-pod.yml": pods "pod-privileged" is forbidden: unable to validate against any pod security policy: [spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]
[root@master ~]#
To check whether we are able to create a privileged pod as a combination of a service account and a group:
[root@master ~]# kubectl create -f kubia-priv-pod.yml --as-group=system:authenticated --as=system:serviceaccount:default:default
Error from server (Forbidden): error when creating "kubia-priv-pod.yml": pods "pod-privileged" is forbidden: unable to validate against any pod security policy: [spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]
[root@master ~]#
To verify whether we are able to create a privileged pod as the cluster-admin group (system:masters):
[root@master ~]# kubectl get clusterrolebindings -o go-template='{{range .items}}{{range .subjects}}{{.kind}}-{{.name}} {{end}} {{" - "}} {{.metadata.name}} {{"\n"}}{{end}}' | grep "^Group-system:masters"
Group-system:masters - cluster-admin
[root@master ~]#
[root@master ~]# kubectl create -f kubia-priv-pod.yml --as-group=system:masters --as=system:serviceaccount:default:default
pod/pod-privileged created
[root@master ~]#
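The same impersonation flags also work with kubectl auth can-i, which illustrates why this last request succeeds: adding the system:masters group makes the request effectively cluster-admin. For example (a check that was not part of the original session):

kubectl auth can-i create pods --as=system:serviceaccount:default:default --as-group=system:masters

should return yes.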
Additional note: If we want to apply this restricted ClusterRole to only a specific user, group, or service account, we have to create the ClusterRoleBinding as below:
kubectl create clusterrolebinding psprb --clusterrole=non-priv-role --user=jaya_vkl@yahoo.co.in
kubectl create clusterrolebinding psprbgrp --clusterrole=non-priv-role --group=system:authenticated
kubectl create clusterrolebinding psprbsa --clusterrole=non-priv-role --serviceaccount=default:default
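After creating such a binding, the grant can be verified with impersonation as well, for example for the user binding above (a sketch; adjust the subject to whichever binding you created):

kubectl auth can-i get podsecuritypolicies.policy/podsecplcy --as=jaya_vkl@yahoo.co.in

which should return yes for the get verb granted by non-priv-role.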