Pod Security Policy evaluation order (multiple roles)

4/3/2020

Running Kubernetes 1.15 on AWS EKS.

By default AWS provides the eks.privileged PSP (documented here: https://docs.aws.amazon.com/eks/latest/userguide/pod-security-policy.html), which is granted to all authenticated users.
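
For reference, the default policy and the ClusterRoleBinding that grants it to every authenticated user can be inspected with:

kubectl get psp eks.privileged
kubectl describe clusterrolebinding eks:podsecuritypolicy:authenticated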

I then create a more restrictive PSP, eks.restricted:

---
# restricted pod security policy

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  labels:
    kubernetes.io/cluster-service: "true"
    eks.amazonaws.com/component: pod-security-policy
  name: eks.restricted
spec:
  allowPrivilegeEscalation: false
  allowedCapabilities:
  - '*'
  fsGroup:
    rule: RunAsAny
  hostPorts:
  - max: 65535
    min: 0
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - '*'

The above is a non-mutating PSP. I also modify the default eks.privileged PSP to make it mutating by adding the following annotations:

seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
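
Either kubectl edit psp eks.privileged or kubectl annotate can be used to add them; roughly:

kubectl annotate psp eks.privileged \
  seccomp.security.alpha.kubernetes.io/allowedProfileNames=docker/default \
  seccomp.security.alpha.kubernetes.io/defaultProfileName=docker/default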

Finally I update the ClusterRole to add the new PSP I created:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: eks:podsecuritypolicy:privileged
  labels:
    kubernetes.io/cluster-service: "true"
    eks.amazonaws.com/component: pod-security-policy
rules:
- apiGroups:
  - policy
  resourceNames:
  - eks.privileged
  - eks.restricted
  resources:
  - podsecuritypolicies
  verbs:
  - use

What this accomplishes is that eks.restricted becomes the default PSP, due to the fact that it is non-mutating (see https://v1-15.docs.kubernetes.io/docs/concepts/policy/pod-security-policy/#policy-order; the order of the resourceNames list doesn't matter).
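
You can confirm which policy admitted a given pod, since the PSP admission controller records it in the kubernetes.io/psp annotation on the pod; for a compliant pod under this setup it should show eks.restricted:

kubectl get pod <pod-name> -o yaml | grep kubernetes.io/psp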

That is great, but what I am trying to accomplish is to create a single namespace that defaults to eks.restricted while all other namespaces default to eks.privileged.

I attempted to do this as follows.

First I removed eks.restricted from the ClusterRole eks:podsecuritypolicy:privileged, so that eks.privileged is once again the cluster-wide default. Within my namespace I created a new Role:

---

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    eks.amazonaws.com/component: pod-security-policy
    kubernetes.io/cluster-service: "true"
  name: eks:podsecuritypolicy:restricted
  namespace: psp-example
rules:
- apiGroups:
  - policy
  resourceNames:
  - eks.restricted
  resources:
  - podsecuritypolicies
  verbs:
  - use

This Role grants the use verb on the PSP eks.restricted. I then bound this new Role to a ServiceAccount within my example namespace:

---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: psp-restricted
  namespace: psp-example
roleRef:
  kind: Role
  name: eks:podsecuritypolicy:restricted
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: privileged-sa
  namespace: psp-example
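
For completeness, the namespace and the ServiceAccount referenced above already exist; they can be created with something like:

kubectl create namespace psp-example
kubectl create serviceaccount privileged-sa --namespace psp-example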

Finally I created a Deployment that uses this ServiceAccount and violates PSP eks.restricted:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: centos-deployment
  namespace: psp-example
  labels:
    app: centos
spec:
  replicas: 3
  selector:
    matchLabels:
      app: centos
  template:
    metadata:
      labels:
        app: centos
    spec:
      serviceAccountName: privileged-sa
      containers:
      - name: centos
        #image: centos:centos7
        image: datinc/permtest:0
        command:
          - '/bin/sleep'
          - '60000'

My assumption was that this would behave as in my initial example/test at the start of this post. My combined access is to both eks.privileged, due to it being bound to the system:authenticated group, and eks.restricted, bound to the ServiceAccount my Deployment runs under. Since eks.restricted is non-mutating it should be the one that applies, and as such pod creation should fail. But that isn't what happens; the Pods start up just fine.
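
One way to sanity-check the RBAC side is kubectl auth can-i with impersonation (note that access granted through the system:authenticated group only shows up if you also pass --as-group):

# expect "yes" via the RoleBinding above
kubectl auth can-i use podsecuritypolicy/eks.restricted \
  --as=system:serviceaccount:psp-example:privileged-sa -n psp-example

# expect "yes" via the system:authenticated group
kubectl auth can-i use podsecuritypolicy/eks.privileged \
  --as=system:serviceaccount:psp-example:privileged-sa \
  --as-group=system:authenticated -n psp-example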

As a further test I added eks.privileged to the ServiceAccount's Role (listed above), expecting it to behave like my original example. It does not; the Pods are created just fine.

Trying to figure out why this is.

-- user5786359
kubernetes
security

1 Answer

4/4/2020

On AWS EKS, the Pods of your Deployment are not created by your ServiceAccount; they are created by the ReplicaSet controller, which runs as the ServiceAccount replicaset-controller in the kube-system namespace. PSP admission authorizes the creating user as well as the Pod's ServiceAccount, and that controller ServiceAccount is still covered by the ClusterRoleBinding eks:podsecuritypolicy:authenticated, so it is allowed to use eks.privileged. To make your restricted policy apply, you need to remove that access from the ClusterRoleBinding, or delete the binding altogether.
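
You can verify that with impersonation (a sketch; the group flag approximates how the controller's ServiceAccount is authenticated):

kubectl auth can-i use podsecuritypolicy/eks.privileged \
  --as=system:serviceaccount:kube-system:replicaset-controller \
  --as-group=system:authenticated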

See this article for the details: https://dev.to/anupamncsu/pod-security-policy-on-eks-mp9
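
For reference, that binding looks roughly like this (reconstructed from the AWS documentation linked in the question); removing the system:authenticated subject, or deleting the binding, is what stops every authenticated user, including the controller ServiceAccounts, from being granted eks.privileged:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: eks:podsecuritypolicy:authenticated
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: eks:podsecuritypolicy:privileged
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:authenticated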

-- Fauzan
Source: StackOverflow