Configuring PodSecurityPolicy on a kubeadm cluster

4/15/2018

I have tried to set up PodSecurityPolicy on a 1.10.1 cluster installed on Ubuntu 16.04 with kubeadm, following the instructions at https://kubernetes.io/docs/concepts/policy/pod-security-policy/

So I altered the apiserver manifest on the master at /etc/kubernetes/manifests/kube-apiserver.yaml, adding ",PodSecurityPolicy" to the --admission-control arg.
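
For reference, a sketch of roughly what that edit looks like — the plugins ahead of PodSecurityPolicy are only indicative of what kubeadm 1.10 puts there by default and may differ on your cluster:

# /etc/kubernetes/manifests/kube-apiserver.yaml (fragment, other flags omitted)
spec:
  containers:
  - command:
    - kube-apiserver
    # existing plugins left as-is, PodSecurityPolicy appended at the end
    - --admission-control=Initializers,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,PodSecurityPolicy

The kubelet watches the manifests directory, so saving the file recreates the apiserver static pod automatically.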

When I do this and run kubectl get pods -n kube-system, the api-server is not listed. Obviously I have managed to hit a running instance of the apiserver, as I get a list of all the other pods in the kube-system namespace.

I can see that a new docker container has been started with the PodSecurityPolicy admission controller enabled, and it is obviously serving kubectl requests.

When I check the kubelet logs with journalctl -u kubelet, I can see:

Apr 15 18:14:23 pmcgrath-k8s-3-master kubelet[993]: E0415 18:14:23.087361 993 kubelet.go:1617] Failed creating a mirror pod for "kube-apiserver-pmcgrath-k8s-3-master_kube-system(46dbb13cd345f9fbb9e18e2229e2edd1)": pods "kube-apiserver-pmcgrath-k8s-3-master" is forbidden: unable to validate against any pod security policy: []

I have already added a privileged PSP, created a cluster role and binding, and confirmed that the PSP is working.
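
For context, the objects I mean are along these lines — a minimal sketch only; the names, and the groups the binding grants access to, are illustrative rather than my exact manifests:

# privileged-psp.yaml - a wide-open PSP plus RBAC allowing kube-system workloads to use it
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: privileged
spec:
  privileged: true
  allowPrivilegeEscalation: true
  allowedCapabilities: ['*']
  hostNetwork: true
  hostIPC: true
  hostPID: true
  hostPorts:
  - min: 0
    max: 65535
  volumes: ['*']
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-privileged
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  resourceNames: ['privileged']
  verbs: ['use']
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: psp-privileged-kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-privileged
subjects:
# all service accounts in kube-system plus the nodes group, so control-plane,
# CNI and DNS pods can be admitted against this policy
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts:kube-system
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes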

Just not sure why the kubelet gives this error for the apiserver pod, which is why it does not appear in the pod list. I would have thought the kubelet creates this pod, and I'm not sure whether I have to create a role binding for the apiserver, controller manager, scheduler and kube-dns.

There are no docs indicating how to deal with this. I presume it is a chicken-and-egg situation, where I have to bootstrap the cluster, add some PSPs, ClusterRoles and ClusterRoleBindings, and only then mutate the admission-control arg for the api server.

Anyone have the same issue, or any pointers on this?

Thanks Pat

-- pmcgrath
kubernetes
kubernetes-security

2 Answers

4/18/2018

It looks like one can't just add PodSecurityPolicy to the end of the plugins list. For example, the script that brings a local cluster up chooses only one security admission plugin from the list of options (SecurityContextDeny, PodSecurityPolicy, NodeRestriction), since they might cause conflicts when used together.

The create_psp_policy function is called after start_apiserver, so it seems you can create policies, roles, and bindings after changing the api-server parameters; some pods only become Running once all the necessary objects are in place.

Please see the file https://github.com/kubernetes/kubernetes/blob/master/hack/local-up-cluster.sh

starting from line 412:

function start_apiserver {
    security_admission=""
    if [[ -n "${DENY_SECURITY_CONTEXT_ADMISSION}" ]]; then
      security_admission=",SecurityContextDeny"
    fi
    if [[ -n "${PSP_ADMISSION}" ]]; then
      security_admission=",PodSecurityPolicy"
    fi
    if [[ -n "${NODE_ADMISSION}" ]]; then
      security_admission=",NodeRestriction"
    fi
    if [ "${ENABLE_POD_PRIORITY_PREEMPTION}" == true ]; then
      security_admission=",Priority"
      if [[ -n "${RUNTIME_CONFIG}" ]]; then
          RUNTIME_CONFIG+=","
      fi
      RUNTIME_CONFIG+="scheduling.k8s.io/v1alpha1=true"
    fi
    # Admission Controllers to invoke prior to persisting objects in cluster
    #
    # The order defined here does not matter.

    ENABLE_ADMISSION_PLUGINS=Initializers,LimitRanger,ServiceAccount${security_admission},DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,PodPreset,StorageObjectInUseProtection

    <skipped>
}
<skipped>

starting from line 864:

function create_psp_policy {
    echo "Create podsecuritypolicy policies for RBAC."
    ${KUBECTL} --kubeconfig="${CERT_DIR}/admin.kubeconfig" create -f ${KUBE_ROOT}/examples/podsecuritypolicy/rbac/policies.yaml
    ${KUBECTL} --kubeconfig="${CERT_DIR}/admin.kubeconfig" create -f ${KUBE_ROOT}/examples/podsecuritypolicy/rbac/roles.yaml
    ${KUBECTL} --kubeconfig="${CERT_DIR}/admin.kubeconfig" create -f ${KUBE_ROOT}/examples/podsecuritypolicy/rbac/bindings.yaml
}

<skipped>

starting from line 986:

echo "Starting services now!"
if [[ "${START_MODE}" != "kubeletonly" ]]; then
  start_etcd
  set_service_accounts
  start_apiserver
  start_controller_manager
  if [[ "${EXTERNAL_CLOUD_PROVIDER:-}" == "true" ]]; then
    start_cloud_controller_manager
  fi
  start_kubeproxy
  start_kubedns
  start_kubedashboard
fi

if [[ "${START_MODE}" != "nokubelet" ]]; then
  ## TODO remove this check if/when kubelet is supported on darwin
  # Detect the OS name/arch and display appropriate error.
    case "$(uname -s)" in
      Darwin)
        warning "kubelet is not currently supported in darwin, kubelet aborted."
        KUBELET_LOG=""
        ;;
      Linux)
        start_kubelet
        ;;
      *)
        warning "Unsupported host OS.  Must be Linux or Mac OS X, kubelet aborted."
        ;;
    esac
fi

if [[ -n "${PSP_ADMISSION}" && "${AUTHORIZATION_MODE}" = *RBAC* ]]; then
  create_psp_policy
fi

if [[ "$DEFAULT_STORAGE_CLASS" = "true" ]]; then
  create_storage_class
fi

print_success

<skipped>
-- VAS
Source: StackOverflow

9/21/2018

I have written a blog post on how I figured this stuff out; the short answer was:

  • On the master, run kubeadm init with the PodSecurityPolicy admission controller enabled (see the sketch after this list)
  • Add some pod security policies with RBAC config - enough to allow CNI and DNS etc. to start
    • CNI daemonsets will not start without this
  • Complete configuring the cluster, adding nodes via kubeadm join
  • As you add more workloads to the cluster, check whether you need additional pod security policies and RBAC configuration for them
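
A sketch of the first step, assuming a kubeadm config file is used to enable the admission controller at init time — the config apiVersion/kind and the flag name vary with version (kube-apiserver 1.10 uses --admission-control, 1.11+ uses --enable-admission-plugins), and the plugin list shown is only indicative of kubeadm's defaults:

# kubeadm-master.yaml (illustrative)
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
kubernetesVersion: v1.10.1
apiServerExtraArgs:
  # kubeadm's usual defaults, with PodSecurityPolicy appended
  admission-control: "Initializers,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,PodSecurityPolicy"

You would then run kubeadm init --config kubeadm-master.yaml and apply the privileged PSP, roles and bindings straight away, so the CNI and DNS pods can be admitted; the blog post linked below walks through the full sequence.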

See https://pmcgrath.net/using-pod-security-policies-with-kubeadm

-- pmcgrath
Source: StackOverflow