How to set up an audit policy in kube-apiserver?

4/6/2018

I've been reading about how to set up auditing in Kubernetes here, which basically says that in order to enable auditing I have to specify a YAML policy file to kube-apiserver when starting it up, using the flag --audit-policy-file.

Now, there are two things I don't understand about how to achieve this:

  1. What's the proper way to add/update a startup parameter of the command that runs kube-apiserver? I cannot update the pod, so do I need to clone the pod somehow? Or should I use kops edit cluster as suggested here: https://github.com/kubernetes/kops/blob/master/docs/cluster_spec.md#kubeapiserver? Surprisingly, Kubernetes does not create a Deployment for this; should I create it myself?
  2. In particular, to set up auditing I have to pass a YAML file as a startup argument. How do I upload or otherwise make this YAML file available so that I can pass --audit-policy-file=/some/path/my-audit-file.yaml? Do I create a ConfigMap with it and/or a volume? How can I reference this file afterwards, so it's available in the filesystem when the kube-apiserver startup command runs? (A minimal sketch of what I mean by the policy file is below, just for context.)
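
For context, a minimal policy file might look like the sketch below (contents are illustrative only, based on my reading of the audit docs; the apiVersion may need to be audit.k8s.io/v1beta1 on older clusters):

    # minimal example audit policy -- contents are illustrative only
    apiVersion: audit.k8s.io/v1
    kind: Policy
    rules:
      # example rule: skip logging read-only requests
      - level: None
        verbs: ["get", "list", "watch"]
      # log request metadata for everything else
      - level: Metadata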

Thanks!

-- jotadepicas
kops
kubernetes
kubernetes-security

1 Answer

4/8/2018

What's the proper way to add/update a startup parameter of the command that runs kube-apiserver?

In 99% of the ways that I have seen Kubernetes clusters deployed, the kubelet binary on the Nodes reads the Kubernetes descriptors in /etc/kubernetes/manifests on the host filesystem and runs the Pods described therein. So the answer to the first question is to edit -- or cause whatever configuration management tool you are using to update -- the file /etc/kubernetes/manifests/kube-apiserver.yaml (or a very similarly named file). If you have multiple master Nodes, you will need to repeat that process on every master Node. In most cases the kubelet will see the change to the manifest file and restart the apiserver's Pod automatically, but in the worst case restarting kubelet may be required.
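
As a rough sketch only (the exact layout, image tag, and existing flags will vary by distribution), the edit amounts to appending the audit flags to the apiserver's command in that static Pod manifest. The /etc/kubernetes/audit-policy.yaml and /var/log/kubernetes/audit paths below are assumptions for illustration:

    # /etc/kubernetes/manifests/kube-apiserver.yaml (abridged sketch)
    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver
      namespace: kube-system
    spec:
      containers:
      - name: kube-apiserver
        image: k8s.gcr.io/kube-apiserver:v1.10.0   # whatever your cluster already runs
        command:
        - kube-apiserver
        # ... all the existing flags stay as they are ...
        - --audit-policy-file=/etc/kubernetes/audit-policy.yaml
        - --audit-log-path=/var/log/kubernetes/audit/audit.log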

Be sure to watch the output of the newly started apiserver's docker container to check for errors, and only roll that change out to the other apiserver manifest files after you have confirmed it works correctly.

How can I reference this file afterwards, so it's available in the filesystem when the kube-apiserver startup command runs?

Roughly the same answer: either via SSH or any on-machine configuration management tool. The only asterisk is that, since the apiserver's manifest file is a normal Pod declaration, you will want to be mindful of the volumes: and volumeMounts: stanzas just like you would for any other in-cluster Pod. That is likely to be fine if your audit-policy.yaml lives in or under /etc/kubernetes, since that directory is already volume mounted into the Pod (again: most of the time). It's writing out the audit log file that will most likely require changes, since unlike the rest of the config the log file path cannot be readOnly: true, and thus will at minimum require a second volumeMount without readOnly: true, and likely a second hostPath volume to make the log directory visible inside the Pod.
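
To make that concrete, here is a sketch of the extra stanzas (the directory names are assumptions; they should match whatever paths you put in the --audit-policy-file and --audit-log-path flags). The policy file can ride along in the existing read-only /etc/kubernetes mount if it lives there; only the log directory needs its own writable hostPath:

        volumeMounts:
        # ... the existing mounts (including /etc/kubernetes) stay as they are ...
        - name: audit-log
          mountPath: /var/log/kubernetes/audit   # no readOnly: true; the apiserver must write here
      volumes:
      # ... the existing volumes stay as they are ...
      - name: audit-log
        hostPath:
          path: /var/log/kubernetes/audit
          type: DirectoryOrCreate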

I actually haven't tried using a ConfigMap for the apiserver itself, as that's very meta. But, in a multi-master setup, I don't know that it's impossible, either. Just be cautious, because in such a self-referential setup it would be very easy to bring down all masters with a bad configuration since they wouldn't be able to communicate with themselves to read the updated config.

-- mdaniel
Source: StackOverflow