I've been reading about how to set up auditing in Kubernetes here, which basically says that in order to enable audit I have to specify a YAML policy file to kube-apiserver when starting it up, by using the flag --audit-policy-file.

Now, there are two things I don't understand about how to achieve this:

1. What's the proper way to add/update a startup parameter of the command that runs kube-apiserver? Is it kops edit cluster, as suggested here: https://github.com/kubernetes/kops/blob/master/docs/cluster_spec.md#kubeapiserver? Surprisingly, Kubernetes does not create a deployment for this, so should I create it myself?

2. How do I provide the policy file itself, so that I can pass --audit-policy-file=/some/path/my-audit-file.yaml? Do I create a ConfigMap with it and/or a volume? How can I reference this file afterwards, so it's available in the filesystem when the kube-apiserver startup command runs?

Thanks!
What's the proper way to add/update a startup parameter of the command that runs kube-apiserver?
In 99% of the ways that I have seen Kubernetes clusters deployed, the kubelet binary on the Nodes reads the Pod descriptors in /etc/kubernetes/manifests on the host filesystem and runs the Pods described therein. So the answer to the first question is to edit the file /etc/kubernetes/manifests/kube-apiserver.yaml (or a very similarly named file), or have whatever configuration management tool you are using update it for you. If you have multiple master Nodes, you will need to repeat that process on every master Node. In most cases the kubelet binary will see the change to the manifest file and restart the apiserver's Pod automatically, but in the worst case restarting kubelet may be required.
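To give a rough idea, the relevant part of that manifest might end up looking like the sketch below; the image tag, the existing flags, and the file paths are only placeholders for whatever your cluster already uses, and --audit-log-path is only needed if you want the log written to a file on disk (which is what the second half of this answer is about):

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (sketch; your existing content will differ)
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: k8s.gcr.io/kube-apiserver:v1.11.0   # placeholder version
    command:
    - kube-apiserver
    # ... all of the flags already present in your manifest stay as they are ...
    - --audit-policy-file=/etc/kubernetes/audit-policy.yaml
    - --audit-log-path=/var/log/kubernetes/audit/audit.log
```

Once the kubelet restarts the Pod, something like docker ps and docker logs on the master, or kubectl -n kube-system logs kube-apiserver-&lt;node-name&gt; once it is healthy again, will show whether the new flags were accepted.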
Be sure to watch the output of the newly started apiserver's docker container to check for errors, and only roll that change out to the other apiserver manifest files after you have confirmed it works correctly.
How can I reference this file afterwards, so it's available in the filesystem when kube-apiserver startup command runs?
Roughly the same answer: either via ssh or any on-machine configuration management tool. The only asterisk to this one is that since the apiserver's manifest file is a normal Pod declaration, one will wish to be mindful of the volumes: and volumeMounts: entries just like you would for any other in-cluster Pod. That is likely to be fine if your audit-policy.yaml lives in or under /etc/kubernetes, since that directory is already volume-mounted into the Pod (again: most of the time). It's writing out the audit log file that will most likely require changes, since unlike the rest of the config the log file path cannot be readOnly: true, and thus will at minimum require a second volumeMount without readOnly: true, and likely a second hostPath volume to make the log directory visible inside the Pod.
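Concretely, the volume wiring could look something like this sketch; the mount names and paths are only examples and should match wherever you actually put the policy file and want the log to land:

```yaml
# Sketch of the relevant pieces of the kube-apiserver Pod spec (names and paths are examples)
spec:
  containers:
  - name: kube-apiserver
    # ...existing image/command/etc...
    volumeMounts:
    - name: k8s-config          # often already present in some form for /etc/kubernetes
      mountPath: /etc/kubernetes
      readOnly: true
    - name: audit-log           # new: writable mount for the audit log directory
      mountPath: /var/log/kubernetes/audit
  volumes:
  - name: k8s-config
    hostPath:
      path: /etc/kubernetes
  - name: audit-log
    hostPath:
      path: /var/log/kubernetes/audit
```

For completeness, while you are testing, the policy file itself can be as small as a single catch-all rule (the apiVersion may be audit.k8s.io/v1beta1 on older releases):

```yaml
# /etc/kubernetes/audit-policy.yaml (minimal example)
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
```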
I actually haven't tried using a ConfigMap for the apiserver itself, as that's very meta. But in a multi-master setup, I don't know that it's impossible, either. Just be cautious, because in such a self-referential setup it would be very easy to bring down all masters with a bad configuration, since they wouldn't be able to communicate with themselves to read the updated config.