I'm working on kube-proxy development and I'm at the stage of understanding the purpose and execution of kube-proxy.
I know that kube-proxy adds iptables rules to enable users to access the exposed pods (which is a Kubernetes Service in iptables mode).
What makes me wonder is the fact that those rules are added on the host node where the kube-proxy pod is running, and it's not clear to me how this pod is able to obtain those privileges on the host node.
I have taken a look at the Kubernetes code without success in finding this specific part, so if you have any idea, resource, or documentation that would help me figure this out, it would be appreciated.
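For reference, the rules I mean look roughly like the following (illustrative, not copied from a real node; the cluster IP is made up and the per-service chain suffix, normally a hash, is elided):

```
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m tcp --dport 443 -j KUBE-SVC-...
```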
In my kube-proxy.yaml there is a privilege-related setting, like this:

```yaml
securityContext:
  privileged: true
```

I think this gives kube-proxy enough privilege.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-proxy
    image: gcr.io/google_containers/hyperkube:v1.0.6
    command:
    - /hyperkube
    - proxy
    - --master=http://127.0.0.1:8080
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ssl-certs-host
      readOnly: true
  volumes:
  - hostPath:
      path: /usr/share/ca-certificates
    name: ssl-certs-host
```
According to the Pod Security Policies documentation:

> Privileged - determines if any container in a pod can enable privileged mode. By default a container is not allowed to access any devices on the host, but a "privileged" container is given access to all devices on the host. This allows the container nearly all the same access as processes running on the host. This is useful for containers that want to use linux capabilities like manipulating the network stack and accessing devices.
In other words, it gives the container or the pod (depending on the context) most of the root privileges.
There are many more options for controlling a pod's capabilities in the securityContext section:
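For example (an illustrative sketch, not taken from the actual kube-proxy manifest), instead of `privileged: true` a container can request only specific Linux capabilities; `NET_ADMIN` is the standard capability for manipulating the network stack, though whether it alone is sufficient depends on the workload:

```yaml
securityContext:
  capabilities:
    add:
    - NET_ADMIN   # allows manipulating the network stack (e.g. iptables rules)
    drop:
    - ALL         # drop every other default capability
```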
Consider reading the full article for details and code snippets.