What is the best way to define a rule that allows egress only to the kube-apiserver with a Network Policy?
While there's a Service resource for the kube-apiserver, there are no Pods behind it, so as far as I know this can't be done with labels. With IP whitelisting, this isn't guaranteed to work across clusters. Is there any recommended practice here?
Two options come to my mind.
1) Create a NetworkPolicy with a default deny-all egress rule plus an exception for the kube-apiserver IP range.
Since the apiserver's endpoint IP might change after a restart, it is better to use an IP range. In that case, you need to whitelist both the kubernetes Service IP and the Endpoint IP (see "How to determine the kube-apiserver IP" below).
The docs provide an example of denying all egress traffic. You will need to modify that example to add these addresses as an exception. It will look like this:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  # optionally add a namespace here
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: <kube-apiserver-IP-range>
    ports:
    - port: <svc-port>
      protocol: <svc-protocol>
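For example, using the Service IP and Endpoint IP from the kubeadm cluster shown below (10.96.0.1 and 10.166.0.30; your values will differ), the egress section might look like this:
  egress:
  - to:
    - ipBlock:
        cidr: 10.96.0.1/32     # kubernetes Service ClusterIP
    - ipBlock:
        cidr: 10.166.0.30/32   # apiserver Endpoint IP
    ports:
    - port: 443                # Service port
      protocol: TCP
    - port: 6443               # Endpoint (target) port
      protocol: TCP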
I would advise you to check this article.
2) Create a Service with a static IP which forwards traffic to the kube-apiserver, i.e. a Service backed by a pod running kubectl proxy to the apiserver.
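A minimal sketch of what that could look like, assuming an image that ships kubectl (bitnami/kubectl here) and a ServiceAccount with sufficient RBAC; the name apiserver-proxy and the static ClusterIP 10.96.0.100 are illustrative:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apiserver-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apiserver-proxy
  template:
    metadata:
      labels:
        app: apiserver-proxy
    spec:
      serviceAccountName: apiserver-proxy        # assumed to exist, with RBAC for what you proxy to
      containers:
      - name: proxy
        image: bitnami/kubectl                   # any image that ships kubectl works
        command: ["kubectl", "proxy", "--address=0.0.0.0", "--accept-hosts=.*", "--port=8001"]
        ports:
        - containerPort: 8001
---
apiVersion: v1
kind: Service
metadata:
  name: apiserver-proxy
spec:
  clusterIP: 10.96.0.100                         # static IP, must be inside the service CIDR
  selector:
    app: apiserver-proxy
  ports:
  - port: 8001
    targetPort: 8001
An advantage of this approach: since the proxy runs as ordinary pods with labels, your egress policy can target it with a podSelector instead of an ipBlock.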
How to determine the kube-apiserver IP
In each cluster you will see this service by default. For example, in a kubeadm cluster it is called kubernetes:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 15d
If you describe it, you will get the endpoint:
$ kubectl describe svc kubernetes
Name: kubernetes
Namespace: default
Labels: component=apiserver
provider=kubernetes
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP: 10.96.0.1
Port: https 443/TCP
TargetPort: 6443/TCP
Endpoints: 10.166.0.30:6443
Session Affinity: None
Events: <none>
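If you only need the endpoint IP(s), a jsonpath query avoids parsing the describe output (a small convenience, not required by either approach):
$ kubectl get endpoints kubernetes -o jsonpath='{.subsets[*].addresses[*].ip}'
10.166.0.30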
If you check the pods in the kube-system namespace, most of them have an IP matching the kubernetes service endpoint.
$ kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-5644d7b6d9-7sm4d 1/1 Running 3 15d 10.32.0.3 ubus-kubeadm <none> <none>
coredns-5644d7b6d9-g42g6 1/1 Running 3 15d 10.32.0.2 ubus-kubeadm <none> <none>
etcd-ubus-kubeadm 1/1 Running 3 15d 10.166.0.30 ubus-kubeadm <none> <none>
kube-apiserver-ubus-kubeadm 1/1 Running 3 15d 10.166.0.30 ubus-kubeadm <none> <none>
kube-controller-manager-ubus-kubeadm 1/1 Running 3 15d 10.166.0.30 ubus-kubeadm <none> <none>
kube-proxy-57r9m 1/1 Running 3 15d 10.166.0.30 ubus-kubeadm <none> <none>
kube-scheduler-ubus-kubeadm 1/1 Running 3 15d 10.166.0.30 ubus-kubeadm <none> <none>
weave-net-l6b5x 2/2 Running 9 15d 10.166.0.30 ubus-kubeadm <none> <none>
You have to use the IP address of the apiserver. You cannot use labels.
To find the IP address of the apiserver, run: kubectl cluster-info
Look for a line like this in the output: Kubernetes master is running at https://<ip>
That is the IP address of your apiserver.
The network policy should look like this (assuming the apiserver IP is 34.76.197.27):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-apiserver
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 34.76.197.27/32
    ports:
    - protocol: TCP
      port: 443
The policy above applies to all pods in the namespace it is applied to.
To select specific pods, edit the podSelector section with the labels of the pods that require apiserver access:
  podSelector:
    matchLabels:
      app: apiserver-allowed
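Assuming a pod named my-workload (a hypothetical name), you could attach that label with:
$ kubectl label pod my-workload app=apiserver-allowed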
Remember that the default egress policy is ALLOW ALL, which means other pods will still have access to the apiserver.
You can change this behavior by adding a "BLOCK ALL" egress policy per namespace, but remember not to block access to the DNS server and other essential services.
For more info see "Egress and DNS (Pitfall!)" in this post.
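A sketch of such a per-namespace deny-all policy that still allows DNS, assuming the cluster's DNS pods carry the common k8s-app: kube-dns label (true for kube-dns and CoreDNS deployments, but verify in your cluster):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress-allow-dns
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53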
Note that in some cases there may be more than one apiserver (for scalability), in which case you will need to add all of their IP addresses.
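In that case the egress section would simply list one ipBlock per apiserver; the second IP below is purely illustrative:
  egress:
  - to:
    - ipBlock:
        cidr: 34.76.197.27/32
    - ipBlock:
        cidr: 34.76.197.28/32   # second apiserver, illustrative
    ports:
    - protocol: TCP
      port: 443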