I have a multi-tenant cluster, where multi-tenancy is achieved via namespaces. Every tenant has their own namespace. Pods from a tenant cannot talk to pods of other tenants. However, some pods in every tenant have to expose a service to the internet, using an Ingress.
This is how far I got (I am using Calico):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: tenant1-isolate-namespace
  namespace: tenant1
spec:
  policyTypes:
  - Ingress
  podSelector: {} # Select all pods in this namespace
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: tenant1 # whitelist the current namespace
Deployed for each namespace (tenant1, tenant2, ...), this restricts pod-to-pod communication to each tenant's own namespace. However, it also prevents pods in the kube-system namespace from talking to pods in these namespaces, and the kube-system namespace does not carry any labels by default, so I cannot specifically whitelist it.
I found a (dirty) workaround for this issue by manually giving the kube-system namespace a label:
kubectl label namespace/kube-system permission=talk-to-all
And adding the corresponding whitelist rule to the NetworkPolicy:
...
- from:
  - namespaceSelector:
      matchLabels:
        permission: talk-to-all # allow namespaces that have the "talk-to-all" privilege
Is there a better solution, without manually giving kube-system a label?
Edit: I additionally tried to add an "OR" rule to specifically allow communication from pods that have the label "app=nginx-ingress", but without luck:
- from:
  ...
  - podSelector:
      matchLabels:
        app: nginx-ingress # Allow pods that have the app=nginx-ingress label
For the standard Kubernetes API (apiVersion: networking.k8s.io/v1):
The namespaceSelector is designed to match namespaces by labels only; there is no way to select a namespace by name.
The podSelector can only select pods in the same namespace as the NetworkPolicy object; for pods located in other namespaces, only selection of the whole namespace is possible.
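Note, however, that since Kubernetes 1.11 a single from entry may carry both a namespaceSelector and a podSelector; both must then match, which does let you target specific pods in other (labelled) namespaces. A minimal sketch, reusing the labels from the example below (one combined entry; writing the selectors as two separate "-" entries would OR them instead):
ingress:
- from:
  - namespaceSelector:   # one list item: the namespace label ...
      matchLabels:
        project: myproject
    podSelector:         # ... AND the pod label must both match
      matchLabels:
        role: frontend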
Here is an example of a Kubernetes NetworkPolicy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
Follow this link to read a good explanation of the whole concept of Network policy, or this link to watch the lecture.
For the Calico API (apiVersion: projectcalico.org/v3):
The Calico API gives you more options for writing NetworkPolicy rules, so in some cases you can achieve your goal with less effort. For example, the Calico implementation of network policy supports richer rule types and cluster-wide policies (a sketch follows after the example below). But still, you can match namespaces only by labels.
Consider reading the Calico documentation for the details.
Here is an example of a Calico NetworkPolicy:
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-tcp-6379
  namespace: production
spec:
  selector: role == 'database'
  types:
  - Ingress
  - Egress
  ingress:
  - action: Allow
    protocol: TCP
    source:
      selector: role == 'frontend'
    destination:
      ports:
      - 6379
  egress:
  - action: Allow
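As an illustration of those extra options, Calico's GlobalNetworkPolicy is not namespaced, so a single policy can cover every tenant. A rough sketch, assuming the ingress controller pods carry the label app: nginx-ingress (how it interacts with per-tenant isolation depends on the order values you choose):
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: allow-nginx-ingress-to-all
spec:
  order: 100            # Calico evaluates policies in increasing order
  selector: all()       # applies to all workload endpoints in the cluster
  types:
  - Ingress
  ingress:
  - action: Allow
    source:
      selector: app == 'nginx-ingress'   # assumed label on the ingress controller pods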
Indeed, tenant1 pods will need access to kube-dns in the kube-system namespace specifically.
One approach that does not require labelling the kube-system namespace is the following policy. Note that with this approach kube-dns could be in any namespace, so it may not be suitable for you.
---
# Default deny-all ingress & egress policy, except allow kube-dns.
# All traffic except this must be explicitly allowed.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny-all-except-kube-dns
  namespace: tenant1
spec:
  podSelector: {}
  egress:
  - to:
    - podSelector:
        matchLabels:
          k8s-app: kube-dns
  - ports:
    - protocol: TCP
      port: 53
    - protocol: UDP
      port: 53
  policyTypes:
  - Ingress
  - Egress
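A possible refinement (my own variation, assuming Kubernetes 1.11 or later): a single to entry can combine an empty namespaceSelector with the kube-dns podSelector, which limits DNS egress to pods labelled k8s-app: kube-dns in any namespace, rather than allowing port 53 traffic to any destination:
egress:
- to:
  - namespaceSelector: {}   # any namespace, so kube-system needs no extra label ...
    podSelector:
      matchLabels:
        k8s-app: kube-dns   # ... but only its kube-dns pods
  ports:
  - protocol: TCP
    port: 53
  - protocol: UDP
    port: 53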
Then, you would also need an "allow all within the namespace" policy, as follows:
---
# Allow intra-namespace traffic for development purposes only.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-intra-namespace
  namespace: tenant1
spec:
  podSelector:
    matchLabels:
  ingress:
  - from:
    - podSelector: {}
  egress:
  - to:
    - podSelector: {}
  policyTypes:
  - Ingress
  - Egress
Lastly, you will want to add specific policies, such as an ingress rule for the pods that tenant1 exposes through the Ingress (a sketch of such a rule follows below). It would be better to replace the allow-intra-namespace policy with specific rules suited to the individual pods, which your tenant1 could do.
These have been adapted from this website: https://github.com/ahmetb/kubernetes-network-policy-recipes
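For completeness, here is a hedged sketch of such a specific ingress rule; the exposed: "true", name: ingress-nginx, and app: nginx-ingress labels are assumptions, so substitute whatever labels your exposed pods and your ingress controller actually carry:
---
# Sketch: allow only the (assumed) nginx-ingress controller pods to reach the
# pods that tenant1 exposes through an Ingress.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-from-ingress-controller
  namespace: tenant1
spec:
  podSelector:
    matchLabels:
      exposed: "true"           # assumed label on the pods behind the Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: ingress-nginx   # assumed label on the controller's namespace
      podSelector:
        matchLabels:
          app: nginx-ingress    # assumed label on the controller pods
  policyTypes:
  - Ingress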