I have two pods named payroll and mysql, labelled name=payroll and name=mysql respectively. There's another pod named internal with the label name=internal. I am trying to allow egress traffic from internal to the other two pods while allowing all ingress traffic. My NetworkPolicy looks like this:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: internal-policy
spec:
  podSelector:
    matchLabels:
      name: internal
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - {}
  egress:
  - to:
    - podSelector:
        matchExpressions:
        - {key: name, operator: In, values: [payroll, mysql]}
    ports:
    - protocol: TCP
      port: 8080
    - protocol: TCP
      port: 3306
This does not match the two pods payroll and mysql. What am I doing wrong?
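One way to check whether the matchExpressions selector itself matches the pods (this check is my addition, not part of the original question) is kubectl's equivalent set-based label selector; if it lists both pods, the selector expression is not the problem:

kubectl get pods -l 'name in (payroll, mysql)'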
The following works:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: internal-policy
spec:
  podSelector:
    matchLabels:
      name: internal
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - {}
  egress:
  - to:
    - podSelector:
        matchLabels:
          name: payroll
    - podSelector:
        matchLabels:
          name: mysql
    ports:
    - protocol: TCP
      port: 8080
    - protocol: TCP
      port: 3306
What is the best way to write a NetworkPolicy, and why is the first one incorrect? I am also wondering why the to field is an array when the podSelector inside it is also an array. Aren't they the same? Multiple podSelector entries or multiple to entries — using either one works.
This does not match the two pods payroll and mysql. What am I doing wrong?

The two podSelector entries should be at the same level, as follows:

- to:
  - podSelector:
      matchLabels:
        name: payroll
  - podSelector:
      matchLabels:
        name: mysql
What is the best way to write a NetworkPolicy?

That depends on what you need: the second yaml is fine if you intend to open ports 8080 and 3306 on BOTH pods; otherwise it would be better to create two rules, to avoid leaving unnecessary open ports.

I also am wondering why the to field is an array while the podSelector is also an array inside it? I mean they are the same, right? Multiple podSelector or multiple to fields. Using one of them works.
From the NetworkPolicySpec v1 networking API reference:

egress (NetworkPolicyEgressRule array): List of egress rules to be applied to the selected pods. Outgoing traffic is allowed if there are no NetworkPolicies selecting the pod, OR if the traffic matches at least one egress rule across all of the NetworkPolicy objects whose podSelector matches the pod.

Also keep in mind that each egress rule in this list carries its own ports array, so the two forms are not quite the same: multiple podSelector entries under one to share a single ports list, while separate egress rules (each with its own to) can specify different ports.
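For example, splitting the policy into two egress rules lets each destination expose only its own port (a sketch, assuming payroll should only be reachable on 8080 and mysql only on 3306):

egress:
- to:
  - podSelector:
      matchLabels:
        name: payroll
  ports:
  - protocol: TCP
    port: 8080
- to:
  - podSelector:
      matchLabels:
        name: mysql
  ports:
  - protocol: TCP
    port: 3306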
Why is the first one incorrect?
Reproduction: I used nginx images because they are easy to test, and changed the ports to 80 in the NetworkPolicy. I'm calling your first yaml internal-original.yaml and the second one you posted second-internal.yaml:

$ cat internal-original.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: internal-original
spec:
  podSelector:
    matchLabels:
      name: internal
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - {}
  egress:
  - to:
    - podSelector:
        matchExpressions:
        - {key: name, operator: In, values: [payroll, mysql]}
    ports:
    - protocol: TCP
      port: 80
$ cat second-internal.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: internal-policy
spec:
  podSelector:
    matchLabels:
      name: internal
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - {}
  egress:
  - to:
    - podSelector:
        matchLabels:
          name: payroll
    - podSelector:
        matchLabels:
          name: mysql
    ports:
    - protocol: TCP
      port: 80
$ kubectl run mysql --generator=run-pod/v1 --labels="name=mysql" --image=nginx
pod/mysql created
$ kubectl run internal --generator=run-pod/v1 --labels="name=internal" --image=nginx
pod/internal created
$ kubectl run payroll --generator=run-pod/v1 --labels="name=payroll" --image=nginx
pod/payroll created
$ kubectl run other --generator=run-pod/v1 --labels="name=other" --image=nginx
pod/other created
$ kubectl expose pod mysql --port=80
service/mysql exposed
$ kubectl expose pod payroll --port=80
service/payroll exposed
$ kubectl expose pod other --port=80
service/other exposed
Before applying the networkpolicy, I'll log into the internal pod to download wget, because after that outside access will be blocked:

$ kubectl exec internal -it -- /bin/bash
root@internal:/# apt update
root@internal:/# apt install wget -y
root@internal:/# exit
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP
internal 1/1 Running 0 62s 10.244.0.192
mysql 1/1 Running 0 74s 10.244.0.141
other 1/1 Running 0 36s 10.244.0.216
payroll 1/1 Running 0 48s 10.244.0.17
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mysql ClusterIP 10.101.209.87 <none> 80/TCP 23s
other ClusterIP 10.103.39.7 <none> 80/TCP 9s
payroll ClusterIP 10.109.102.5 <none> 80/TCP 14s
$ kubectl get networkpolicy
No resources found in default namespace.
$ kubectl apply -f internal-original.yaml
networkpolicy.networking.k8s.io/internal-original created
$ kubectl exec internal -it -- /bin/bash
root@internal:/# wget --spider --timeout=1 http://10.101.209.87
Spider mode enabled. Check if remote file exists.
--2020-06-08 18:17:55-- http://10.101.209.87/
Connecting to 10.101.209.87:80... connected.
HTTP request sent, awaiting response... 200 OK
root@internal:/# wget --spider --timeout=1 http://10.109.102.5
Spider mode enabled. Check if remote file exists.
--2020-06-08 18:18:04-- http://10.109.102.5/
Connecting to 10.109.102.5:80... connected.
HTTP request sent, awaiting response... 200 OK
root@internal:/# wget --spider --timeout=1 http://10.103.39.7
Spider mode enabled. Check if remote file exists.
--2020-06-08 18:18:08-- http://10.103.39.7/
Connecting to 10.103.39.7:80... failed: Connection timed out.
$ kubectl get networkpolicy
NAME POD-SELECTOR AGE
internal-original name=internal 96s
$ kubectl delete networkpolicy internal-original
networkpolicy.networking.k8s.io "internal-original" deleted
$ kubectl apply -f second-internal.yaml
networkpolicy.networking.k8s.io/internal-policy created
$ kubectl exec internal -it -- /bin/bash
root@internal:/# wget --spider --timeout=1 http://10.101.209.87
Spider mode enabled. Check if remote file exists.
--2020-06-08 17:18:24-- http://10.101.209.87/
Connecting to 10.101.209.87:80... connected.
HTTP request sent, awaiting response... 200 OK
root@internal:/# wget --spider --timeout=1 http://10.109.102.5
Spider mode enabled. Check if remote file exists.
--2020-06-08 17:18:30-- http://10.109.102.5/
Connecting to 10.109.102.5:80... connected.
HTTP request sent, awaiting response... 200 OK
root@internal:/# wget --spider --timeout=1 http://10.103.39.7
Spider mode enabled. Check if remote file exists.
--2020-06-08 17:18:35-- http://10.103.39.7/
Connecting to 10.103.39.7:80... failed: Connection timed out.
As the outputs show, with both policies traffic to payroll and mysql is allowed and traffic to other is blocked, so the matchExpressions form matches the two pods just as well in this reproduction.
Note: If you wish to allow the pods to resolve DNS names, you can follow this guide: Allow DNS Egress Traffic
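For reference, a minimal sketch of such a DNS egress rule (my assumption here: the cluster DNS pods carry the standard k8s-app: kube-dns label; the empty namespaceSelector matches all namespaces):

egress:
- to:
  - namespaceSelector: {}
    podSelector:
      matchLabels:
        k8s-app: kube-dns
  ports:
  - protocol: UDP
    port: 53
  - protocol: TCP
    port: 53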
If you have any questions, let me know in the comments.