Kubernetes network policy egress ports

8/28/2019

I have the following network policy for restricting access to a frontend service page:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  namespace: namespace-a
  name: allow-frontend-access-from-external-ip
spec:
  podSelector:
    matchLabels:
      app: frontend-service
  ingress:
    - from:
        - ipBlock:
            cidr: 0.0.0.0/0
      ports:
        - protocol: TCP
          port: 443
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
      ports:
        - protocol: TCP
          port: 443

My question is: can I enforce HTTPS with my egress rule (the port restriction to 443), and if so, how does this work? Assuming a client connects to the frontend-service, the client picks a random ephemeral port on its machine for this connection. How does Kubernetes know about that port? Or is there some kind of port mapping in the cluster, so that the traffic back to the client is on port 443 and gets mapped back to the client's original port when it leaves the cluster?

-- ItFreak
kubernetes
kubernetes-networkpolicy

1 Answer

9/20/2019

You might have a slightly wrong understanding of how the network policy (NP) works.

This is how you should interpret this section:

egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
      ports:
        - protocol: TCP
          port: 443

It means: allow outgoing TCP traffic from the selected pods (app: frontend-service) to any destination within the 0.0.0.0/0 CIDR, but only to destination port 443. The port in an egress rule always refers to the destination port of the connection, not to the ephemeral source port the pod happens to use.
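So yes, an egress rule like yours does pin the pods' outgoing traffic to destination port 443, which in practice means HTTPS endpoints only. A minimal sketch of such a policy could look like the one below; the policy name is a placeholder, and the extra rule for port 53 is an assumption that your pods still need DNS lookups (drop it if they don't):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  namespace: namespace-a
  name: restrict-frontend-egress-to-https
spec:
  podSelector:
    matchLabels:
      app: frontend-service
  policyTypes:
    - Egress
  egress:
    # Outgoing connections are allowed only when the destination port is 443/TCP
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
      ports:
        - protocol: TCP
          port: 443
    # Assumption: the pods still need DNS, so destination port 53 stays open to any resolver
    - ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53

With the usual NetworkPolicy implementations (Calico, Cilium, and the like) policies are enforced per connection, so the reply packets of a connection that was allowed out on port 443 are let back in automatically; the client's ephemeral port never has to appear in the policy.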

The thing you are asking about,

how does Kubernetes know about that port? Or is there some kind of port mapping in the cluster, so that the traffic back to the client is on port 443 and gets mapped back to the client's original port when it leaves the cluster?

is handled by the node's SNAT rules (set up by kube-proxy or the network plugin, depending on your cluster) in the following way:

For traffic that goes from a pod to external addresses, Kubernetes simply uses SNAT (source network address translation): it replaces the pod's internal source IP:port with the host's IP:port. When the return packet comes back to the host, the host rewrites the destination back to the pod's IP:port and forwards the packet to the original pod. For example, a connection from 10.244.1.5:34567 to 203.0.113.7:443 leaves the node as <node IP>:<node port> -> 203.0.113.7:443, and the reply arriving at <node IP>:<node port> is translated back to 10.244.1.5:34567. The whole process is transparent to the original pod, which doesn't know about the address translation at all.

Take a look at Kubernetes networking basics for a better understanding.

-- A_Suh
Source: StackOverflow