How can I write a minimal NetworkPolicy to firewall a Kubernetes application with a Service of type LoadBalancer using Calico?

6/28/2018

I have a Kubernetes cluster running Calico as the overlay and NetworkPolicy implementation configured for IP-in-IP encapsulation and I am trying to expose a simple nginx application using the following Service:

apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx

I am trying to write a NetworkPolicy that only allows connections via the load balancer. On a cluster without an overlay, this can be achieved by allowing connections from the CIDR used to allocate IPs to the worker instances themselves - this allows a connection to hit the Service's NodePort on a particular worker and be forwarded to one of the containers behind the Service via IPTables rules. However, when Calico is configured for IP-in-IP, connections made via the NodePort use Calico's IP-in-IP tunnel IP address as the source address for cross-node communication, as reported in the ipv4IPIPTunnelAddr field on the Calico Node object (I deduced this by observing the source IP of connections to the nginx application made via the load balancer). Therefore, my NetworkPolicy needs to allow such connections.
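For reference, the non-overlay approach described above might look like the following sketch, where 10.0.0.0/16 is an assumed CIDR for the worker instances (substitute your own node subnet):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: nginx-allow-from-nodes
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: nginx
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        # Assumed worker node CIDR - NodePort traffic is SNATed to a node IP
        cidr: 10.0.0.0/16
    ports:
    - protocol: TCP
      port: 80

With IP-in-IP enabled, this rule does not match, because the source address seen by the backend Pod is the node's tunnel IP (drawn from the Pod CIDR), not its instance IP.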

My question is: how can I allow these types of connections without knowing the ipv4IPIPTunnelAddr values beforehand and without allowing connections from all Pods in the cluster (since the ipv4IPIPTunnelAddr values are drawn from the cluster's Pod CIDR range)? As worker instances come and go, the list of such IPs will surely change, and I don't want my NetworkPolicy rules to depend on them.

  • Calico version: 3.1.1
  • Kubernetes version: 1.9.7
  • Etcd version: 3.2.17
  • Cloud provider: AWS
-- dippynark
kubernetes
kubernetes-networkpolicy
project-calico

1 Answer

6/29/2018

I’m afraid we don’t have a simple way to match the tunnel IPs dynamically right now. If possible, the best solution would be to move away from IPIP; once you remove that overlay, everything gets a lot simpler.
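If your underlying network routes Pod IPs directly (e.g. all nodes in one AWS subnet), disabling IP-in-IP amounts to setting ipipMode on the IP pool via calicoctl. A sketch, assuming the default pool name and CIDR (check yours with `calicoctl get ippool -o yaml` before applying):

apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  # Assumed Pod CIDR - must match your cluster's existing pool
  cidr: 192.168.0.0/16
  # Never = no encapsulation; CrossSubnet encapsulates only across subnets
  ipipMode: Never
  natOutgoing: true

Apply with `calicoctl apply -f ippool.yaml`. Once traffic is no longer tunnelled, NodePort connections keep a node IP as their source, so the node-CIDR-based NetworkPolicy works as on a non-overlay cluster.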

In case you’re wondering, we need to force the nodes to use the tunnel IP because, if you’re using IPIP, we assume that your network doesn’t allow direct pod-to-node return traffic (since the network won’t be expecting the pod IP, it may drop the packets).

-- Fasaxc
Source: StackOverflow