How can I ensure that TCP traffic is proxied by the Envoy sidecar when using Istio on Kubernetes?

11/20/2019

Istio on Kubernetes injects an Envoy sidecar to run alongside Pods and implement a service mesh; however, Istio itself cannot guarantee that traffic does not bypass this proxy, and if that happens, Istio security policy is no longer applied.

Therefore, I am trying to understand all the ways in which this bypass could happen (assuming Envoy itself hasn't been compromised) and find ways to prevent them, so that TCP traffic originating from a Pod's network namespace is guaranteed to have gone through Envoy (or at least is much more likely to have done so):

  1. Since (at the time of writing) Envoy does not support UDP (it's nearly there), UDP traffic won't be proxied, so use NetworkPolicy to ensure only TCP traffic is allowed to/from the Pod (e.g. to avoid TCP traffic being tunnelled out via a VPN over UDP)
  2. Drop NET_ADMIN capability to prevent the Pod from reconfiguring the IPTables rules in its network namespace that capture traffic
  3. Drop NET_RAW capability to prevent the Pod from opening raw sockets and bypassing the netfilter hook points that IPTables makes use of
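The three mitigations above can be sketched as manifests. This is a hedged example, not a tested configuration: the `my-app` name, labels, and image are hypothetical, and note that Istio's own `istio-init` container normally needs NET_ADMIN/NET_RAW to install the iptables redirect rules, so the capability drop should apply to the application containers, not the init container.

```yaml
# Point 1: a NetworkPolicy that allows only TCP to/from the pod, so UDP
# (and any other protocol the plugin models) is dropped rather than
# slipping past the TCP-only Envoy proxy. A NetworkPolicyPort with a
# protocol but no port matches all ports of that protocol.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: tcp-only
spec:
  podSelector:
    matchLabels:
      app: my-app          # hypothetical label
  policyTypes: ["Ingress", "Egress"]
  ingress:
  - ports:
    - protocol: TCP
  egress:
  - ports:
    - protocol: TCP
---
# Points 2 and 3: drop NET_ADMIN and NET_RAW on the app container so it
# can neither rewrite the iptables rules in its network namespace nor
# open raw sockets that bypass the netfilter hook points.
apiVersion: v1
kind: Pod
metadata:
  name: my-app             # hypothetical workload
spec:
  containers:
  - name: my-app
    image: my-app:latest   # hypothetical image
    securityContext:
      capabilities:
        drop: ["NET_ADMIN", "NET_RAW"]
```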

The only other attack vector I know of would be a kernel vulnerability - are there any others? Maybe there are other L3/4 protocols that IPTables doesn't recognise or ignores?

I understand that eBPF and Cilium could be used to enforce this interception at the socket level, but I am interested in the case of using vanilla Istio on Kubernetes.

EDIT: I am also assuming the workload does not have Kubernetes API server access

-- dippynark
envoyproxy
istio
kubernetes
kubernetes-networkpolicy
network-security

2 Answers

11/21/2019

Envoy is not designed to be used as a firewall. Service meshes that rely on it such as Istio or Cilium only consider it a bug if you can bypass the policies on the receiving end.

For example, any pod can trivially bypass any Istio or Cilium policies by terminating its own Envoy with curl localhost:15000/quitquitquit and starting a custom proxy on port 15001 that allows everything before Envoy is restarted.
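To illustrate, the whole bypass needs nothing beyond tools commonly present in app images; the port numbers are Istio's defaults at the time (15000 for the Envoy admin interface, 15001 for the outbound redirect target), and `socat` here stands in for any permissive proxy:

```shell
# Run from inside the app container, which shares the pod's network
# namespace with the sidecar and can therefore reach its admin port.

# 1. Ask the Envoy admin interface to shut the proxy down.
curl -s -X POST localhost:15000/quitquitquit

# 2. Before the sidecar is restarted, bind a proxy on the port the
#    iptables rules redirect outbound traffic to, forwarding everything
#    unchecked (destination is illustrative).
socat TCP-LISTEN:15001,fork,reuseaddr TCP:example.com:80
```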

You can patch up that particular hole, but since resisting such attacks is not a design goal for these service meshes, there are probably dozens of other ways to accomplish the same thing. New ways to bypass these policies may also be added in subsequent releases.

If you want your security policies to be actually enforced on the end that initiates the connection and not only on the receiving end, consider using a network policy implementation for which it is a design goal, such as Calico.

-- Shnatsel
Source: StackOverflow

11/20/2019

Envoy is relatively simple to bypass, and Cilium uses Envoy just like Istio does, so it will not be able to prevent Envoy from being bypassed on the sending side either.

Both Istio and Cilium maintain pages listing CVEs for known security vulnerabilities.

From within the control plane it is possible to affect sidecar injection or iptables rules with annotations, so once someone gains cluster-admin privileges there is no defense.
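For example, Istio's standard injection annotation can opt a single pod out of the mesh entirely, so anyone who can set pod annotations can route a workload around Envoy (workload name and image are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  annotations:
    # Standard Istio annotation: disables sidecar injection for this
    # pod, so no Envoy is added and no iptables redirect is installed.
    sidecar.istio.io/inject: "false"
spec:
  containers:
  - name: my-app
    image: my-app:latest
```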

You can use Calico to lock down communication so that the only traffic that flows is the traffic you want to flow.

Calico also offers seamless integration with Istio to enforce network policy within the Istio service mesh.

Of course, applications and services running in pods should also be designed with security best practices in mind.


Update:

To clarify, I suggested Calico for a zero-trust network model. Without it, you can mess with Envoy from the app pod, since the app containers share a network namespace with Envoy's admin interface. Locking down communication between the app pod and the admin interface is therefore a vital vulnerability fix.

Even without cluster-admin privileges, you can affect Envoy from the app pod with just a curl command.

-- Piotr Malec
Source: StackOverflow