Network policy among pods

4/20/2021

My scenario is like the image below:

[scenario diagram]

After a couple of days trying to find a way to block connections among pods based on a rule, I found NetworkPolicy. But it's not working for me on either Google Cloud Platform or local Kubernetes!

My scenario is quite simple: I need a way to block connections among pods based on a rule (e.g. namespace, workload label, and so on). At first glance I thought NetworkPolicy would work for me, but I don't know why it's not working on Google Cloud, even when I create a cluster from scratch with the "Network policy" option enabled.

-- natanaelfonseca
istio
kubernetes
kubernetes-networkpolicy

3 Answers

4/20/2021

NetworkPolicy will allow you to do exactly what you described in the picture. You can allow or block traffic based on labels or namespaces.
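
For illustration, a minimal sketch of such a policy (the namespace and label names below are assumptions, not taken from your question) could look like this:

    # Sketch: allow ingress to pods labeled app=backend only from pods
    # labeled app=frontend in the same namespace; once this policy selects
    # the backend pods, all other ingress to them is denied by default.
    # Namespace and label values are placeholders.
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-frontend-to-backend
      namespace: default
    spec:
      podSelector:
        matchLabels:
          app: backend
      policyTypes:
        - Ingress
      ingress:
        - from:
            - podSelector:
                matchLabels:
                  app: frontend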

It's difficult to help you when you don't explain what exactly you did and what is not working. Update your question with the actual NetworkPolicy YAML you created and ideally also add the output of kubectl get pod --show-labels from the namespace with the pods.

What you mean by 'local Kubernetes' is also unclear, but it depends largely on the network CNI you're using, as it must support network policies. For example, Calico or Cilium support it. Minikube in its default setting doesn't, so you should follow e.g. this guide: https://medium.com/@atsvetkov906090/enable-network-policy-on-minikube-f7e250f09a14
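
With recent minikube versions there is also a built-in option (this command is a suggestion on my part, not part of the linked guide) to start the cluster with a CNI that enforces network policies:

    # Start minikube with the Calico CNI so NetworkPolicy objects are enforced.
    minikube start --cni=calico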

-- Ondrej Bardon
Source: StackOverflow

4/20/2021

You can use an Istio Sidecar resource to solve this: https://istio.io/latest/docs/reference/config/networking/sidecar/
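
As a rough sketch (the namespace and hosts below are assumptions for illustration), a Sidecar resource restricts which destinations the pods in a namespace are allowed to reach:

    # Sketch: limit the egress of all pods in the "frontend" namespace to
    # services in their own namespace and in istio-system. Names are placeholders.
    apiVersion: networking.istio.io/v1beta1
    kind: Sidecar
    metadata:
      name: default
      namespace: frontend
    spec:
      egress:
        - hosts:
            - "./*"
            - "istio-system/*"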

Another Istio solution is to use an AuthorizationPolicy: https://istio.io/latest/docs/reference/config/security/authorization-policy/
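
For example (a sketch with assumed namespace, label, and service account names), an AuthorizationPolicy can allow traffic to a workload only from a specific source, denying all other sources:

    # Sketch: once this ALLOW policy selects the app=backend pods, only
    # requests from the "frontend" service account are permitted; everything
    # else is denied. Names are placeholders.
    apiVersion: security.istio.io/v1beta1
    kind: AuthorizationPolicy
    metadata:
      name: backend-allow-frontend
      namespace: default
    spec:
      selector:
        matchLabels:
          app: backend
      action: ALLOW
      rules:
        - from:
            - source:
                principals: ["cluster.local/ns/default/sa/frontend"]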

-- Peter Claes
Source: StackOverflow

4/23/2021

Just to update, since I was involved in the problem behind this post: the issue was with pods that had the Istio sidecar injected (in this case, all pods in the namespace, because it had istio-injection=enabled). The NetworkPolicy rule was not taking effect when the selection was made by a match selector (egress or ingress) and the pods involved were already running before the NetworkPolicy was created. After killing those pods and letting them start again, the pods whose labels matched had access normally. I don't know if there is a way to refresh the sidecar inside the pods without having to restart them. Pods started after the creation of the NetworkPolicy did not show the problem described in this post.
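
One way to do that restart without deleting pods by hand (this command is a suggestion on my part, not something from the original answer, and the deployment and namespace names are placeholders) is to roll the affected workloads so new pods come up after the NetworkPolicy exists:

    # Recreate the pods (and their sidecars) of a deployment so they start
    # after the NetworkPolicy has been applied.
    kubectl rollout restart deployment my-backend -n my-namespace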

-- Erick André
Source: StackOverflow