I'm trying to restrict my openvpn pod so that it can only access internal infrastructure, limited to the 'develop' namespace. I started with a simple policy that denies all egress traffic, but I see no effect and get no feedback from the cluster that it was applied. I've read all the docs, both official and unofficial, and didn't find a solution. Here is my policy:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: policy-openvpn
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: openvpn
  policyTypes:
    - Egress
  egress: []
I applied the network policy above with the kubectl apply -f policy.yaml
command, but I don't see any effect: I'm still able to connect to anything from my openvpn pod. How can I debug this and see what's wrong with my policy?
It's a black box to me, and all I can do is trial and error, which doesn't seem like how it should work.
How can I validate that the policy finds the pods and is applied to them?
I'm using the latest Kubernetes version on a cluster provided by GKE.
I noticed that I hadn't checked 'use networkpolicy' in the Google Cloud settings, and after I enabled it my VPN just stopped working. I don't know how to check what's happening, or why the VPN lets me connect but then blocks all network requests, which is very strange. Is there a way to debug this instead of randomly changing things?
GKE uses Calico to implement network policy. You need to enable network policy enforcement for both the master and the nodes before applying a NetworkPolicy. You can verify that Calico is enabled by looking for Calico pods in the kube-system namespace.
kubectl get pods --namespace=kube-system
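If you haven't enabled it yet, you can turn it on for an existing cluster with gcloud; the cluster name and zone below are placeholders, so substitute your own values:

# Enable the network policy addon on the master
gcloud container clusters update my-cluster --zone=us-central1-a --update-addons=NetworkPolicy=ENABLED
# Enable enforcement on the nodes (this recreates the node pools)
gcloud container clusters update my-cluster --zone=us-central1-a --enable-network-policy

Note that enabling enforcement on the nodes recreates the node pools, which explains why workloads such as your VPN pod can be disrupted right after you flip the setting.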
To verify the network policies themselves, you can use the following commands.
kubectl get networkpolicy
kubectl describe networkpolicy <networkpolicy-name>
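To check that the policy actually selects your pod and is being enforced, you can compare the policy's pod selector against the pod labels and then test egress from inside the pod. A rough check for your case; the pod name is a placeholder taken from kubectl get pods, and the exec test assumes the openvpn image ships wget (use curl if that's what's available):

# Does the podSelector match anything? If this returns no pods, the policy has no effect.
kubectl get pods --namespace=default -l app=openvpn
# Inspect the selector and rules of your specific policy
kubectl describe networkpolicy policy-openvpn --namespace=default
# Test egress from inside the pod; with an empty egress list and enforcement enabled, this should time out
kubectl exec --namespace=default <openvpn-pod-name> -- wget -T 5 -O- http://www.google.com

If the label query returns your pod and Calico is running, the wget test timing out confirms the deny-all egress policy is being enforced.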