Setup:
gke: 1.13.7-gke.8
istio: 1.1.7-gke.0 with ingress and egress gateways
Istio mTLS: Strict
I have 2 namespaces: default and development.
From a pod in the development namespace I want to reach kubernetes.default.svc.cluster.local.
Without any NetworkPolicy I can access kubernetes.default.svc.cluster.local:
/app # curl -I kubernetes.default.svc.cluster.local:443
curl: (8) Weird server reply
/app # curl -I example.com
HTTP/1.1 200 OK
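The (8) Weird server reply is expected here: curl is speaking plain HTTP to the HTTPS port, so the TCP connection itself is established. A quick way to confirm the TLS path (a sketch; -k only skips certificate verification) is:
/app # curl -kI https://kubernetes.default.svc.cluster.local:443
which should answer with an HTTP status from the API server instead of a protocol error.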
The default namespace is labeled default-namespace=true.
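For reference, a label like that is set with a plain kubectl command (sketch):
kubectl label namespace default default-namespace=true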
My policy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-policy
  namespace: development
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kube-system: 'true'
    ports:
    - protocol: UDP
      port: 53
  - to:
    - namespaceSelector:
        matchLabels:
          istio: system
  - to:
    - namespaceSelector:
        matchLabels:
          environment: development
  - to:
    - namespaceSelector:
        matchLabels:
          default-namespace: 'true'
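For reference, the policy is applied and inspected with standard kubectl commands (the file name below is illustrative):
kubectl apply -f egress-policy.yaml
kubectl describe networkpolicy egress-policy -n development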
After this policy is applied:
/app # curl -I kubernetes.default.svc.cluster.local:443
curl: (56) Recv failure: Connection reset by peer
/app # curl -I example.com
HTTP/1.1 200 OK
Other services from the default namespace are reachable.
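The Connection reset by peer (rather than a resolver error) suggests DNS still works through the UDP 53 rule and only the TCP connection to the API server is cut. A way to separate the two from the same pod (sketch; the service name in the second command is a placeholder for any service in the default namespace):
/app # nslookup kubernetes.default.svc.cluster.local
/app # curl -I <some-service>.default.svc.cluster.local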
How can I make kubernetes.default.svc.cluster.local work with the Istio egress gateway and a restrictive egress NetworkPolicy?
Added:
kubectl get ns --show-labels
NAME STATUS AGE LABELS
default Active 42d default-namespace=true,istio-injection=enabled
development Active 2d2h environment=development,istio-injection=enabled
istio-system Active 2d2h addonmanager.kubernetes.io/mode=Reconcile,istio-injection=disabled,istio=system,k8s-app=istio
kube-public Active 42d <none>
kube-system Active 42d kube-system=true
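The same labels can be checked per selector, to confirm that every matchLabels block in the policy actually selects a namespace:
kubectl get ns -l kube-system=true
kubectl get ns -l istio=system
kubectl get ns -l environment=development
kubectl get ns -l default-namespace=true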
Setting Istio mTLS (beta) to Permissive did not help.
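For reference, on Istio 1.1 mesh-wide permissive mTLS corresponds roughly to a MeshPolicy like the following (a sketch; this is approximately what the GKE mTLS (beta) toggle manages):
apiVersion: authentication.istio.io/v1alpha1
kind: MeshPolicy
metadata:
  name: default
spec:
  peers:
  - mtls:
      mode: PERMISSIVE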