I'm setting up a Kubernetes Engine cluster (cluster version 1.11) on GCP with the Kubeflow installation script, deployed on the "default" network, and I've set up Google Cloud VPN to an on-premise network (10.198.96.0/20).
When I connect from VMs or Kubernetes nodes on GCP to the on-premise network, everything works, but Pods can't reach the on-premise network.
Looking at the network configuration, the Pod CIDR is 10.24.0.0/14, which as far as I can tell doesn't overlap with either the "default" network on GCP (10.140.0.0/20) or the on-premise network (10.198.96.0/20).
Why can't the Pods connect?
Apparently your pods are isolated in terms of egress traffic.
If you want to allow all traffic from all pods in a namespace (even if policies are added that cause some pods to be treated as “isolated”), you can create a policy that explicitly allows all egress traffic in that namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all
spec:
  podSelector: {}
  egress:
  - {}
  policyTypes:
  - Egress
For more details on how to manage network policies, see the Kubernetes Network Policies documentation.
After googling about IP masquerading, I tried the solution from this post and it works!
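For anyone hitting the same symptom: if the VPN tunnel only advertises the node/VM subnet, pod source IPs (10.24.0.0/14) are not routable back from on-premise, so pod traffic to that range has to be SNAT-ed to the node IP. On GKE this is controlled by the `ip-masq-agent` ConfigMap. A minimal sketch, assuming the CIDRs from the question; the defaults on your cluster may differ, so adjust the list accordingly:

```yaml
# ConfigMap read by ip-masq-agent in kube-system (the key must be "config").
# Traffic to any CIDR listed under nonMasqueradeCIDRs keeps the pod IP as
# source; traffic to everything else (including 10.198.96.0/20 on-premise)
# is masqueraded to the node IP, which the VPN can route back.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system
data:
  config: |
    nonMasqueradeCIDRs:
    - 10.24.0.0/14    # pod CIDR: keep pod-to-pod traffic unmasqueraded
    - 10.140.0.0/20   # "default" subnet on GCP
    masqLinkLocal: false
    resyncInterval: 60s
```

Note that because 10.198.96.0/20 is deliberately left out of `nonMasqueradeCIDRs`, pod traffic toward on-premise leaves the node with the node's IP as source.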