Why can't a pod connect to an on-premise network?

7/2/2019

I'm setting up Google Kubernetes Engine (cluster version 1.11) on GCP with the Kubeflow installation script, which deploys onto the "default" network, and I have set up Google Cloud VPN to an on-premise network (10.198.96.0/20).

Connecting from VMs or Kubernetes nodes on GCP to the on-premise network works fine, but from Pods I can't reach the on-premise network:

  • From GKE nodes or other VMs on the "default" network (10.140.0.0/20), I can ping or curl the on-premise hosts
  • From GKE Pods, I can't ping or curl the on-premise hosts

Looking at the network configuration, the Pod range is 10.24.0.0/14, and I think the Pod CIDR does not overlap with either the "default" network on GCP (10.140.0.0/20) or the on-premise network (10.198.96.0/20).
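For reference, a minimal sketch of the gcloud commands that can be used to double-check those ranges (the cluster name "my-cluster" and the zone/region below are placeholders, not my real values):

# Pod range assigned to the GKE cluster
gcloud container clusters describe my-cluster \
    --zone asia-east1-a \
    --format="value(clusterIpv4Cidr)"

# Subnet range of the "default" VPC network
gcloud compute networks subnets describe default \
    --region asia-east1 \
    --format="value(ipCidrRange)"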

Why can't the Pods connect?

-- Teerapat KHUNPECH
google-kubernetes-engine
kubernetes
networking

2 Answers

7/2/2019

Apparently your pods are isolated in terms of egress traffic.

If you want to allow all traffic from all pods in a namespace (even if policies are added that cause some pods to be treated as “isolated”), you can create a policy that explicitly allows all egress traffic in that namespace.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all
spec:
  # An empty podSelector selects every Pod in the namespace
  podSelector: {}
  # A single empty egress rule allows all outbound traffic
  egress:
  - {}
  policyTypes:
  - Egress
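
Assuming the manifest is saved as allow-all-egress.yaml (the filename is just an example), it can be applied to the namespace your Pods run in:

kubectl apply -f allow-all-egress.yaml -n <your-namespace>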

For more details on how to manage network policies, see the Kubernetes Network Policies documentation.

-- A_Suh
Source: StackOverflow

7/3/2019

After googling about IP masquerading, I tried the approach from this post and it works!
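
For anyone hitting the same problem, here is a minimal sketch of the kind of ip-masq-agent ConfigMap involved (this assumes the agent runs in kube-system and reads a ConfigMap named ip-masq-agent, which is how the GKE ip-masq-agent is typically configured; the CIDRs are the ones from my setup above). By listing only the cluster-internal ranges under nonMasqueradeCIDRs, Pod traffic to the on-premise range (10.198.96.0/20) gets SNATed to the node IP, which the VPN tunnel already knows how to route back to:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system
data:
  config: |
    # Traffic to these ranges keeps the Pod IP as its source;
    # everything else (including 10.198.96.0/20) is masqueraded to the node IP.
    nonMasqueradeCIDRs:
      - 10.24.0.0/14    # Pod CIDR
      - 10.140.0.0/20   # "default" subnet (nodes)
    masqLinkLocal: false
    resyncInterval: 60s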

-- Teerapat KHUNPECH
Source: StackOverflow