I have created a Kubernetes cluster on Google Cloud using the GKE service.
The GCP environment has a VPC that is connected to the on-premises network over a VPN. The GKE cluster is created in a subnet, say subnet1, in the same VPC. The VMs in subnet1 are able to reach an on-premises endpoint on its internal (private) IP address, and the subnet's complete primary IP address range (10.189.10.128/26) is whitelisted in the on-premises firewall.
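For reference, the subnet's primary and secondary ranges can be verified with something like the following (the region below is a placeholder for this setup):

gcloud compute networks subnets describe subnet1 --region us-central1
# the output includes ipCidrRange (primary range) and secondaryIpRanges (Pod/Service ranges)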
The GKE Pods use IP addresses from the secondary range assigned to the subnet (10.189.32.0/21). I exec'd into one of the Pods and tried to reach the on-premises network, but did not get a response. When I checked the network logs, I found that the Pod's IP (10.189.37.18) was the source address used to communicate with the on-premises endpoint (10.204.180.164), whereas I want the Pod to use the Node's IP address to communicate with the on-premises endpoint.
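This is roughly how the test was done (the Pod name, the use of curl, and the port are illustrative; any client available in the container works):

# run a request from inside one of the Pods against the on-premises endpoint
kubectl exec -it my-app-pod -- curl -v --connect-timeout 5 http://10.204.180.164
# the on-premises firewall logs then show the Pod IP (10.189.37.18) as the source, not the node IP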
The Pods are managed by a Deployment, which is exposed as a ClusterIP Service. This Service is attached to a GKE Ingress.
I found that IP masquerading is applied on the GKE cluster: when your Pods talk to each other, they see each other's real IPs, but when a Pod talks to a resource on the internet, the node's IP is used instead.
The default configuration for this rule on GKE is 10.0.0.0/8, so any destination IP in that range is considered internal and the Pod's IP is used to communicate with it.
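You can check whether a custom ip-masq-agent configuration already exists on the cluster (on GKE the agent runs as a DaemonSet in kube-system; if no ConfigMap is present, the defaults apply):

kubectl get daemonset ip-masq-agent -n kube-system
kubectl get configmap ip-masq-agent -n kube-system -o yaml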
Fortunately, this range can easily be changed:
apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system
data:
  config: |
    # Only destinations in these ranges keep the Pod IP as source; traffic to
    # anything else (e.g. the on-premises range) is now masqueraded with the node's IP.
    nonMasqueradeCIDRs:
      - 10.149.80.0/21
    resyncInterval: 60s
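Assuming the manifest above is saved as ip-masq-agent-config.yaml (file name and the k8s-app=ip-masq-agent label below are assumptions about this setup), apply it and let the agent pick it up at the next resync (60s here):

kubectl apply -f ip-masq-agent-config.yaml
# optionally check that the agent reloaded its configuration
kubectl logs -n kube-system -l k8s-app=ip-masq-agent --tail=20

Once the agent has reloaded, traffic from the Pod to 10.204.180.164 should leave with the node's IP as source, which falls in the node subnet range (10.189.10.128/26) already whitelisted on the on-premises firewall.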