Container in GKE can't ping compute instance on the same network

7/9/2018

I have created a new cluster in GKE with version 1.10.5-gke.0. I see that my applications cannot reach IPs on the same network, basically instances running on Compute Engine.

I have SSH'd into one of the Kubernetes nodes, and using the included toolbox I can ping those IP addresses, but I can't when I try from a container running on this cluster.
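
For reference, the failing test from inside the cluster is essentially the following, run from a throwaway pod (the busybox image and the target IP are just examples; any instance IP on the network behaves the same):

kubectl run ping-test --rm -it --image=busybox --restart=Never -- ping -c 3 10.132.0.5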

I saw that since 1.10, Google disables the access scopes for Compute Engine and Storage by default, but even if I enable those scopes I still get the same result.

I find it a bit puzzling, as this used to work for all my other clusters in the past without any extra configuration needed.

Am I missing something here?

-- Apostolos Samatas
google-cloud-platform
google-kubernetes-engine
kubernetes

2 Answers

5/6/2019

An easy way of fixing this is through the Google Cloud Console.

Go to Navigation Menu -> VPC network -> Firewall rules.

Normally when a cluster is created, a number of rules are created automatically with certain prefixes and suffixes. Look in the table for the rule with a gke- prefix and an -all suffix, e.g. gke-[my_cluster_name]-all. You'll notice this rule has the source ranges of your pods within the cluster and quite a few protocols (tcp, udp, icmp, esp, etc.) allowed.
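
(If you prefer the command line, the same rule should show up with something along these lines; the exact filter expression is just an example.)

gcloud compute firewall-rules list --filter="name~'gke-.*-all'"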

Select this rule and go to Edit. Under Targets, select the drop-down and change it to All instances in the network.

Alternatively, you can choose Specified target tags or Specified service account, entering the correct values below, such as the service account of the Compute Engine instance you're trying to reach.
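
For the target-tags variant, a rough gcloud equivalent would be something like this (the rule name and the tag are placeholders for your own values):

gcloud compute firewall-rules update gke-[my_cluster_name]-all --target-tags=my-vm-tag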

You can also look here for another alternative if your Kubernetes version is 1.9.x or later: Troubleshooting

Hope all this helps.

-- iAmcR
Source: StackOverflow

9/4/2018

I also ran into this issue. I have mongo running on a VM on the default network, and I couldn't reach it from inside pods after I recreated my Kubernetes cluster on a new node that was also on the default network.

Adding this firewall rule fixed the issue:

NAME                               NETWORK  DIRECTION  PRIORITY  SRC_RANGES    DEST_RANGES  ALLOW                     DENY  SRC_TAGS  SRC_SVC_ACCT  TARGET_TAGS  TARGET_SVC_ACCT
gke-seqr-cluster-dev-eb823c8e-all  default  INGRESS    1000      10.48.0.0/24               tcp,udp,icmp,esp,ah,sctp
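
If you'd rather create it from the command line, the equivalent should look roughly like this (adjust the rule name and source range to your own cluster):

gcloud compute firewall-rules create gke-seqr-cluster-dev-eb823c8e-all \
    --network default \
    --direction INGRESS \
    --priority 1000 \
    --source-ranges 10.48.0.0/24 \
    --allow tcp,udp,icmp,esp,ah,sctp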

Here, the 10.48.0.0/24 subnet comes from the cbr0 bridge on the node (looked up by SSH'ing into the Kubernetes node and running ip address):

cbr0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1460 qdisc htb state UP group default qlen 1000
   ..
    inet 10.48.0.1/24 scope global cbr0
       valid_lft forever preferred_lft forever
   ..
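
Note that this /24 is only the pod range assigned to that particular node. If you want a source range that covers pods on every node, you can presumably read the cluster-wide pod CIDR instead of ssh'ing in, with something along these lines (cluster name and zone are whatever yours happen to be):

gcloud container clusters describe seqr-cluster-dev --zone <zone> --format="value(clusterIpv4Cidr)"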

Another way to get the 10.48.0.1 IP is to install and run traceroute inside a pod; the first hop it reports is the pod's default gateway, which is the cbr0 address:

traceroute <ip of node you're trying to reach>
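
If your application image doesn't include traceroute, a throwaway busybox pod (which ships a traceroute applet) should do; the target is the same placeholder as above:

kubectl run trace-test --rm -it --image=busybox --restart=Never -- traceroute <ip of node you're trying to reach>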
-- user553965
Source: StackOverflow