I'm getting "Unable to connect to the server: dial tcp <IP> i/o timeout"
when I run kubectl get pods
against my cluster from Google Cloud Shell. This started out of the blue, without me making any changes to the cluster setup.
gcloud beta container clusters create tia-test-cluster \
--create-subnetwork name=my-cluster \
--enable-ip-alias \
--enable-private-nodes \
--master-ipv4-cidr <IP> \
--enable-master-authorized-networks \
--master-authorized-networks <IP> \
--no-enable-basic-auth \
--no-issue-client-certificate \
--cluster-version=1.11.2-gke.18 \
--region=europe-north1 \
--metadata disable-legacy-endpoints=true \
--enable-stackdriver-kubernetes \
--enable-autoupgrade
This is the current cluster config. Before doing this I had also run gcloud container clusters get-credentials my-cluster --zone europe-north1-a --project <my project>.
I also noticed that my compute instances have lost their external IPs. In our staging environment, which is based on the same config, everything works as it should.
Any pointers would be greatly appreciated.
From what you've posted, you have enabled master authorized networks and allowed only the network <IP>.
If the external IP address of your Google Cloud Shell session ever changes (and Cloud Shell IPs do change between sessions), this is exactly the error you would expect: the master simply drops your connection attempts, and kubectl times out.
As documented at https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#cloud_shell, you need to update the list of allowed IP addresses.
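First you need the Cloud Shell VM's current external IP. A common way (assuming dig is available in your shell, which it is in Cloud Shell) is to ask OpenDNS; the master authorized networks flag expects CIDR notation, so a single host becomes a /32:

```shell
# Resolve this machine's public IP via OpenDNS (one of several ways to do this)
SHELL_IP=$(dig +short myip.opendns.com @resolver1.opendns.com)

# Master authorized networks expects CIDR blocks; a single host is a /32
SHELL_CIDR="${SHELL_IP}/32"
echo "$SHELL_CIDR"
```

Use the printed value as [SHELL_IP]/32 in the update command below.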
gcloud container clusters update tia-test-cluster \
--region europe-north1 \
--enable-master-authorized-networks \
--master-authorized-networks [EXISTING_AUTH_NETS],[SHELL_IP]/32
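Note that --master-authorized-networks replaces the whole list rather than appending, which is why the existing networks are repeated in the command above. To confirm the update took effect before retrying, you can describe the cluster; the --format path below is a sketch assuming the current shape of the describe output:

```shell
# List the CIDR blocks currently allowed to reach the cluster master
gcloud container clusters describe tia-test-cluster \
  --region europe-north1 \
  --format "value(masterAuthorizedNetworksConfig.cidrBlocks[].cidrBlock)"

# Then retry the original command
kubectl get pods
```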