I am trying to connect to a pod running inside a GKE cluster. I can SSH into the nodes within the cluster, but when I try the following command to get a bash shell inside a pod, I get an error:
kubectl --namespace=prod exec -it test-webserver-3998817321-728hj -- /bin/bash
-> Error from server: error dialing backend: ssh: rejected: connect failed (Connection timed out)
How do I connect to a running pod within a GKE cluster using kubectl? Is something misconfigured in my firewall? I've got the following SSH rule:
NAME       NETWORK  DIRECTION  PRIORITY  ALLOW        DENY
sshaccess  default  INGRESS    1000      tcp:22,icmp
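For context, the "error dialing backend: ssh: rejected" wording suggests the GKE control plane could not open its SSH tunnel to the node when proxying the exec request, so an ingress rule on tcp:22 alone may not be enough if other rules in the VPC block that path. One way to review every rule that applies to the cluster's network is the command below (the network name default is an assumption; substitute your own):

gcloud compute firewall-rules list --filter="network:default" --sort-by=priority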
When I try the above command on a local cluster, I can easily connect.
Sometimes it works and sometimes it doesn't. As far as I understand, the load balancer (Ingress) might be responsible for this behaviour?
I got exactly the same error message, and for me it also worked sometimes and failed other times.
In my case, this was caused by a firewall misconfiguration: I had restricted most outgoing (egress) traffic, allowing only port 443. I added a rule that allows outgoing traffic from the Kubernetes nodes to internal IPs (IPs in the same subnet) on every port. If your problem is also caused by blocked egress from the nodes to internal addresses, create a new firewall rule to allow that traffic:
gcloud compute firewall-rules create <firewall-name> --network <network-name> --action allow --rules tcp --direction egress --destination-ranges <internal IP range> --target-tags <tag of the nodes the traffic originates from>
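As a hypothetical, filled-in example (the rule name allow-internal-egress, the default network, the 10.128.0.0/20 subnet range, and the gke-mycluster-node tag are all placeholders; substitute the values from your own cluster):

gcloud compute firewall-rules create allow-internal-egress --network default --action allow --rules tcp --direction egress --destination-ranges 10.128.0.0/20 --target-tags gke-mycluster-node

You can find the tag actually applied to your nodes with gcloud compute instances describe <node-name> --format="value(tags.items)".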