Kubectl commands cannot be executed from another VM

1/25/2022

I'm having an issue when executing kubectl commands. My cluster consists of one Master and one Worker node. The kubectl commands can be executed from the Master server without any issue. However, I also have another VM which I use as a Jump server to log in to the master and worker nodes, and I need to execute kubectl commands from that Jump server. I created the .kube directory, copied the kubeconfig file from the Master node to the Jump server, and set the context correctly as well. But kubectl commands hang when executed from the Jump server and give a timeout error.
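For reference, this is roughly how I copied the kubeconfig onto the Jump server (the paths and the master's address below are placeholders, not my exact values):

# run on the Jump server; copy the kubeconfig from the master node
mkdir -p ~/.kube
scp ubuntu@<master-node-ip>:~/.kube/config ~/.kube/config
chmod 600 ~/.kube/config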

Below is the relevant information.

kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://10.240.0.30:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

ubuntu@ansible:~$ kubectl config use-context kubernetes-admin@kubernetes
Switched to context "kubernetes-admin@kubernetes".

ubuntu@ansible:~$ kubectl get pods
Unable to connect to the server: dial tcp 10.240.0.30:6443: i/o timeout

ubuntu@ansible:~$ kubectl config current-context
kubernetes-admin@kubernetes

Everything seems OK to me, and I'm wondering why kubectl commands hang when executing from the Jump server.

-- Container-Man
kubectl
kubernetes

1 Answer

1/27/2022

I troubleshot the issue by verifying whether the Jump VM could telnet to the Kubernetes Master node by executing the following:

telnet <ip-address-of-the-kubernetes-master-node> 6443
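If telnet isn't installed on the Jump VM, netcat can perform the same port check (assuming nc is available):

# -v: verbose, -z: just scan the port without sending data
nc -vz <ip-address-of-the-kubernetes-master-node> 6443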

Since the error was "Connection timed out", I had to add a firewall rule for the Kubernetes Master node, as below. Note: in my case I'm using GCP.

gcloud compute firewall-rules create allow-kubernetes-apiserver \
  --allow tcp:22,tcp:6443,icmp \
  --network kubernetes \
  --source-ranges 0.0.0.0/0
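You can confirm the rule was created with the following (using the rule name from above):

gcloud compute firewall-rules describe allow-kubernetes-apiserver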

Then I was able to telnet to the Master node without any issue. If you still can't connect to the Master node, change the internal IP in the kubeconfig file under the .kube directory to the public IP address of the Master node.
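For example, the API server address in the kubeconfig can be updated with set-cluster (the cluster name "kubernetes" matches the config shown above; the public IP is a placeholder):

# point the cluster entry at the master's public IP instead of the internal IP
kubectl config set-cluster kubernetes --server=https://<public-ip-of-master-node>:6443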

Then switch to the context using the command below.

kubectl config use-context <context-name>
-- Container-Man
Source: StackOverflow