I want to be able to use kubectl commands on my master EC2 instance from my local machine without SSH. I tried copying the .kube directory to my local machine, but the problem is that the kubeconfig uses the private network address, so when I try to run kubectl locally I cannot connect.
Here is what I tried:
user@somehost:~/aws$ scp -r -i some-key.pem ubuntu@some.ip.0.0:.kube/ .
user@somehost:~/aws$ cp -r .kube $HOME/
user@somehost:~/aws$ kubectl version
and I got:
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:23:52Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Unable to connect to the server: dial tcp some.other.ip.0:6443: i/o timeout
Is there a way to change the kubeconfig so that kubectl commands I run locally are executed against the master on the EC2 instance?
You have to change the clusters.cluster.server key in your kubeconfig to the externally accessible IP.
For this, the VM running your master node must have an external IP assigned.
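For example, a minimal sketch: assuming the external IP is 203.0.113.10 (a placeholder) and the cluster entry in a kubeadm-generated kubeconfig is named kubernetes (check with kubectl config get-clusters), you could point your local config at it with
kubectl config set-cluster kubernetes --server=https://203.0.113.10:6443
or edit the server: field under that cluster in ~/.kube/config directly.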
Depending on how you provisioned your cluster, you may also need to add an additional name (SAN) to the Kubernetes API server certificate.
With kubeadm
you can just reset the cluster with
kubeadm reset
on all nodes (including the master), and then reinitialize with
kubeadm init --apiserver-cert-extra-sans=<master external IP>
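If you are not sure whether the external IP is already present in the API server certificate, one way to check (a sketch, assuming openssl is installed locally and reusing the placeholder IP from above) is
openssl s_client -connect 203.0.113.10:6443 </dev/null 2>/dev/null | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"
If the external IP is already listed there, no reset is needed.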
Alternatively, you can issue your commands with the --insecure-skip-tls-verify
flag. E.g.
kubectl --insecure-skip-tls-verify get pods
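If you prefer not to pass the flag on every command, the same setting can be stored in the kubeconfig entry (again assuming the cluster is named kubernetes); note this disables TLS verification for that cluster entirely:
kubectl config set-cluster kubernetes --insecure-skip-tls-verify=true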