I created a Kubernetes cluster using kubeadm and the private IP of the server, so that all the nodes could reach it within the cloud provider's network. I am using 4 nodes on DigitalOcean.
kubctl-s-2vcpu-4gb-nyc3-01-master:~# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://10.132.113.68:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
The command I used to initialize the cluster is:
kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.132.113.68 --kubernetes-version stable-1.8
I am trying to connect to this cluster using kubectl from my local computer (see the sketch after the config below). The admin.conf file has the private IP as the server address:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS********S0tLQo=
    server: https://10.132.113.68:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
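For reference, copying admin.conf from the master would look roughly like this (using the same key and public IP as the SSH tunnel below; /etc/kubernetes/admin.conf is kubeadm's default location):
scp -i test.pem root@104.236.XX.209:/etc/kubernetes/admin.conf ./admin.conf
kubectl --kubeconfig ./admin.conf get nodes
But run from my machine this fails as-is, because the server: field points at the private IP 10.132.113.68, which is only reachable from inside the DigitalOcean network.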
I have tried starting the proxy on the master with kubectl proxy and creating an SSH tunnel to the server:
ssh -L 8001:127.0.0.1:8001 -N -i test.pem root@104.236.XX.209
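To be explicit, the proxy on the master is running with its defaults, which should be equivalent to (flags spelled out for clarity):
kubectl proxy --address=127.0.0.1 --port=8001
so the tunnel above should make that proxy reachable on my local port 8001.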
I can log in to the Kubernetes Dashboard from my computer, but I can't execute kubectl commands:
$ kubectl -s localhost:8001 get nodes
Unable to connect to the server: read tcp 127.0.0.1:62394->127.0.0.1:8001: read: connection reset by peer
Where ssh -L ... ends, sshuttle starts :): it creates a local TCP "catch-all" DNAT via the SSH destination node, i.e. it will forward every TCP connection to the specified CIDR through that host.
Try it out:
In one terminal (kept separate so it's easy to ^C later):
sshuttle -e 'ssh -vi test.pem' -r root@104.236.XX.209 10.132.113.68/32
From another terminal, just run kubectl ... as you would if it were run locally on your initial kubeadm node.
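For example, assuming you've copied the cluster's admin.conf over to your workstation (the path is up to you):
kubectl --kubeconfig ./admin.conf get nodes
kubectl talks to https://10.132.113.68:6443 exactly as written in that config, with sshuttle transparently routing the private IP over SSH.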
Profit :)
--jjo