Access Kubernetes Cluster From a Remote Host Present in a Different Network

2/18/2018

I have deployed a 2-node, 1-master Kubernetes cluster on Google Cloud with the help of kubeadm.

root@ubuntu-vm-1404:~/ansible/Kubernetes# kubectl --kubeconfig kubernetes.conf get nodes
NAME                STATUS    ROLES     AGE       VERSION
kubernetes-node1    Ready     <none>    1h        v1.9.3
kubernetes-node2    Ready     <none>    1h        v1.9.3
master-kubernetes   Ready     master    1h        v1.9.3

However, when I run kubectl on the master as a regular user, it cannot reach the API server:

[sujeetkp@master-kubernetes ~]$ kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T12:22:21Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
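(The `localhost:8080` error above usually means kubectl has no kubeconfig for that user; the standard fix from the kubeadm documentation is sketched below, assuming the admin config is at its default path:)

```shell
# On the master, as the non-root user, copy the admin kubeconfig
# into the default location kubectl looks at ($HOME/.kube/config).
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```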

Can somebody please help me: how can I access the cluster from my local machine, or from a remote host in a different network?

root@ubuntu-vm-1404:~/ansible/Kubernetes# kubectl --kubeconfig kubernetes.conf config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://10.142.0.3:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

In the config file the private IP is given as "server: https://10.142.0.3:6443", so I doubt I can access the cluster from a different network.

I have followed the document below:

https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/

The commands I executed are:

kubeadm init --pod-network-cidr=10.244.0.0/16

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml

kubeadm join --token b9cd48.c4b0d860b9b530f7 10.142.0.3:6443 --discovery-token-ca-cert-hash sha256:5c15e951dcca92f5877cd2dab8a4383accadedc37233b68d8c33451768dc03e3
-- Sujeet Padhi
kubeadm
kubernetes

1 Answer

2/19/2018

You need kubectl installed on your remote host, and you need to copy the conf file to it as described in that document. You can also use kubectl proxy to forward the API server port from the cluster to the machine you are working on. Once that is done, make sure you change the server address in the conf file from the cluster's private IP to its public IP. The same document you referenced has the details.
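A minimal sketch of those steps, assuming the master has a reachable public IP (203.0.113.10 below is a placeholder) and you have SSH access to it:

```shell
# Fetch the admin kubeconfig from the master (public IP is a placeholder).
scp root@203.0.113.10:/etc/kubernetes/admin.conf ~/.kube/config

# Rewrite the server address from the private IP to the public one.
sed -i 's|https://10.142.0.3:6443|https://203.0.113.10:6443|' ~/.kube/config

# Verify the connection from the remote host.
kubectl get nodes
```

On Google Cloud you will also need a firewall rule allowing inbound TCP 6443 to the master. Note, too, that the API server's certificate must list the public IP as a SAN (e.g. kubeadm init --apiserver-cert-extra-sans=<public-ip>); otherwise kubectl will fail TLS verification unless you pass --insecure-skip-tls-verify.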

-- Jeel
Source: StackOverflow