Configure kubectl command to access remote kubernetes cluster on azure

3/30/2016

I have a Kubernetes cluster running on Azure. How can I access the cluster from the local kubectl command? I referred to here, but on the Kubernetes master node there is no kubeconfig file. Also, kubectl config view results in

apiVersion: v1
clusters: []
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []
-- Phagun Baya
azure
kubernetes

9 Answers

1/23/2018

I was trying to set up kubectl on a different client from the one I originally created the kops cluster on. Not sure if this would work on Azure, but it worked on an AWS-backed (kops) cluster:

kops / kubectl - how do i import state created on a another server?

-- Geremy
Source: StackOverflow

5/1/2016

The Azure setup only exposes the ssh ports externally. The generated ssh config can be found under ./output/kube_xxxxxxxxxx_ssh_conf. What I did was add an ssh port forward so the API is available on my machine. Go into the above file and, under the "Host *" section, add another line like the one below:

LocalForward 8080 127.0.0.1:8080

This maps my local machine's port 8080 (where kubectl looks for the default context) to port 8080 on the remote machine, where the master listens for API calls. When you open ssh to kube-00 as the regular docs show, you can then make calls from your local kubectl without any extra configuration.
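For illustration, a minimal sketch of the resulting workflow (the xxxxxxxxxx part of the filename and the kube-00 hostname come from the generated Azure output; with an empty kubeconfig, old kubectl versions default to http://localhost:8080, so the --server flag is only there to be explicit):

# keep this session open; it carries the LocalForward tunnel
ssh -F ./output/kube_xxxxxxxxxx_ssh_conf kube-00

# in a second terminal, talk to the master through the tunnelled port
kubectl get nodes --server=http://127.0.0.1:8080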

-- aofry
Source: StackOverflow

4/15/2019

For clusters that are created manually on a cloud provider's VMs, just get the kubeconfig from ~/.kube/config. However, for managed services like GKE you will have to rely on gcloud to generate the kubeconfig at runtime with the right token.

Generally a service account can be created that helps you get the right kubeconfig with the token generated for you. Something similar is also available on Azure.
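As a sketch of the managed-service case, assuming a GKE cluster named my-cluster in zone us-central1-a (both names are placeholders):

# writes cluster, user (token) and context entries into ~/.kube/config
gcloud container clusters get-credentials my-cluster --zone us-central1-a
kubectl get nodes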

-- Santosh
Source: StackOverflow

12/18/2019

If you are on Windows, check your %HOME% environment variable; it should point to your user directory. Then create the folder ".kube" in "C:/Users/your_user" and within that folder create your "config" file as described by "Phagun Baya".

echo %HOME%
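A minimal sketch of the remaining steps from a Windows command prompt, assuming %HOME% resolves to C:\Users\your_user:

rem create the .kube folder under your user directory
mkdir "%HOME%\.kube"
rem create/edit the config file described by Phagun Baya
notepad "%HOME%\.kube\config"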

-- Gabriel GarcĂ­a Garrido
Source: StackOverflow

3/31/2016

How did you set up your cluster? To access the cluster remotely you need a kubeconfig file (it looks like you don't have one), and the setup scripts generate a local kubeconfig file as part of the cluster deployment process (because otherwise the cluster you just deployed isn't usable). If someone else deployed the cluster, you should follow the instructions on the page you linked to in order to get a copy of the required client credentials to connect to the cluster.

-- Robert Bailey
Source: StackOverflow

4/4/2016

Found a way to access the remote Kubernetes cluster without ssh'ing into one of the nodes in the cluster. You need to edit the ~/.kube/config file as below:

apiVersion: v1
clusters:
- cluster:
    server: http://<master-ip>:<port>
  name: test
contexts:
- context:
    cluster: test
    user: test
  name: test
users:
- name: test
  user: {}   # add the client certificate and key here (see the note below)

Then set context by executing:

kubectl config use-context test

After this you should be able to interact with the cluster.
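For example, a quick check against the new context (standard kubectl commands; the output depends on your cluster):

kubectl cluster-info
kubectl get nodes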

Note: To add a certificate and key, use the following link: http://kubernetes.io/docs/user-guide/kubeconfig-file/

Alternatively, you can also try the following commands (set-context is needed so that use-context has a context named test-cluster to switch to):

kubectl config set-cluster test-cluster --server=http://<master-ip>:<port> --api-version=v1
kubectl config set-context test-cluster --cluster=test-cluster
kubectl config use-context test-cluster
-- Phagun Baya
Source: StackOverflow

7/24/2017

You can also define the filepath of the kubeconfig by passing in the --kubeconfig parameter.

For example, copy ~/.kube/config of the remote Kubernetes host to your local project's ~/myproject/.kube/config. In ~/myproject you can then list the pods of the remote Kubernetes server by running kubectl get pods --kubeconfig ./.kube/config.

Do notice that when copying the values from the remote Kubernetes server, a simple kubectl config view won't be sufficient, as it won't display the secrets of the config file. Instead, you have to do something like cat ~/.kube/config or use scp to get the full file contents.
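A sketch of that flow, assuming the master is reachable over ssh as user@<master-ip> (placeholders):

mkdir -p ~/myproject/.kube
scp user@<master-ip>:~/.kube/config ~/myproject/.kube/config
cd ~/myproject
kubectl get pods --kubeconfig ./.kube/config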

See: https://kubernetes.io/docs/tasks/administer-cluster/share-configuration/

-- jhaavist
Source: StackOverflow

1/19/2020

For anyone landing on this question, the az CLI solves the problem.

az aks get-credentials --name MyManagedCluster --resource-group MyResourceGroup

This will merge the Azure context into your local .kube\config (in case you already have a connection set up; mine was C:\Users\[user]\.kube\config) and switch to the Azure Kubernetes Service context.
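To confirm which context became active after the merge (standard kubectl config subcommands):

kubectl config get-contexts
kubectl config current-context
kubectl get nodes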

Reference

-- pollirrata
Source: StackOverflow

3/7/2019

1. Locate the .kube directory on your k8s machine. On Linux/Unix it will be at /root/.kube; on Windows it will be at C:/Users/<username>/.kube.
2. Copy the config file from the .kube folder of the k8s cluster to the .kube folder of your local machine (a scp sketch follows these steps).
3. Copy client-certificate: /etc/cfc/conf/kubecfg.crt and client-key: /etc/cfc/conf/kubecfg.key to the .kube folder of your local machine.
4. Edit the config file in the .kube folder of your local machine and update the paths of kubecfg.crt and kubecfg.key to their locations on your local machine:
   /etc/cfc/conf/kubecfg.crt --> C:\Users\<username>\.kube\kubecfg.crt
   /etc/cfc/conf/kubecfg.key --> C:\Users\<username>\.kube\kubecfg.key
Now you should be able to interact with the cluster. Run 'kubectl get pods' and you will see the pods on the k8s cluster.
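A minimal sketch of the copy steps using scp (root@<master-ip> is a placeholder; the remote paths are the ones from the steps above, and the local path is shown Unix-style, so substitute C:\Users\<username>\.kube on Windows):

scp root@<master-ip>:/root/.kube/config ~/.kube/config
scp root@<master-ip>:/etc/cfc/conf/kubecfg.crt ~/.kube/kubecfg.crt
scp root@<master-ip>:/etc/cfc/conf/kubecfg.key ~/.kube/kubecfg.key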

-- Gajendra D Ambi
Source: StackOverflow