I have a Kubernetes cluster running on Azure. What is the way to access the cluster from the local kubectl command? I referred to here, but on the Kubernetes master node there is no kubeconfig file. Also, kubectl config view results in
apiVersion: v1
clusters: []
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []
I was trying to set up kubectl on a different client from the one I originally created the kops cluster from. Not sure if this would work on Azure, but it worked on an AWS-backed (kops) cluster:
kops / kubectl - how do i import state created on a another server?
The Azure setup only exposes the SSH ports externally. This can be found under ./output/kube_xxxxxxxxxx_ssh_conf. What I did is tunnel the API over SSH so it is available on my machine by adding an SSH port forward. Go into the above file and, under the "host *" section, add another line like the one below:
LocalForward 8080 127.0.0.1:8080
which maps my local machine's port 8080 (where kubectl looks for the default context) to port 8080 on the remote machine, where the master listens for API calls. When you open SSH to kube-00 as the regular docs show, you can now make calls from your local kubectl without any extra configuration.
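If you prefer not to edit the generated SSH config, a one-off port forward achieves the same thing; this is only a sketch, with placeholder user and host values:
# open the tunnel (replace <user> and <kube-00-address> with your own values)
ssh -L 8080:127.0.0.1:8080 <user>@<kube-00-address>
# with the tunnel open, point kubectl at the forwarded port
kubectl -s http://127.0.0.1:8080 get nodes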
For clusters that are created manually using VMs from cloud providers, just get the kubeconfig from ~/.kube/config. However, for managed services like GKE you will have to rely on gcloud to get the kubeconfig generated at runtime with the right token.
Generally, a service account can be created that will help in getting the right kubeconfig with a token generated for you. Something similar can also be found in Azure.
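On GKE, for example, the command below writes an entry with a fresh token into your kubeconfig (cluster name and zone are placeholders); AKS has an equivalent command, shown in a later answer:
# GKE: generate/refresh a kubeconfig entry with the right token
gcloud container clusters get-credentials my-cluster --zone us-central1-a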
If you have Windows, check your %HOME% environment variable; it should point to your user directory. Then create the folder ".kube" in "C:/Users/your_user", and within that folder create your "config" file as described by "Phagun Baya".
echo %HOME%
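If %HOME% turns out to be empty, one option (assuming cmd.exe; adjust for PowerShell) is to point it at your profile directory so kubectl finds %HOME%\.kube\config:
:: persist HOME for future sessions
setx HOME %USERPROFILE%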
How did you set up your cluster? To access the cluster remotely you need a kubeconfig file (it looks like you don't have one), and the setup scripts generate a local kubeconfig file as part of the cluster deployment process (because otherwise the cluster you just deployed isn't usable). If someone else deployed the cluster, you should follow the instructions on the page you linked to in order to get a copy of the required client credentials to connect to the cluster.
Found a way to access a remote Kubernetes cluster without SSH'ing to one of the nodes in the cluster. You need to edit the ~/.kube/config file as below:
apiVersion: v1
clusters:
- cluster:
    server: http://<master-ip>:<port>
  name: test
contexts:
- context:
    cluster: test
    user: test
  name: test
Then set context by executing:
kubectl config use-context test
After this you should be able to interact with the cluster.
Note: to add a certificate and key, use the following link: http://kubernetes.io/docs/user-guide/kubeconfig-file/
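As a sketch (the file paths are placeholders), the same certificate and key can be attached with kubectl config instead of editing the file by hand:
kubectl config set-cluster test --certificate-authority=/path/to/ca.crt --embed-certs=true
kubectl config set-credentials test --client-certificate=/path/to/client.crt --client-key=/path/to/client.key --embed-certs=true
kubectl config set-context test --cluster=test --user=test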
Alternatively, you can also try the following commands:
kubectl config set-cluster test-cluster --server=http://<master-ip>:<port>
kubectl config set-context test-cluster --cluster=test-cluster
kubectl config use-context test-cluster
You can also define the filepath of the kubeconfig by passing the --kubeconfig parameter.
For example, copy the ~/.kube/config of the remote Kubernetes host to your local project's ~/myproject/.kube/config. In ~/myproject you can then list the pods of the remote Kubernetes server by running kubectl get pods --kubeconfig ./.kube/config.
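If you don't want to repeat the flag on every call, exporting the KUBECONFIG environment variable works as well (using the example path from above):
export KUBECONFIG=~/myproject/.kube/config
kubectl get pods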
Do notice that when copying the values from the remote Kubernetes server, a simple kubectl config view won't be sufficient, as it won't display the secrets of the config file. Instead, you have to do something like cat ~/.kube/config or use scp to get the full file contents.
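As a sketch (host and paths are placeholders), either of the following gets you the full contents; kubectl config view --raw also prints the config without redacting secrets:
# on the remote host: dump the config with secrets included
kubectl config view --raw > kubeconfig-full.yaml
# or copy the file straight to your machine over ssh
scp <user>@<master-ip>:~/.kube/config ~/myproject/.kube/config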
See: https://kubernetes.io/docs/tasks/administer-cluster/share-configuration/
For anyone landing on this question, the az CLI solves the problem.
az aks get-credentials --name MyManagedCluster --resource-group MyResourceGroup
This will merge the Azure context into your local .kube\config (in case you already have a connection set up; mine was C:\Users\[user]\.kube\config) and switch to the Azure Kubernetes Service connection.
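If you are starting from scratch, a typical sequence looks like the following (the cluster and resource group names are the placeholders from above; the merged context is normally named after the cluster):
az login
az aks get-credentials --name MyManagedCluster --resource-group MyResourceGroup
kubectl config get-contexts
kubectl config use-context MyManagedCluster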
Locate the .kube directory on your k8s machine.
On Linux/Unix it will be at /root/.kube
On Windows it will be at C:/Users/<username>/.kube
Copy the config file from the .kube folder of the k8s cluster to the .kube folder of your local machine.
Copy the client certificate (/etc/cfc/conf/kubecfg.crt) and client key (/etc/cfc/conf/kubecfg.key) to the .kube folder of your local machine.
Edit the config file in the .kube folder of your local machine and update the paths of kubecfg.crt and kubecfg.key to their locations on your local machine:
/etc/cfc/conf/kubecfg.crt --> C:\Users\<username>\.kube\kubecfg.crt
/etc/cfc/conf/kubecfg.key --> C:\Users\<username>\.kube\kubecfg.key
Now you should be able to interact with the cluster. Run 'kubectl get pods' and you will see the pods on the k8s cluster.
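For reference, the relevant part of the local config might end up looking roughly like this (the user entry name and Windows paths are assumptions):
# excerpt of the local .kube/config after updating the paths
users:
- name: admin
  user:
    client-certificate: C:\Users\<username>\.kube\kubecfg.crt
    client-key: C:\Users\<username>\.kube\kubecfg.key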