How can I set my local kubectl to point to 2+ clusters created using kubeadm?

12/27/2017

I am running into a few issues when trying to get my local kubectl to point to clusters created with kubeadm:

  1. The kubectl config files generated by kubeadm all use the same user name, cluster name, and context name (see the excerpt after this list), so I cannot simply download them and add them to $KUBECONFIG.
  2. There is no kubectl command for renaming a cluster or user.
  3. The config file generated by kubeadm has the properties client-key-data and client-certificate-data. These are not fields kubectl recognizes when creating a new user or cluster.
  4. Clusters created through kubeadm don't seem to allow access through a simple username and password; they appear to require the certificate data.
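For reference, the admin.conf that kubeadm generates looks roughly like this (the server address and certificate data here are placeholders). Note the identical kubernetes / kubernetes-admin names that every cluster gets:

apiVersion: v1
kind: Config
clusters:
- name: kubernetes
  cluster:
    server: https://10.0.0.1:6443
    certificate-authority-data: <base64 CA cert>
users:
- name: kubernetes-admin
  user:
    client-certificate-data: <base64 client cert>
    client-key-data: <base64 client key>
contexts:
- name: kubernetes-admin@kubernetes
  context:
    cluster: kubernetes
    user: kubernetes-admin
current-context: kubernetes-admin@kubernetes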

It seems like I am limited to modifying the contents of the ~/.kube/config file through string manipulation (gross), which I would like to avoid! Does anyone have a solution for this?

-- Mike
kubeadm
kubectl
kubernetes

2 Answers

12/27/2017

At the moment, as far as I am aware, there is no tool that will automatically merge different kubeconfig files into one, which is effectively what you need. Personally, I edit ~/.kube/config manually with a text editor. It's not that much work in the end.
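As a sketch of what the manual merge looks like (the mycluster1 / mycluster2 and admin-mycluster1 / admin-mycluster2 names are placeholders you pick yourself): copy the clusters, users, and contexts entries from each generated file into one config and rename them so they no longer collide, something like:

apiVersion: v1
kind: Config
clusters:
- name: mycluster1                # renamed from "kubernetes"
  cluster:
    server: https://10.0.0.1:6443
    certificate-authority-data: <copied from cluster 1's admin.conf>
- name: mycluster2                # renamed from "kubernetes"
  cluster:
    server: https://10.0.0.2:6443
    certificate-authority-data: <copied from cluster 2's admin.conf>
users:
- name: admin-mycluster1          # renamed from "kubernetes-admin"
  user:
    client-certificate-data: <copied from cluster 1's admin.conf>
    client-key-data: <copied from cluster 1's admin.conf>
- name: admin-mycluster2          # renamed from "kubernetes-admin"
  user:
    client-certificate-data: <copied from cluster 2's admin.conf>
    client-key-data: <copied from cluster 2's admin.conf>
contexts:
- name: mycluster1
  context:
    cluster: mycluster1
    user: admin-mycluster1
- name: mycluster2
  context:
    cluster: mycluster2
    user: admin-mycluster2
current-context: mycluster1

Once that is in place, kubectl config use-context mycluster1 (or mycluster2) switches between the clusters.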

-- Radek 'Goblin' Pieczonka
Source: StackOverflow

12/27/2017

One option you have is to use a different config file for each cluster. Create one file per cluster, put them in a directory (I use ~/.kube), and give them meaningful names that help you tell them apart (a cluster identifier, for instance).

Then, you can set the KUBECONFIG environment variable to choose a different configuration file when you run kubectl, such as:

KUBECONFIG=/path/to/the/config/file kubectl get po
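If you are going to run several commands against the same cluster, you can also export the variable for the whole shell session instead of prefixing every command (cluster1.conf here is a hypothetical file name following the scheme above):

export KUBECONFIG=~/.kube/cluster1.conf
kubectl get po
kubectl get nodes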

You can also create an alias in your favourite shell so you don't have to type the full command every time. Note that the alias should end at kubectl so the subcommand isn't duplicated when you invoke it:

alias mykube="KUBECONFIG=/path/to/the/config/file kubectl"
mykube get po
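
For instance, with two config files named as suggested above (kube1 / kube2 and the cluster1.conf / cluster2.conf file names are just illustrative), you could define one alias per cluster:

alias kube1="KUBECONFIG=~/.kube/cluster1.conf kubectl"
alias kube2="KUBECONFIG=~/.kube/cluster2.conf kubectl"
kube1 get nodes   # talks to the first cluster
kube2 get nodes   # talks to the second cluster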
-- whites11
Source: StackOverflow