Using multiple Kubernetes clusters at the same time with kops

2/5/2019

I am using kops to create and manage Kubernetes clusters in AWS.

I have created a pilot / intermediate instance to access all the clusters.

I have noticed that even if we create multiple SSH sessions (as the same user), when I switch the context to cluster-a in one session, it changes to cluster-a in the other sessions as well.

The problem is that we need to switch the context every time we want to manage a different cluster. Keeping track of context switches is very hard when more than two people are using that instance.

You may ask why we are using multiple clusters: there are multiple streams and modules being developed in parallel, and they all go to testing at the same time.

Is there any way to avoid switching contexts manually, so that kops/kubectl can determine the cluster context automatically?

Example: if I execute a command from directory-a, it automatically targets the cluster a.k8s.local. That is just one idea; any other solution is welcome.

The last resort is to create separate pilot instances for each cluster, which I am trying to avoid as those instances don't provide much value and just increase cost.

-- Akshay
cluster-computing
devops
kops
kubectl
kubernetes

1 Answer

2/5/2019

I am using exactly the solution you are searching for: I can manage a specific cluster when I am located in a specific directory.

First of all, let me explain why you cannot work on multiple clusters at the same time, even in different SSH sessions.

When you run kubectl config use-context to switch the current context, you are actually modifying the current-context: your-context entry in ~/.kube/config. So if one of your team members switches the context, that also applies to the other team members, especially if they connect as the same user.
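
To see this in action (the cluster name below is just a placeholder), watch the shared file change when any session switches the context:

    # Switching the context rewrites the shared ~/.kube/config, so every SSH
    # session running as the same user sees the change immediately.
    kubectl config use-context cluster-a.k8s.local
    grep current-context ~/.kube/config
    # current-context: cluster-a.k8s.local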

Now, the following steps can help you work around this issue (a consolidated example of the whole setup follows the list):

  1. Install direnv. This tool allows you to set custom environment variables whenever you are located in a specific directory.
  2. Next to the kubeconfig files, create a .envrc file:

         path_add KUBECONFIG kubeconfig

  3. Run direnv allow.
  4. Check the content of the KUBECONFIG env var (echo $KUBECONFIG). It should look like /path/to/dir-a/kubeconfig:/home/user/.kube/config.
  5. Split your current ~/.kube/config into multiple kubeconfig files located in different folders: dir-a/kubeconfig, dir-b/kubeconfig, and so on. You can also go into dir-a and run kops export kubecfg your-cluster-name.
  6. Check the current context with kubectl config view --minify.
  7. Go to dir-b and repeat from step 2.
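
For concreteness, here is a sketch of the whole setup for two clusters; the directory layout, cluster names, and paths are placeholders, and it assumes the direnv shell hook is already installed:

    # One directory per cluster; dir-a/dir-b and the cluster names are placeholders.
    mkdir -p ~/clusters/dir-a ~/clusters/dir-b

    # Extract a single-cluster kubeconfig out of the shared ~/.kube/config.
    kubectl config view --minify --flatten --context=a.k8s.local > ~/clusters/dir-a/kubeconfig
    kubectl config view --minify --flatten --context=b.k8s.local > ~/clusters/dir-b/kubeconfig

    # Prepend the local kubeconfig to KUBECONFIG whenever you cd into the directory.
    echo 'path_add KUBECONFIG kubeconfig' > ~/clusters/dir-a/.envrc
    echo 'path_add KUBECONFIG kubeconfig' > ~/clusters/dir-b/.envrc
    (cd ~/clusters/dir-a && direnv allow)
    (cd ~/clusters/dir-b && direnv allow)

    # From now on, the target cluster is selected by the directory you are in.
    cd ~/clusters/dir-a
    echo "$KUBECONFIG"              # the local kubeconfig now comes first
    kubectl config view --minify    # shows only the a.k8s.local context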

You could also set up other env vars in the .envrc to help you manage these different clusters (maybe a different kops state store).
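
For example, a per-directory .envrc could also pin the kops environment. KOPS_STATE_STORE and KOPS_CLUSTER_NAME are standard kops environment variables; the bucket and cluster names below are placeholders:

    # .envrc in dir-a (bucket and cluster name are placeholders)
    path_add KUBECONFIG kubeconfig
    export KOPS_STATE_STORE=s3://your-kops-state-store
    export KOPS_CLUSTER_NAME=a.k8s.local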

-- Quentin Revel
Source: StackOverflow