I am using kops to create / manage kubernetes clusters in AWS.
I have created a pilot / intermediate instance to access all the clusters.
I have noticed that even if we open multiple SSH sessions (as the same user), and I change the context to cluster-a in one session, it gets changed to cluster-a in the other sessions as well.
The problem is that we need to switch context every time we want to manage a different cluster, and it is very hard to keep track of the context switching when more than two people are using that instance.
You may ask why we are using multiple clusters: there are multiple streams and modules being developed in parallel, and they all go to testing at the same time.
Is there any way I can avoid switching context manually and have kops/kubectl understand the cluster context automatically?
Example: if I execute a command from directory-a, it automatically understands that I mean the cluster a.k8s.local. That is just an idea; any other solution is welcome.
The last resort is to create separate pilot instances for each cluster, which I am trying to avoid as those instances don't provide much value and just increase cost.
I am using exactly the solution you are searching for: I can manage a specific cluster when I am located in a specific directory.
First of all, let me explain why you cannot work on multiple clusters at the same time, even in different SSH sessions.
When you run kubectl config use-context to switch the current context, you are actually modifying the line current-context: your-context in ~/.kube/config. So if one of your team members switches the context, that change also applies to every other team member, especially when they all connect as the same user.
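To illustrate, here is a trimmed, hypothetical ~/.kube/config using the cluster names from your example (the clusters: and users: sections are omitted):

    apiVersion: v1
    kind: Config
    current-context: a.k8s.local   # the single line that kubectl config use-context rewrites
    contexts:
    - name: a.k8s.local
      context:
        cluster: a.k8s.local
        user: a.k8s.local
    - name: b.k8s.local
      context:
        cluster: b.k8s.local
        user: b.k8s.local

Because every SSH session of that user reads and writes this one file, there is only ever a single current-context shared by all of them.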
Now, the following steps can help you work around this issue. They rely on direnv, a tool that automatically loads environment variables from a .envrc file whenever you enter the directory containing it, so install it and hook it into your shell first.

1. Split your ~/.kube/config into multiple kubeconfig files located in different folders: dir-a/kubeconfig, dir-b/kubeconfig and so on. You can also go into dir-a and do a kops export kubecfg your-cluster-name.
2. Next to the kubeconfig file, create a .envrc file containing:

       path_add KUBECONFIG kubeconfig

3. Run direnv allow.
4. Check the KUBECONFIG env var (echo $KUBECONFIG). It should look like /path/to/dir-a/kubeconfig:/home/user/.kube/config.
5. Check which configuration is now active with kubectl config view --minify.
6. Go to dir-b and repeat from step 2.

You could also set up other env vars in the .envrc that could help you manage these different clusters (maybe a different kops state store); see the sketch below.
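For example, a hypothetical dir-a/.envrc could look like the following (the state-store bucket is a placeholder to replace with your own):

    # dir-a/.envrc -- loaded by direnv every time you cd into dir-a
    # Prepend dir-a/kubeconfig to KUBECONFIG so kubectl and kops read it first
    path_add KUBECONFIG kubeconfig
    # Optional: a per-directory kops state store (placeholder bucket name)
    export KOPS_STATE_STORE=s3://your-kops-state-store

With this in place, one teammate can work from dir-a in one SSH session while another works from dir-b in a different session: each shell gets its own KUBECONFIG, so kubectl config current-context should report a different cluster in each directory and nobody needs to run kubectl config use-context anymore.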