How can you use the kubectl tool (in a stateful/local way) to manage multiple clusters from different directories simultaneously?

11/5/2018

Is there a way you can run kubectl in a 'session' such that it gets its kubeconfig from a local directory rather than from ~/.kubeconfig?

Example Use Case

Given the abstract nature of the question, it's worth describing why this may be valuable with an example. Say someone has an application, call it 'a', and four Kubernetes clusters, each running 'a'. They might have a simple script which runs some kubectl actions in each cluster to smoke test a new deployment of 'a'. For example, they may want to deploy the app and then see how many copies of it were autoscaled in each cluster afterward.

Example Solution

As in git, maybe there could be a "use a local kubeconfig file if one exists" behavior, enabled via a git-style global setting:

kubectl global set-precedence local-kubectl

Then, in one terminal:

cd firstcluster
cat << EOF > kubeconfig
firstcluster
...
EOF
kubectl get pods
p4

Then, in another terminal:

cd secondcluster/
cat << EOF > kubeconfig
secondcluster
...
EOF
kubectl get pods
p1
p2
p3

Thus, the exact same kubectl commands (without having to set a context) run against different clusters depending on the directory you are in.
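The workflow above can be approximated today without any hypothetical "set-precedence" setting, using kubectl's real --kubeconfig flag behind a shell wrapper. The wrapper function below is a sketch of my own, not an existing kubectl feature:

```shell
# Sketch: shadow kubectl with a function that prefers a kubeconfig file
# in the current directory, if one exists, and falls back to the default
# config resolution otherwise. The "kubeconfig" filename is an assumption.
kubectl() {
  if [ -f "./kubeconfig" ]; then
    # --kubeconfig is a real kubectl flag; point it at the local file.
    command kubectl --kubeconfig="./kubeconfig" "$@"
  else
    command kubectl "$@"
  fi
}
```

Dropping this in ~/.bashrc would make plain `kubectl get pods` behave directory-locally, much like the two-terminal example above.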

Some ideas for solutions

  • One idea I had for this was to write a kubectl-context plugin which made kubectl always check for a local kubeconfig and, before running, silently set the context to one in the global config matching the directory name.

  • Another idea I've had along these lines would be to create different users, each with its own kubeconfig home file.

  • And of course, using something like virtualenv, you might be able to set things up so that each environment had its own kubeconfig value.
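The virtualenv-style idea in the last bullet can be approximated today with direnv (a tool that loads a per-directory .envrc file on cd) together with the KUBECONFIG environment variable, which kubectl honors. The layout below is illustrative, not an existing feature:

```shell
# firstcluster/.envrc -- direnv runs this automatically when you cd in,
# and unsets it again when you leave the directory.
# KUBECONFIG is a real kubectl environment variable; the per-directory
# "kubeconfig" filename is an assumption for this sketch.
export KUBECONFIG="$PWD/kubeconfig"
```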

Final thought

Ultimately, I think the goal here is to subvert the idea that a single ~/.kubeconfig file has any particular meaning, and instead look at ways that many kubeconfig files can be used on the same machine; not just via the --kubeconfig option, but in such a way that state is still maintained in a directory-local manner.
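For reference, kubectl already supports keeping many kubeconfig files on one machine: the KUBECONFIG environment variable may list several files separated by colons, and kubectl merges them in order (a documented kubectl behavior). The file names below are hypothetical:

```shell
# KUBECONFIG can list several files; kubectl merges them in order
# (documented kubectl behavior; these file names are illustrative).
export KUBECONFIG="$HOME/.kube/firstcluster:$HOME/.kube/secondcluster"
# kubectl config get-contexts   # would then show contexts from both files
```

This solves the "many files" half of the problem, though not the directory-local state.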

-- jayunit100
kubeconfig
kubectl
kubernetes
local

1 Answer

11/5/2018

AFAIK, the config file is at ~/.kube/config, not ~/.kubeconfig. I suppose you are looking for opinions on your question, and you gave me the great idea of creating kubevm, inspired by awsvm for the AWS CLI, chefvm for managing multiple Chef servers, and rvm for managing multiple Ruby versions.

So, in essence, you could have a kubevm setup that switches between different ~/.kube configs, with a CLI like this:

# Use a specific config
kubevm use {YOUR_KUBE_CONFIG|default}
# or
kubevm YOUR_KUBE_CONFIG

# Set your default config
kubevm default YOUR_KUBE_CONFIG

# List your configurations, including current and default
kubevm list

# Create a new config
kubevm create YOUR_KUBE_CONFIG

# Delete a config
kubevm delete YOUR_KUBE_CONFIG

# Copy a config
kubevm copy SRC_CONFIG DEST_CONFIG

# Rename a config
kubevm rename OLD_CONFIG NEW_CONFIG

# Open a config directory in $EDITOR
kubevm edit YOUR_KUBE_CONFIG

# Update kubevm to the latest
kubevm update
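A minimal sketch of how such a `use` subcommand could work, assuming named configs live under ~/.kube/configs/ and ~/.kube/config is a symlink to the active one. Both the directory layout and the function are assumptions for illustration, not an existing tool:

```shell
# Hypothetical sketch of `kubevm use`: store named kubeconfigs under
# ~/.kube/configs/ and point ~/.kube/config at the selected one.
kubevm_use() {
  target="$HOME/.kube/configs/$1"
  if [ ! -f "$target" ]; then
    echo "kubevm: no such config: $1" >&2
    return 1
  fi
  # Replace the active config with a symlink to the chosen file.
  ln -sfn "$target" "$HOME/.kube/config"
}
```

Since kubectl reads ~/.kube/config by default, every terminal immediately sees the switch; the trade-off versus the directory-local approach in the question is that the selection is global, not per directory.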

Let me know if it's useful!

-- Rico
Source: StackOverflow