Our users are allowed to access Kubernetes clusters only from a management station; there is no way to access the API directly from their laptops/workstations.
Every user possesses a kubeconfig with the secrets belonging to that particular user. Since the kubeconfig also contains the token used to authenticate against the Kubernetes API, it cannot be stored "as is" on the management station's file system.
Is there any way to provide the token/kubeconfig to kubectl, e.g. via STDIN, without exposing it to other users of the management station (e.g. its admin) on the file system?
Activate the account and download cluster credentials using a service account:
gcloud auth activate-service-account --key-file=${PULL_KEYFILE} --project PROJECT_NAME
gcloud container clusters get-credentials CLUSTER_NAME --zone ZONE

# use kubectl as you normally would (--dry-run=client on kubectl >= 1.18;
# older releases use the plain --dry-run flag)
kubectl create namespace ${NAMESPACE} --dry-run=client -o yaml | kubectl apply -f -
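Note that `gcloud container clusters get-credentials` writes the generated credentials into the kubeconfig that the `KUBECONFIG` environment variable points to. If the concern is other users reading that file, one option is to point `KUBECONFIG` at a private temporary file for the session; a minimal sketch (relies on `mktemp` creating the file with mode 0600, which GNU coreutils does):

```shell
# Sketch: keep the generated kubeconfig out of shared/world-readable paths
# by pointing KUBECONFIG at a private temporary file for this session only.
export KUBECONFIG="$(mktemp)"   # mktemp creates the file readable/writable by owner only
gcloud container clusters get-credentials CLUSTER_NAME --zone ZONE

kubectl get nodes               # uses the private kubeconfig

# clean up when done so the credentials do not outlive the session
rm -f "$KUBECONFIG"
unset KUBECONFIG
```

This does not protect against root on the management station, but it keeps the credentials away from ordinary users and out of any permanent location.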
So far I have used the following solution: the kubeconfig stored on disk deliberately leaves the token field empty,
apiVersion: v1
kind: Config
preferences: {}
users:
- name: foo.bar
  user:
    token:
and the token is read into a shell variable (read -s suppresses echoing), then passed per invocation, so it never touches the file system:

read -s TOKEN
kubectl --kubeconfig /home/foo.bar/kubeconfig --token "$TOKEN" get nodes
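One caveat with the approach above: an argument passed via --token is briefly visible to other users in the process list (/proc) while kubectl runs. kubectl also supports client-go exec credential plugins, where the kubeconfig references a command that prints an ExecCredential object to stdout; that command can prompt for the token interactively or fetch it from a keyring, so the secret appears neither on disk nor on the command line. A minimal sketch of the kubeconfig user entry (the helper path get-token.sh is hypothetical; it must print JSON of the form {"apiVersion":"client.authentication.k8s.io/v1beta1","kind":"ExecCredential","status":{"token":"..."}}):

```yaml
users:
- name: foo.bar
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      # hypothetical helper that prompts for the token and prints
      # an ExecCredential JSON object on stdout
      command: /usr/local/bin/get-token.sh
```

Newer kubectl releases also accept apiVersion client.authentication.k8s.io/v1 here; check which versions your clients run before relying on it.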