Generally you can use kops get secrets kube --type secret -oplaintext, but I am not running on AWS and am using GCP.
I read that kubectl config view should show this info, but I see no such thing (wondering if this has to do with GCP service account setup; I am also using GKE).
kubectl config view returns something like:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://MY_IP
  name: MY_CLUSTER_NAME
contexts:
- context:
    cluster: MY_CLUSTER_NAME
    user: MY_CLUSTER_NAME
  name: MY_CLUSTER_NAME
current-context: MY_CONTEXT_NAME
kind: Config
preferences: {}
users:
- name: MY_CLUSTER_NAME
  user:
    auth-provider:
      config:
        access-token: MY_ACCESS_TOKEN
        cmd-args: config config-helper --format=json
        cmd-path: /usr/lib/google-cloud-sdk/bin/gcloud
        expiry: 2019-02-27T03:20:49Z
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
Neither Username=>Admin nor Username=>MY_CLUSTER_NAME worked with Password=>MY_ACCESS_TOKEN.
Any ideas?
Try:
gcloud container clusters describe ${CLUSTER} \
  --flatten="masterAuth" \
  [--zone=${ZONE} | --region=${REGION}] \
  --project=${PROJECT}
It's possible that your cluster has basic authentication (username/password) disabled, as this authentication mechanism is discouraged.
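To check, you can project just the masterAuth fields (a sketch; substitute --region for --zone on a regional cluster). Empty output here means basic authentication is disabled:
# Print only the basic-auth username and password from masterAuth
gcloud container clusters describe ${CLUSTER} \
  --zone=${ZONE} \
  --project=${PROJECT} \
  --format='value(masterAuth.username, masterAuth.password)'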
An alternative mechanism provided with Kubernetes Engine (as shown in your config) is to use your gcloud credentials to get onto the cluster.
The following command configures ~/.kube/config so that you may access the cluster using your gcloud credentials. It looks as though this step has already been done, and you can use kubectl directly.
gcloud container clusters get-credentials ${CLUSTER} \
[--zone=${ZONE}|--region=${REGION}] \
--project=${PROJECT}
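Once that has run, you can confirm what was written to ~/.kube/config and which context is active (a quick check, assuming gcloud created the usual gke_* context for your cluster):
kubectl config current-context
kubectl config view --minify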
As long as you're logged in using gcloud with an account that's permitted to use the cluster, you should be able to:
kubectl cluster-info
kubectl get nodes
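If you really need a raw credential (for example for an API call or a dashboard login) rather than a username/password, the same gcloud identity can be presented as a bearer token. This is only a sketch: https://MY_IP is the server address from your kubeconfig, the token is short-lived, and --insecure skips CA verification for brevity.
TOKEN=$(gcloud auth print-access-token)
curl -H "Authorization: Bearer ${TOKEN}" --insecure https://MY_IP/api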