Unable to access Kubernetes cluster using the go client when CLOUDSDK_CONFIG is set

10/8/2018

I have a Kubernetes cluster on GKE. Even though KUBECONFIG="/tmp/kubeconfigxvz" is set correctly, when I execute kubectl get pods the command fails with the following error:

bash-4.3# kubectl get pods
Unable to connect to the server: error executing access token command "/google-cloud-sdk/bin/gcloud config config-helper --format=json": err=exit status 1 output= stderr=ERROR: (gcloud.config.config-helper) You do not currently have an active account selected.
Please run:

  $ gcloud auth login

to obtain new credentials, or if you have already logged in with a
different account:

  $ gcloud config set account ACCOUNT

to select an already authenticated account to use.

When I set CLOUDSDK_CONFIG=/tmp/customdir, the command starts working.

How can I achieve the same with the Go client?

=== UPDATE ===

When creating the Go client I pass the correct kubeconfig path to clientcmd.BuildConfigFromFlags("", *tmpKubeConfigFile), where tmpKubeConfigFile points to /tmp/kubeconfigxvz. But I think this is not sufficient: the Go client also seems to need more information from the CLOUDSDK_CONFIG directory, presumably the session information or credentials.

Is it possible to pass this CLOUDSDK_CONFIG too when creating the Go client?

BuildConfigFromFlags takes the path to a kubeconfig file as input and returns a config object, which can be passed to kubernetes.NewForConfig(config) to create the client. Is there a similar function, or some other way to build a config, that also takes the CLOUDSDK_CONFIG directory into account?
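
For reference, this is roughly what I am doing today plus the workaround I am considering (a minimal sketch against the 2018-era client-go API; the os.Setenv line is only an idea, on the assumption that the gcloud config-helper spawned by the gcp auth provider inherits this process's environment):

package main

import (
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"

	// Registers the "gcp" auth provider referenced by the GKE kubeconfig.
	_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
)

func main() {
	// Idea: point the gcloud config-helper at the custom SDK config directory.
	// Assumption: the helper command exec'ed by the auth provider inherits
	// this process's environment.
	os.Setenv("CLOUDSDK_CONFIG", "/tmp/customdir")

	config, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfigxvz")
	if err != nil {
		panic(err)
	}

	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Any call that needs a token will trigger the gcloud config-helper.
	pods, err := clientset.CoreV1().Pods("").List(metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("found %d pods\n", len(pods.Items))
}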

-- Mradul
kubernetes
kubernetes-go-client

1 Answer

10/8/2018

You basically need to create a ~/.kube/config file to access your GKE cluster directly.

You can see in this Go client example that it picks up the config from ~/.kube/config.
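
That example boils down to roughly the following (a sketch of the out-of-cluster pattern against the 2018-era client-go API, not the exact code from the repository; the real example also exposes the path behind a -kubeconfig flag):

package main

import (
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Load the kubeconfig from its default location, ~/.kube/config.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")

	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}

	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	pods, err := clientset.CoreV1().Pods("").List(metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("There are %d pods in the cluster\n", len(pods.Items))
}

Note that a kubeconfig using the gcp auth-provider also needs the blank import _ "k8s.io/client-go/plugin/pkg/client/auth/gcp"; with the token user described below, it isn't needed.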

A GKE config would look something like this:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: [REDACTED]
    server: https://x.x.x.x
  name: gke_project_us-central1-a_your-first-cluster-1
contexts:
- context:
    cluster: gke_project_us-central1-a_your-first-cluster-1
    user: gke_project_us-central1-a_your-first-cluster-1
  name: gke_project_us-central1-a_your-first-cluster-1
current-context: gke_project_us-central1-a_your-first-cluster-1
kind: Config
preferences: {}
users:
- name: gke_project_us-central1-a_your-first-cluster-1
  user:
    auth-provider:
      config:
        cmd-args: config config-helper --format=json
        cmd-path: /google/google-cloud-sdk/bin/gcloud
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp

You would have to change the users section to something like:

- name: myuser
  user:
     token: [REDACTED]

The user here is a service account with a token; if you want this user to manage everything in your cluster, you can bind it to an admin role with a ClusterRoleBinding.
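
If you go the token route you don't even need a kubeconfig file: you can build the rest.Config directly in Go, along these lines (a sketch; the host, token, and CA certificate are placeholders you would fill in from your cluster and service account):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Placeholder: the cluster CA, i.e. certificate-authority-data base64-decoded.
	caPEM := []byte("-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n")

	config := &rest.Config{
		Host:        "https://x.x.x.x", // API server endpoint
		BearerToken: "REDACTED",        // service account token
		TLSClientConfig: rest.TLSClientConfig{
			CAData: caPEM,
		},
	}

	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// clientset is ready to use, e.g. clientset.CoreV1().Pods("").List(...)
	fmt.Printf("created client: %T\n", clientset)
}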

For more information about RBAC, ServiceAccounts, Roles, ClusterRoles, and Users, see here.

By the way, GKE unfortunately doesn't give you access to the master node, so you can't set up certificate-based authentication because you don't have access to the CA key file.

-- Rico
Source: StackOverflow