I'm running Jenkins in GKE. One step of the build uses kubectl to deploy to another cluster. I have the gcloud SDK installed in the Jenkins container. The build step in question does this:
gcloud auth activate-service-account --key-file /etc/secrets/google-service-account
gcloud config set project XXXX
gcloud config set account xxxx@xxx.iam.gserviceaccount.com
gcloud container clusters get-credentials ANOTHER_CLUSTER
However, I get this error (it works as expected locally, though):
kubectl get pod
error: You must be logged in to the server (the server has asked for the client to provide credentials)
Note: I noticed that with no config at all (~/.kube is empty) I'm able to use kubectl and get access to the cluster where the pod is currently running. I'm not sure how it does that; does it use /var/run/secrets/kubernetes.io/serviceaccount/ to access the cluster?
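From what I can tell, that is kubectl's in-cluster fallback: when no kubeconfig is found, it authenticates with the pod's mounted service-account token. A quick sketch of what that amounts to, using the standard in-cluster paths and env vars (the curl is just for illustration; whether it is authorized depends on the service account's permissions):

TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
NS=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)
curl --cacert "$CACERT" -H "Authorization: Bearer $TOKEN" \
  "https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT/api/v1/namespaces/$NS/pods"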
EDIT: I haven't tested whether this works yet, but adding a service account to the target cluster and using that in Jenkins might work; see the sketch below:
http://kubernetes.io/docs/admin/authentication/ (search for "jenkins")
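A rough sketch of that approach, assuming a cluster of this era that automatically issues a token Secret for each service account; the service-account name, context name, master address, and CA path below are all hypothetical placeholders:

# on the target cluster: create a service account and read its token
kubectl create serviceaccount jenkins
SECRET=$(kubectl get serviceaccount jenkins -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl get secret "$SECRET" -o jsonpath='{.data.token}' | base64 --decode)

# in the Jenkins container: build a kubeconfig around that token
kubectl config set-cluster target --server=https://TARGET_MASTER_IP --certificate-authority=/path/to/target-ca.crt
kubectl config set-credentials jenkins --token="$TOKEN"
kubectl config set-context jenkins-ctx --cluster=target --user=jenkins
kubectl config use-context jenkins-ctx
kubectl get pod

Whether kubectl get pod is then authorized depends on the permissions granted to that service account.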
See this answer: kubectl oauth2 authentication with container engine fails
What you need to do, before running gcloud auth activate-service-account --key-file /etc/secrets/google-service-account,
is to switch gcloud to the old (client-certificate) mode of auth, either through an environment variable or through gcloud config:
export CLOUDSDK_CONTAINER_USE_CLIENT_CERTIFICATE=True
gcloud config set container/use_client_certificate True
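Putting it together, the build step from the question would then look like this (XXXX, the account, and ANOTHER_CLUSTER are the question's own placeholders; get-credentials may also need a --zone flag if compute/zone is not set in the gcloud config):

gcloud config set container/use_client_certificate True
gcloud auth activate-service-account --key-file /etc/secrets/google-service-account
gcloud config set project XXXX
gcloud config set account xxxx@xxx.iam.gserviceaccount.com
gcloud container clusters get-credentials ANOTHER_CLUSTER
kubectl get pod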
I have not succeeded, however, in using the other env var: GOOGLE_APPLICATION_CREDENTIALS.
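For reference, the way that variable would normally be wired up is to export it pointing at the JSON key, so that kubectl's gcp auth provider picks it up as Application Default Credentials; as said, this did not work for me here:

export GOOGLE_APPLICATION_CREDENTIALS=/etc/secrets/google-service-account
kubectl get pod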