How to get `provider "kubernetes"` to authenticate with `google_container_cluster`

12/6/2019

I need to deploy a cluster to GCP and set up Helm, ingress, and some other things without manually running any gcloud commands. I have tried many ways of configuring google_container_cluster, with and without certs and user/pass. I get two kinds of results:

Error: serviceaccounts is forbidden: User "system:anonymous" cannot create resource "serviceaccounts" in API group "" in the namespace "kube-system"

or

Error: serviceaccounts is forbidden: User "client" cannot create resource "serviceaccounts" in API group "" in the namespace "kube-system"

What I managed to understand is: if I generate certs, GKE will have a default user "client" corresponding to the cert it creates; otherwise it keeps the default "anonymous" user, i.e. no user at all.

My issue is that I cannot find a way to tell google_container_cluster to use a specific account, nor to tell provider "kubernetes" to use any particular user. I also cannot find a way to apply an RBAC file to the cluster without authenticating via gcloud.
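For context, the cert-based setup I tried looked roughly like this (resource names are placeholders); it is this configuration that ends up authenticating as the limited "client" user:

```hcl
provider "kubernetes" {
  load_config_file = false
  host             = google_container_cluster.gke-cluster.endpoint

  # These come from master_auth; GKE maps this cert to the
  # unprivileged "client" user, hence the RBAC "forbidden" error.
  client_certificate     = base64decode(google_container_cluster.gke-cluster.master_auth.0.client_certificate)
  client_key             = base64decode(google_container_cluster.gke-cluster.master_auth.0.client_key)
  cluster_ca_certificate = base64decode(google_container_cluster.gke-cluster.master_auth.0.cluster_ca_certificate)
}
```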

-- Aram
google-kubernetes-engine
kubernetes
terraform

1 Answer

1/9/2020

I solved this issue by updating how Terraform connects to the Kubernetes cluster. When I changed the backend to "remote" (Terraform Cloud), it stopped working and I got the same kind of error message. That is because, with the "remote" backend, Terraform doesn't use the local kubectl config.

See for example: https://github.com/terraform-providers/terraform-provider-kubernetes/issues/347

So I added a data block to get the client config:

data "google_client_config" "default" {}
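The provider block below also references a data source for the cluster itself; assuming the cluster was created elsewhere, it would look something like this (the name and location values are placeholders):

```hcl
# Look up an existing GKE cluster so the provider can read its
# endpoint and CA certificate.
data "google_container_cluster" "gke-cluster" {
  name     = "my-gke-cluster"  # placeholder cluster name
  location = "europe-west1-b"  # placeholder zone/region
}
```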

Then I updated the provider block, replacing "client_certificate" and "client_key" with "token":

provider "kubernetes" {
  load_config_file       = false  # don't read the local kubeconfig
  host                   = data.google_container_cluster.gke-cluster.endpoint
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(data.google_container_cluster.gke-cluster.master_auth.0.cluster_ca_certificate)
}
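With the token-based provider, RBAC objects can then be managed directly from Terraform with no gcloud step, e.g. creating the service account that originally failed (the name here is illustrative, such as one used for Helm v2's Tiller):

```hcl
# Create a service account in kube-system; this is the operation
# that was rejected for "anonymous"/"client" with cert-based auth.
resource "kubernetes_service_account" "tiller" {
  metadata {
    name      = "tiller"       # illustrative name
    namespace = "kube-system"
  }
}
```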

Hope this can be useful for someone else.

-- Neoh59
Source: StackOverflow