Terraform (0.11 and 0.12) apply works on one machine, but not on the other

6/6/2019

I'm working on two different Windows 10 machines where 'terraform apply' works on one machine but not on the other. Before moving to the second PC, I completely removed the infrastructure on GCP and made sure I only copied the .tf file plus the essential JSON (no state files etc.). Since I'm preparing this for a pipeline, I want to start with a clean environment.

Code snippet (full script at the end, further below):

provider "kubernetes" {
  host                   = "https://${google_container_cluster.primary.endpoint}"
  username               = "${var.username}"
  password               = "${var.password}"
  client_certificate     = "${base64decode(google_container_cluster.primary.master_auth[0].client_certificate)}"
  client_key             = "${base64decode(google_container_cluster.primary.master_auth[0].client_key)}"
  cluster_ca_certificate = "${base64decode(google_container_cluster.primary.master_auth[0].cluster_ca_certificate)}"
  version                = "~> 1.7"
}

# Namespace
resource "kubernetes_namespace" "testspace" {
  metadata {
    annotations = {
      name = "testspace"
    }
    name = "testspace"
  }
}

According to all the examples I've seen, this should work, and it does on my laptop, but on my second machine I get the following error:

Error: Failed to configure: username/password or bearer token may be set, but not both

  on Deploy_Test.tf line 1, in provider "kubernetes":
   1: provider "kubernetes" {

If I remove the username and password, the error disappears, but then I can't create a namespace because I have no authorization; the error states:

Error: namespaces is forbidden: User "client" cannot create namespaces at the cluster scope

Now I'm getting a bit lost: this code runs fine on one PC but not on the other, and I can't figure out why. Even when redeploying it again from PC one, after starting in a new, clean Terraform folder, it works. Hopefully someone has an idea where to look?

Tried the following so far:
- Updated to 0.12.1: no difference.
- Downgraded to 0.11: no difference.
- Tried all the different combinations of using the certificate or the username/password combo.

provider "google" {
  credentials = file("account.json")
  project     = var.project
  region      = var.region
  version =  "~> 2.7"
}

resource "google_container_cluster" "primary" {
  name               = "${var.name}-cluster"
  location           = var.region
  initial_node_count = 1
  master_auth {
    username = var.username
    password = var.password
    /*
    client_certificate_config {
      issue_client_certificate = true
    }
    */
  }
  node_version       = "1.11.10-gke.4"
  min_master_version = "1.11.10-gke.4"
  node_config {
    preemptible  = true
    machine_type = "n1-standard-1"

    metadata = {
      disable-legacy-endpoints = "true"
    }

    oauth_scopes = [
      "https://www.googleapis.com/auth/compute",
      "https://www.googleapis.com/auth/devstorage.read_only",
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
    ]
  }
}
provider "kubernetes" {
  host                   = "https://${google_container_cluster.primary.endpoint}"
  username               = "${var.username}"
  password               = "${var.password}"
  client_certificate     = "${base64decode(google_container_cluster.primary.master_auth[0].client_certificate)}"
  client_key             = "${base64decode(google_container_cluster.primary.master_auth[0].client_key)}"
  cluster_ca_certificate = "${base64decode(google_container_cluster.primary.master_auth[0].cluster_ca_certificate)}"
  version                = "~> 1.7"
}

# Namespace
resource "kubernetes_namespace" "testspace" {
  metadata {
    annotations = {
      name = "testspace"
    }
    name = "testspace"
  }
}
-- Spike
google-cloud-platform
google-kubernetes-engine
terraform

2 Answers

6/7/2019

Found the cause of this: I previously had Docker Desktop installed. After removal, it left some junk behind; in this case there was a leftover .kube folder in c:\users\%username% with a kubeconfig file in it, containing the certificates that were being used.
I zipped the folder's contents and removed the folder. After that, Terraform works the same as on the other machine.
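For anyone hitting the same thing, a quick way to check for such a leftover file is below. This is just a sketch: the path shown is the default location kubectl (and tooling that reads kubeconfig) looks at, which on Windows maps to %USERPROFILE%\.kube\config.

```shell
# Check for a leftover kubeconfig that kubectl (and tooling that reads the
# default kubeconfig location) may silently pick up. In Git Bash / WSL,
# $HOME/.kube/config corresponds to %USERPROFILE%\.kube\config on Windows.
if [ -f "$HOME/.kube/config" ]; then
  echo "leftover kubeconfig found"
else
  echo "no kubeconfig present"
fi
```

If the file exists, back it up (e.g. rename it) and rerun `terraform apply` to see whether the behavior changes.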

-- Spike
Source: StackOverflow

6/6/2019

You have two problems here, first:

Error: Failed to configure: username/password or bearer token may be set, but not both

is telling you that you can EITHER authenticate with username and password, OR with a bearer token. Your error appears to come from here:

provider "kubernetes" {
  host                   = "https://${google_container_cluster.primary.endpoint}"
  username               = "${var.username}"
  password               = "${var.password}"
  client_certificate     = "${base64decode(google_container_cluster.primary.master_auth[0].client_certificate)}"
  client_key             = "${base64decode(google_container_cluster.primary.master_auth[0].client_key)}"
  cluster_ca_certificate = "${base64decode(google_container_cluster.primary.master_auth[0].cluster_ca_certificate)}"
  version                = "~> 1.7"
}

Basically you're pointing at the three .pem files AND you're trying to auth with user/pass. Choose one or the other. See this page about the kubernetes provider (specifically the "Statically defined credentials" section) for details about that particular error.
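For example, a certificate-only variant of the block above would simply drop the username/password arguments (a sketch based on your own snippet, not a tested configuration):

```hcl
# Certificate-based auth only: no username/password, so the provider
# has a single, unambiguous credential source.
provider "kubernetes" {
  host                   = "https://${google_container_cluster.primary.endpoint}"
  client_certificate     = "${base64decode(google_container_cluster.primary.master_auth[0].client_certificate)}"
  client_key             = "${base64decode(google_container_cluster.primary.master_auth[0].client_key)}"
  cluster_ca_certificate = "${base64decode(google_container_cluster.primary.master_auth[0].cluster_ca_certificate)}"
  version                = "~> 1.7"
}
```

The basic-auth-only variant would instead keep host, username, password, and cluster_ca_certificate, and drop client_certificate and client_key.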

That said:

Error: namespaces is forbidden: User "client" cannot create namespaces at the cluster scope

is telling you that you don't have the permissions to do what you're trying to do. Once you can identify what it's trying to authenticate as, you can identify what is wrong. It appears that your client_certificate, client_key, and/or cluster_ca_certificate are out of date on the second computer, but not on the first. I believe it's the cluster_ca_certificate that's out of date, if your gcloud container/use_client_certificate setting is true. This answer has more information about that.

If that's not the case, we will have to investigate further.

-- J. Olsson
Source: StackOverflow