I would like to use Terraform to manage configuration for a service running on a GKE cluster that is defined in a separate Terraform script.
I created the configuration using a kubernetes_secret resource, something like the below:
resource "kubernetes_secret" "service_secret" {
metadata {
name = "my-secret"
namespace = "my-namespace"
}
data = {
username = "admin"
password = "P4ssw0rd"
}
}
I also added this google_client_config data source to configure the kubernetes provider:
data "google_client_config" "current" {
}
data "google_container_cluster" "cluster" {
name = "my-container"
location = "asia-southeast1"
zone = "asia-southeast1-a"
}
provider "kubernetes" {
host = "https://${data.google_container_cluster.cluster.endpoint}"
token = data.google_client_config.current.access_token
cluster_ca_certificate = base64decode(data.google_container_cluster.cluster.master_auth[0].cluster_ca_certificate)
}
When I apply the Terraform configuration, it fails with the error message below:
data.google_container_cluster.cluster.endpoint is null
Am I missing some steps here?
I just had the same/similar issue when trying to initialize the kubernetes provider from a google_container_cluster data source: terraform show just displayed all null values for the data source attributes. The fix for me was to specify the project in the data source, e.g.:
data "google_container_cluster" "cluster" {
name = "my-container"
location = "asia-southeast1"
zone = "asia-southeast1-a"
project = "my-project"
}
project - (Optional) The project in which the resource belongs. If it is not provided, the provider project is used.
In my case the google provider was pointing to a different project than the one containing the cluster I wanted to get info about.
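To see which project the data source falls back to, check the google provider block. A minimal sketch of that mismatch (the project ID here is hypothetical):

provider "google" {
  # Data sources that omit `project` default to this one, which in my
  # case was not the project containing the cluster.
  project = "some-other-project"
  region  = "asia-southeast1"
}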
In addition, you should be able to remove the zone attribute from that block. location should be set to the zone if it is a zonal cluster, or to the region if it is a regional cluster.
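Putting both fixes together, a sketch of the final data source, assuming my-container is a zonal cluster in asia-southeast1-a:

data "google_container_cluster" "cluster" {
  name     = "my-container"
  location = "asia-southeast1-a" # the zone for a zonal cluster, the region for a regional one
  project  = "my-project"
}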