I have a Terraform configuration that creates a GKE cluster and node pools, and then uses the kubernetes provider to set up my app. When I run this configuration on a new project where the cluster doesn't exist yet, the kubernetes provider throws the errors below:
Error: Kubernetes cluster unreachable: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
Error: Get "http://localhost/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin-binding": dial tcp [::1]:80: connect: connection refused
Error: Get "http://localhost/api/v1/namespaces/rabbitmq": dial tcp [::1]:80: connect: connection refused
If I comment out the kubernetes part, run terraform apply to create the cluster, and then uncomment the kubernetes part and apply again, it works fine and creates all the Kubernetes resources.
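Incidentally, I can get the same two-stage workaround without commenting anything out by targeting the module first (this is just the standard -target flag; module.gke is the module address from my config below):

# stage 1: create only the cluster and node pools
terraform apply -target=module.gke

# stage 2: full apply, now that the cluster exists and the provider can reach it
terraform apply

But I'd rather not have to run apply twice every time.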
I checked the docs for the kubernetes provider and it says the cluster should already exist:
There are at least 2 steps involved in scheduling your first container on a Kubernetes cluster. You need the Kubernetes cluster with all its components running somewhere and then schedule the Kubernetes resources, like Pods, Replication Controllers, Services etc.
How can I tell Terraform to wait for the cluster to be created before planning the kubernetes resources?
My main.tf looks like this:
...

module "gke" {
  source       = "./modules/gke"
  name         = var.gke_cluster_name
  project_id   = data.google_project.project.project_id
  gke_location = var.gke_zone
  ...
}

data "google_client_config" "provider" {}

provider "kubernetes" {
  version                = "~> 1.13.3"
  alias                  = "my-kuber"
  host                   = "https://${module.gke.endpoint}"
  token                  = data.google_client_config.provider.access_token
  cluster_ca_certificate = module.gke.cluster_ca_certificate
  load_config_file       = false
}

resource "kubernetes_namespace" "ns" {
  provider   = kubernetes.my-kuber
  depends_on = [module.gke]

  metadata {
    name = var.namespace
  }
}
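For completeness, inside the gke module the outputs referenced by the provider block look roughly like this (a sketch, not my exact module; the resource name is an example, and cluster_ca_certificate is base64-decoded from the cluster's master_auth):

output "endpoint" {
  value = google_container_cluster.cluster.endpoint
}

output "cluster_ca_certificate" {
  value = base64decode(google_container_cluster.cluster.master_auth[0].cluster_ca_certificate)
}

So the provider configuration itself depends on attributes of the cluster that don't exist until the module has been applied.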