When I use Terraform to create a cluster in GKE everything works fine and as expected.
After the cluster is created, I want to then use Terraform to deploy a workload.
My issue is how to point Terraform at the correct cluster once it has been created; I'm not sure I understand the best way of achieving this.
I want to automate the retrieval of the cluster's kubeconfig file, the file that is generally stored at ~/.kube/config. That file is updated when users manually run gcloud container clusters get-credentials to authenticate to the correct cluster.
I am aware that if this file is present on the host machine (the one Terraform is running on), it's possible to point at it to authenticate to the cluster, like so:
provider "kubernetes" {
  # Pick up the kubeconfig from the local system's kubectl configuration.
  config_path = "~/.kube/config"
}
However, the command that generates the kubeconfig requires the Cloud SDK to be installed on the same machine Terraform is running on, and running it manually doesn't exactly seem very elegant.
I am sure I must be missing something in how to achieve this.
Is there a better way to retrieve the kubeconfig file via Terraform from a cluster created by Terraform?
Actually, there is another way to access a freshly created GKE cluster:
data "google_client_config" "client" {}
provider "kubernetes" {
load_config_file = false
host = google_container_cluster.main.endpoint
cluster_ca_certificate = base64decode(google_container_cluster.main.master_auth.0.cluster_ca_certificate)
token = data.google_client_config.client.access_token
}
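For context, here is a minimal sketch of how that provider block might fit together with the cluster resource and a workload; the cluster settings and the namespace below are hypothetical, not taken from the original setup:

# Hypothetical cluster resource that the provider block above refers to.
resource "google_container_cluster" "main" {
  name               = "example-cluster"   # hypothetical name
  location           = "europe-west1-b"    # hypothetical location
  initial_node_count = 1
}

# Example workload: a namespace created through the token-based provider above.
resource "kubernetes_namespace" "app" {
  metadata {
    name = "my-app"   # hypothetical namespace
  }

  # Make sure the cluster exists before Terraform talks to its API.
  depends_on = [google_container_cluster.main]
}

Because the token comes from data.google_client_config, no kubeconfig file and no Cloud SDK are needed on the machine running Terraform.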
Basically: in one step, create your cluster and export the kubeconfig file (to S3, for example). In another step, retrieve the file and move it to the default folder. Terraform should work following these steps, and you can then apply your objects to the previously created cluster.
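As a rough sketch of the "export the kubeconfig" step, assuming the cluster resource is named google_container_cluster.main and using the hashicorp/local provider (the output path and context names are hypothetical), something like this could render a kubeconfig that a later step uploads and reuses:

data "google_client_config" "client" {}

# Render a kubeconfig for the freshly created cluster.
# Note: the access token is short-lived, so the file has to be regenerated regularly.
resource "local_file" "kubeconfig" {
  filename        = "${path.module}/kubeconfig"   # hypothetical output path
  file_permission = "0600"
  content = yamlencode({
    apiVersion = "v1"
    kind       = "Config"
    clusters = [{
      name = "gke"
      cluster = {
        server                       = "https://${google_container_cluster.main.endpoint}"
        "certificate-authority-data" = google_container_cluster.main.master_auth[0].cluster_ca_certificate
      }
    }]
    users = [{
      name = "gke"
      user = { token = data.google_client_config.client.access_token }
    }]
    contexts = [{
      name    = "gke"
      context = { cluster = "gke", user = "gke" }
    }]
    "current-context" = "gke"
  })
}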
I am deploying using a GitLab CI pipeline: I have one repository with the code for the k8s cluster (infra) and another with the k8s objects. The first pipeline triggers the second.