I have developed a Terraform configuration to create a Kubernetes cluster on GKE.
After the cluster is created successfully, I have a set of YAML files to apply to the cluster.
How can I invoke the command below from my Terraform configuration?
kubectl create <.yaml>
You can use Terraform's local-exec provisioner to do this. The example below is from the provisioner documentation:

resource "aws_instance" "web" {
  # ...

  provisioner "local-exec" {
    command = "echo ${aws_instance.web.private_ip} >> private_ips.txt"
  }
}
Ref: https://www.terraform.io/docs/provisioners/local-exec.html
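Applied to this question, a local-exec provisioner on a null_resource can run kubectl once the GKE cluster exists. A minimal sketch, assuming the cluster resource is named google_container_cluster.primary and the manifest lives at manifests/app.yaml (both names are placeholders):

```hcl
# Run kubectl after the cluster has been created.
resource "null_resource" "apply_manifests" {
  # Re-run if the cluster is replaced.
  triggers = {
    cluster = google_container_cluster.primary.id
  }

  provisioner "local-exec" {
    # Assumes kubectl is installed and the kubeconfig already points at this
    # cluster, e.g. via `gcloud container clusters get-credentials`.
    command = "kubectl apply -f ${path.module}/manifests/app.yaml"
  }

  depends_on = [google_container_cluster.primary]
}
```

Note that this makes Terraform depend on a local kubectl binary and kubeconfig, which is exactly the limitation the provider-based answers below avoid.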
You can use the Terraform kubectl third-party provider. Follow the installation instructions here: Kubectl Terraform Provider

Then simply define a kubectl_manifest resource pointing to your YAML file, like:
# Get your cluster info
data "google_container_cluster" "my_cluster" {
  name     = "my-cluster"
  location = "us-east1-a"
}

# Token for the current Google credentials
data "google_client_config" "default" {}

# Same parameters as the kubernetes provider
provider "kubectl" {
  load_config_file       = false
  host                   = "https://${data.google_container_cluster.my_cluster.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(data.google_container_cluster.my_cluster.master_auth.0.cluster_ca_certificate)
}

resource "kubectl_manifest" "my_service" {
  yaml_body = file("${path.module}/my_service.yaml")
}
This approach has the big advantage that everything is obtained dynamically and does not rely on any local config file (very important if you run Terraform in a CI/CD server or to manage a multicluster environment).
The kubectl provider also offers data sources that make handling multiple files very easy. From the kubectl_filename_list docs:
data "kubectl_filename_list" "manifests" {
  pattern = "./manifests/*.yaml"
}

resource "kubectl_manifest" "test" {
  count     = length(data.kubectl_filename_list.manifests.matches)
  yaml_body = file(element(data.kubectl_filename_list.manifests.matches, count.index))
}
Extra points: you can templatize your YAML files. For example, I interpolate the cluster name in the multi-resource autoscaler YAML file as follows:

resource "kubectl_manifest" "autoscaler" {
  yaml_body = templatefile("${path.module}/autoscaler.yaml", { cluster_name = var.cluster_name })
}
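The YAML file itself then uses templatefile's ${} placeholder syntax. A hypothetical fragment (the resource names are purely illustrative, not from the original answer):

```yaml
# Hypothetical fragment of autoscaler.yaml; ${cluster_name} is
# substituted by Terraform's templatefile() before kubectl_manifest applies it.
apiVersion: v1
kind: ConfigMap
metadata:
  name: autoscaler-config
  namespace: kube-system
data:
  clusterName: ${cluster_name}
```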
There are a couple of ways to achieve what you want.
You can use the Terraform template_file data source together with a null_resource.
Notice that the trigger re-runs the kubectl command whenever you modify the template (you may want to replace create with apply).
data "template_file" "your_template" {
  template = file("${path.module}/templates/<.yaml>")
}

resource "null_resource" "your_deployment" {
  triggers = {
    manifest_sha1 = sha1(data.template_file.your_template.rendered)
  }

  provisioner "local-exec" {
    command = "kubectl create -f - <<EOF\n${data.template_file.your_template.rendered}\nEOF"
  }
}
But maybe the best way is to use the Kubernetes provider.
There are two ways to configure it: by default it uses your current kubeconfig context (kubectl config current-context), or you can define the credentials statically:

provider "kubernetes" {
  host = "https://104.196.242.174"

  client_certificate     = file("~/.kube/client-cert.pem")
  client_key             = file("~/.kube/client-key.pem")
  cluster_ca_certificate = file("~/.kube/cluster-ca-cert.pem")
}
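Since the question targets GKE, the credentials can also be obtained dynamically instead of from local PEM files. A sketch, assuming a google_container_cluster data source like the one in the kubectl provider answer above, plus a google_client_config data source for an OAuth token (both names are illustrative):

```hcl
# Hypothetical dynamic configuration for GKE; no local kubeconfig needed.
data "google_client_config" "default" {}

provider "kubernetes" {
  host                   = "https://${data.google_container_cluster.my_cluster.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(data.google_container_cluster.my_cluster.master_auth.0.cluster_ca_certificate)
}
```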
Once that is done, you can create your own deployments quite easily. For a basic pod, it would be something as simple as:

resource "kubernetes_pod" "hello_world" {
  metadata {
    name = "hello-world"
  }

  spec {
    container {
      image = "my_account/hello-world:1.0.0"
      name  = "hello-world"
    }

    image_pull_secrets {
      name = "docker-hub"
    }
  }
}
The best way would be to use the Kubernetes provider for Terraform.