I have a null_resource
that is used to install/uninstall Kubernetes YAML manifests, and looks as follows:
resource "null_resource" "manifest_provisioner" {
  count = var.enabled ? 1 : 0

  triggers = {
    manifest_file = <actual_content_of_the_yaml_manifest>
  }

  # Create-time provisioner: pipe the manifest to kubectl on stdin via a heredoc
  provisioner "local-exec" {
    command = "kubectl apply -f - <<EOF\n${self.triggers.manifest_file}\nEOF"
  }

  # Destroy-time provisioner: delete the same manifest on teardown
  provisioner "local-exec" {
    when    = destroy
    command = "kubectl delete -f - <<EOF\n${self.triggers.manifest_file}\nEOF"
  }
}
When creating a new manifest or destroying an existing one, the resource works as expected.
However, whenever I update an existing manifest, the changed triggers value forces replacement, so Terraform first destroys and then recreates the resource. As a result, every update incurs a full deletion of the Kubernetes objects before they are re-applied, which is problematic.
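For what it's worth, I experimented with reordering the replacement via the lifecycle meta-argument. A minimal sketch of what I tried (the rest of the resource is unchanged from above):

resource "null_resource" "manifest_provisioner" {
  count = var.enabled ? 1 : 0

  triggers = {
    manifest_file = <actual_content_of_the_yaml_manifest>
  }

  # Attempted fix: create the replacement instance before destroying the old one
  lifecycle {
    create_before_destroy = true
  }

  # ... provisioners as above ...
}

But as far as I can tell, the destroy-time provisioner still runs against the old instance after the new one is created, and since the objects in the manifest keep the same names, kubectl delete removes the freshly applied objects anyway.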
How can I instruct this resource to run only the create-time provisioner when updating an existing manifest?