Following the de-facto standard way of conditionally adding and removing blocks (1, 2, 3), I am running into a difficulty generating a plan when the block must be removed.
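For context, that pattern generates zero or one copies of a nested block depending on whether a value is set. A minimal sketch (the block and variable names here are placeholders, not taken from my actual config):

```hcl
dynamic "some_block" {
  # compact() drops empty strings, so an unset variable produces an
  # empty list and the block is not generated at all
  for_each = compact([var.maybe_value])
  content {
    name = var.maybe_value
  }
}
```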
I have the following Terraform config. Note the `dynamic` block:
provider "kubernetes" {}
variable secret {
type = string
}
resource "kubernetes_deployment" "sample-deployment" {
metadata {
name = "sample-deployment"
labels = {
app = "api"
}
}
spec {
selector {
match_labels = {
app = "sample"
}
}
template {
metadata {
labels = {
app = "sample"
}
}
spec {
dynamic image_pull_secrets {
for_each = compact([var.secret])
content {
name = var.secret
}
}
container {
name = "httpenv"
image = "jpetazzo/httpenv:latest"
}
}
}
}
}
Then I run three commands, one after another.

Initially create the resource:

```sh
terraform apply -var secret=
```

The Deployment is created, and `image_pull_secrets` is not in the diff.
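That much is expected: `compact()` drops the empty string, so the `dynamic` block iterates zero times. This is easy to check in `terraform console` (output below is roughly what 0.12 prints):

```
$ terraform console
> compact([""])
[]
> compact(["my-secret"])
[
  "my-secret",
]
```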
Set the secret and update the resource:

```sh
terraform apply -var secret=my-secret
```

The diff for the update contains:

```
+ image_pull_secrets {
    + name = "my-secret"
  }
```
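That is, with the variable set, the `dynamic` block expands to the equivalent of writing the nested block literally:

```hcl
image_pull_secrets {
  name = "my-secret"
}
```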
Remove the secret and update the resource again:

```sh
terraform apply -var secret=
```

This time the plan shows no changes at all:

```
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
```
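Meanwhile the secret should still be recorded in state, which can be checked with (the `grep` filter is just for illustration):

```sh
terraform state show kubernetes_deployment.sample-deployment | grep -A 2 image_pull_secrets
```

If the block still appears there with `name = "my-secret"`, Terraform knows about it, yet the plan proposes no change to remove it.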
Clearly I'm missing something, as otherwise I would imagine this issue would have been brought up by now. What am I missing?

The version of Terraform I'm using is v0.12.16.
Update.

After running:

```sh
env TF_LOG=TRACE TF_LOG_PATH=logs.txt terraform apply -var secret=
```

I noticed this in `logs.txt`:
```
2019/12/01 12:17:34 [WARN] Provider "kubernetes" produced an invalid plan for kubernetes_deployment.sample-deployment, but we are tolerating it because it is using the legacy plugin SDK.
    The following problems may be the cause of any confusing errors from downstream operations:
      - .spec[0].template[0].spec[0].image_pull_secrets: block count in plan (1) disagrees with count in config (0)
      - .spec[0].template[0].spec[0].container[0].resources: block count in plan (1) disagrees with count in config (0)
      - .spec[0].strategy: block count in plan (1) disagrees with count in config (0)
```
Could this be related to the issue I'm facing? It looks like the parts of the message that mention block counts come from Terraform core rather than from the provider, so the issue I'm seeing may not be strictly related to the Kubernetes provider. Or is it?