Is it actually possible to use a Kubernetes secret in a Terraform-deployed application? I am seeing some odd behaviour.
I define a cluster with an appropriate node pool, a config map and a secret. The secret contains the service account key JSON data. I can then deploy my application using `kubectl apply -f myapp-deploy.yaml` and it works fine. That tells me the cluster is all good, including the secret and the config map. However, when I try to deploy with Terraform I get an error in what looks like the service account fetch:
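For reference, the secret is created in Terraform roughly like this (a minimal sketch; the resource label, data key and local file path are assumptions — only the secret name "myapp" comes from the deployment config below):

```hcl
# Sketch of the Terraform secret definition. The resource label, data key
# and file path are assumptions; the secret name "myapp" matches the
# secretName the deployment mounts as a volume.
resource "kubernetes_secret" "myapp" {
  metadata {
    name = "myapp"
  }

  data = {
    "myapp-sa.json" = file("${path.module}/myapp-sa.json")
  }
}
```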
2019-07-19 06:20:45.497 INFO [myapp,,,] 1 --- [main] b.c.PropertySourceBootstrapConfiguration : Located property source: SecretsPropertySource {name='secrets.myapp.null'}
2019-07-19 06:20:45.665 WARN [myapp,,,] 1 --- [main] io.fabric8.kubernetes.client.Config : Error reading service account token from: [/var/run/secrets/kubernetes.io/serviceaccount/token]. Ignoring.
2019-07-19 06:20:45.677 INFO [myapp,,,] 1 --- [main] n.c.m.s.myappApplication : The following profiles are active: test-dev
The middle line is the interesting one: it seems to be trying to read the service account token from the wrong place.
I've walked the relevant settings from my yaml file over to my tf file but maybe I missed something. Here's what the yaml file looks like:
...
        env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: "/var/run/secret/cloud.google.com/myapp-sa.json"
        volumeMounts:
        - name: "service-account"
          mountPath: "/var/run/secret/cloud.google.com"
        ports:
        - containerPort: 8080
      volumes:
      - name: "service-account"
        secret:
          secretName: "myapp"
...
And this yaml basically just works. Now the equivalent in my tf file looks like:
...
  env {
    name  = "GOOGLE_APPLICATION_CREDENTIALS"
    value = "/var/run/secret/cloud.google.com/myapp-sa.json"
  }
  volume_mount {
    name       = "myapp-sa"
    mount_path = "/var/run/secret/cloud.google.com"
    sub_path   = ""
  }
}
volume {
  name = "myapp-sa"
  secret {
    secret_name = "myapp"
  }
}
...
And this gives the above error. It seems to look in /var/run/secrets/kubernetes.io/serviceaccount/token
for the service account token instead of where I told it to, but only when deployed by Terraform. I'm deploying the same image into the same cluster with the same config map, so there must be something wrong with my tf somewhere. I've tried importing from the yaml deploy but I couldn't see anything important that I missed.
FWIW this is a Spring Boot application running on GKE.
Hopefully someone knows the answer.
More info: I turned on debugging for `io.fabric8.kubernetes` and reran both scenarios, i.e. Terraform and yaml file. Here are the relevant log snippets:
Terraform:
2019-07-23 23:03:39.189 DEBUG [snakecharmer,,,] 1 --- [ main] io.fabric8.kubernetes.client.Config : Trying to configure client from Kubernetes config...
2019-07-23 23:03:39.268 DEBUG [snakecharmer,,,] 1 --- [ main] io.fabric8.kubernetes.client.Config : Did not find Kubernetes config at: [/root/.kube/config]. Ignoring.
2019-07-23 23:03:39.274 DEBUG [snakecharmer,,,] 1 --- [ main] io.fabric8.kubernetes.client.Config : Trying to configure client from service account...
2019-07-23 23:03:39.274 DEBUG [snakecharmer,,,] 1 --- [ main] io.fabric8.kubernetes.client.Config : Found service account host and port: 10.44.0.1:443
2019-07-23 23:03:39.282 DEBUG [snakecharmer,,,] 1 --- [ main] io.fabric8.kubernetes.client.Config : Did not find service account ca cert at: [/var/run/secrets/kubernetes.io/serviceaccount/ca.crt].
2019-07-23 23:03:39.285 WARN [snakecharmer,,,] 1 --- [ main] io.fabric8.kubernetes.client.Config : Error reading service account token from: [/var/run/secrets/kubernetes.io/serviceaccount/token]. Ignoring.
2019-07-23 23:03:39.291 DEBUG [snakecharmer,,,] 1 --- [ main] io.fabric8.kubernetes.client.Config : Trying to configure client namespace from Kubernetes service account namespace path...
2019-07-23 23:03:39.295 DEBUG [snakecharmer,,,] 1 --- [ main] io.fabric8.kubernetes.client.Config : Did not find service account namespace at: [/var/run/secrets/kubernetes.io/serviceaccount/namespace]. Ignoring.
Yaml:
2019-07-23 23:14:53.374 DEBUG [snakecharmer,,,] 1 --- [ main] io.fabric8.kubernetes.client.Config : Trying to configure client namespace from Kubernetes service account namespace path...
2019-07-23 23:14:53.375 DEBUG [snakecharmer,,,] 1 --- [ main] io.fabric8.kubernetes.client.Config : Found service account namespace at: [/var/run/secrets/kubernetes.io/serviceaccount/namespace].
2019-07-23 23:14:53.376 DEBUG [snakecharmer,,,] 1 --- [ main] io.fabric8.kubernetes.client.Config : Trying to configure client from Kubernetes config...
2019-07-23 23:14:53.377 DEBUG [snakecharmer,,,] 1 --- [ main] io.fabric8.kubernetes.client.Config : Did not find Kubernetes config at: [/root/.kube/config]. Ignoring.
2019-07-23 23:14:53.378 DEBUG [snakecharmer,,,] 1 --- [ main] io.fabric8.kubernetes.client.Config : Trying to configure client from service account...
2019-07-23 23:14:53.378 DEBUG [snakecharmer,,,] 1 --- [ main] io.fabric8.kubernetes.client.Config : Found service account host and port: 10.44.0.1:443
2019-07-23 23:14:53.383 DEBUG [snakecharmer,,,] 1 --- [ main] io.fabric8.kubernetes.client.Config : Found service account ca cert at: [/var/run/secrets/kubernetes.io/serviceaccount/ca.crt].
2019-07-23 23:14:53.384 DEBUG [snakecharmer,,,] 1 --- [ main] io.fabric8.kubernetes.client.Config : Found service account token at: [/var/run/secrets/kubernetes.io/serviceaccount/token].
2019-07-23 23:14:53.384 DEBUG [snakecharmer,,,] 1 --- [ main] io.fabric8.kubernetes.client.Config : Trying to configure client namespace from Kubernetes service account namespace path...
2019-07-23 23:14:53.384 DEBUG [snakecharmer,,,] 1 --- [ main] io.fabric8.kubernetes.client.Config : Found service account namespace at: [/var/run/secrets/kubernetes.io/serviceaccount/namespace].
It looks like the yaml deploy finds what it needs at /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
etc., and the Terraform deploy doesn't. It is as if there is a phantom volume mount that is missing in **Terraform**.
I found the fix. The Terraform deploy defaults `automount_service_account_token` to `false`,
but the yaml default is `true`,
and that makes all the difference.
The switch goes in the `template.spec` section of the `kubernetes_deployment`
in my tf file, which now looks like this snippet:
...
spec {
  restart_policy                  = "Always"
  automount_service_account_token = true
  container {
    port {
      container_port = 8080
      protocol       = "TCP"
    }
...
Setting `automount_service_account_token = true`
is the fix, and the application comes up fine with that in place.
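For completeness, here is a minimal sketch of the fixed deployment pulling the snippets together. The env, volume and port values come from the configs above; the resource label, selector labels, container name and image are assumptions:

```hcl
# Minimal sketch of the fixed kubernetes_deployment. Labels, container
# name and image are assumptions; everything else matches the snippets above.
resource "kubernetes_deployment" "myapp" {
  metadata {
    name = "myapp"
  }

  spec {
    selector {
      match_labels = { app = "myapp" }
    }

    template {
      metadata {
        labels = { app = "myapp" }
      }

      spec {
        restart_policy = "Always"

        # The fix: kubectl defaults this to true, but the Terraform
        # provider defaults it to false, so the fabric8 client never
        # finds /var/run/secrets/kubernetes.io/serviceaccount/*.
        automount_service_account_token = true

        container {
          name  = "myapp"
          image = "gcr.io/my-project/myapp:latest" # assumed image

          env {
            name  = "GOOGLE_APPLICATION_CREDENTIALS"
            value = "/var/run/secret/cloud.google.com/myapp-sa.json"
          }

          volume_mount {
            name       = "myapp-sa"
            mount_path = "/var/run/secret/cloud.google.com"
          }

          port {
            container_port = 8080
            protocol       = "TCP"
          }
        }

        volume {
          name = "myapp-sa"
          secret {
            secret_name = "myapp"
          }
        }
      }
    }
  }
}
```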