Kubernetes pods created via terraform don't have a /var/run/secrets
folder, but a pod created according to the hello-minikube
tutorial does - why is that, and how can I fix it?
Motivation: I need traefik to be able to talk to the k8s cluster.
I have set up a local Kubernetes cluster w/ minikube and set up terraform to work with that cluster.
To set up traefik, you need to create an Ingress and a Deployment, which are not yet supported by terraform. Based on the workaround posted in that issue, I use an even simpler module to execute YAML files via terraform:
# A tf-module that can create Kubernetes resources from YAML file descriptions.
variable "name" {}
variable "file_name" {}

resource "null_resource" "kubernetes_resource" {
  triggers {
    configuration = "${var.file_name}"
  }

  provisioner "local-exec" {
    command = "kubectl apply -f ${var.file_name}"
  }
}
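For illustration, an invocation of that module would look something like this (the module source path and file name are placeholders, not my actual layout):

# Hypothetical usage of the module above; source and file_name are example values.
module "traefik_ingress" {
  source    = "./modules/kubectl-apply"
  name      = "traefik-ingress"
  file_name = "traefik-ingress.yaml"
}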
The resources created in this way show up correctly in the k8s dashboard.
However, the ingress controller's pod logs the following error:
time="2017-12-30T13:49:10Z"
level=error
msg="Error starting provider *kubernetes.Provider: failed to create in-cluster
configuration: open /var/run/secrets/kubernetes.io/serviceaccount/token:
no such file or directory"
(line breaks added for readability)
After /bin/bash-ing into the pods, I realize none of them have a path /var/run/secrets, except for the hello-minikube pod from the minikube tutorial, which was created with just two kubectl commands:
$ kubectl run hello-minikube --image=k8s.gcr.io/echoserver:1.4 --port=8080
$ kubectl expose deployment hello-minikube --type=NodePort
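(For anyone reproducing the check: a command along these lines, with the pod name as a placeholder, shows whether the service-account files are mounted; on a pod that has the token it should list ca.crt, namespace and token.)

$ kubectl exec -it <pod-name> -- ls /var/run/secrets/kubernetes.io/serviceaccount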
Compared to the script in the terraform issue, I removed kubectl params like --certificate-authority=${var.cluster_ca_certificate}, but then again, I didn't provide this when setting up hello-minikube either. The original script doesn't work as-is anyway, since I couldn't figure out how to access the provider details from ~/.kube/config in terraform.
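For reference, a minimal sketch of what pointing the terraform kubernetes provider at the kubeconfig might look like; the config_path and the minikube context name are assumptions based on a default setup, not something I have verified against my cluster:

# Sketch only: assumes the default kubeconfig location and the standard minikube context.
provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "minikube"
}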
I tried to find out whether hello-minikube is doing something fancy, but I couldn't find its source code.
Do I have to do something specific to make the token available? According to traefik issue 611, the in-cluster configuration should be set up automatically, but as it currently stands I'm stuck.
The host system is a Windows 10 machine:
> minikube version
minikube version: v0.24.1
> kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T21:07:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"0b9efaeb34a2fc51ff8e4d34ad9bc6375459c4a4", GitTreeState:"clean", BuildDate:"2017-11-29T22:43:34Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"linux/amd64"}
There are some related questions and GitHub issues, but they haven't helped me fix the problem either.
First of all, thank you for an amazing question write-up; I would use this question as a template for how others should ask!
Can you check the automountServiceAccountToken field in the PodSpec and see if it is true?
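A quick way to inspect that field on the running pod would be something like this (the pod name is a placeholder); if it prints nothing, the field is unset and the ServiceAccount's own setting, which defaults to mounting the token, applies:

$ kubectl get pod <traefik-pod-name> -o jsonpath='{.spec.automountServiceAccountToken}'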
The only other constructive question I know to ask is whether its serviceAccountName points to an existing ServiceAccount; I would hope a bogus one would bomb the deploy, but I don't know for sure what will happen in that case.