I'm trying to configure the Kubernetes provider in Terraform, but I haven't been able to get it working so far. EKS uses the Heptio authenticator, so I don't have certificate paths that I can give to the Kubernetes provider. What is the right way to accomplish this?
I already tried something like this:
provider "kubernetes" {
config_context_auth_info = "context1"
config_context_cluster = "kubernetes"
}
This results in:
Error: Error applying plan:
1 error(s) occurred:
* kubernetes_namespace.example: 1 error(s) occurred:
* kubernetes_namespace.example: Post http://localhost/api/v1/namespaces: dial tcp [::1]:80: getsockopt: connection refused
I have a ~/.kube/config in place; what could I be missing?
For the EKS provider using Terraform:
data "aws_region" "current" {}
data "aws_availability_zones" "available" {}
provider "kubernetes" {
config_context = "aws-test-terraform"
}
Such behaviour could be caused by a known core bug: "core: No interpolation for cross-provider dependencies" (#12393).
There is an issue on the Terraform GitHub that describes a similar case with the same error: #12869.
It is about GKE, but I suspect it could affect EKS as well.
Here is a link to a gist with an example of using the kubernetes provider.
It is also related to GKE, but I believe that with slight changes it could be applied to EKS; a rough sketch of such an adaptation follows.
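As a rough illustration only: assuming your EKS cluster is defined in the same configuration as a hypothetical resource named aws_eks_cluster.example, and that your kubernetes provider version supports the exec block, the adapted provider configuration might look something like this, using the IAM authenticator instead of certificate paths:

provider "kubernetes" {
  host                   = "${aws_eks_cluster.example.endpoint}"
  cluster_ca_certificate = "${base64decode(aws_eks_cluster.example.certificate_authority.0.data)}"

  # On the 1.x provider this prevents falling back to ~/.kube/config (removed in 2.0).
  load_config_file = false

  # Fetch a token via the IAM authenticator instead of client certificates.
  # Older EKS setups ship the binary as "heptio-authenticator-aws" rather than
  # "aws-iam-authenticator", and the api_version may need to be v1beta1
  # depending on the authenticator version.
  exec {
    api_version = "client.authentication.k8s.io/v1alpha1"
    command     = "aws-iam-authenticator"
    args        = ["token", "-i", "${aws_eks_cluster.example.name}"]
  }
}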
Consider also checking another good answer on Stack Overflow related to your question.
In brief, the solution is to create the Kubernetes cluster in the first stage and then create the Kubernetes objects in the second stage.
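A minimal sketch of what the second stage could look like, assuming the cluster created in the first stage is named aws-test-terraform (the name is taken from the context in your config and may well differ) and that your AWS provider version has the aws_eks_cluster and aws_eks_cluster_auth data sources:

data "aws_eks_cluster" "cluster" {
  name = "aws-test-terraform"
}

data "aws_eks_cluster_auth" "cluster" {
  name = "aws-test-terraform"
}

provider "kubernetes" {
  host                   = "${data.aws_eks_cluster.cluster.endpoint}"
  cluster_ca_certificate = "${base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)}"
  token                  = "${data.aws_eks_cluster_auth.cluster.token}"

  # 1.x provider setting; removed in provider 2.0.
  load_config_file = false
}

resource "kubernetes_namespace" "example" {
  metadata {
    name = "example"
  }
}

Because the cluster already exists when the second stage runs, the provider gets a real endpoint and token instead of falling back to localhost, which is what produces the "dial tcp [::1]:80: connection refused" error.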