Tiller: dial tcp 127.0.0.1:80: connect: connection refused

10/24/2019

Since I upgraded the versions in my EKS Terraform script, I have been getting error after error.

Currently I am stuck on these errors:

Error: Get http://localhost/api/v1/namespaces/kube-system/serviceaccounts/tiller: dial tcp 127.0.0.1:80: connect: connection refused

Error: Get http://localhost/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/tiller: dial tcp 127.0.0.1:80: connect: connection refused

The script works fine with the old versions and I can still use it that way, but I am trying to upgrade the cluster version.

provider.tf

provider "aws" {
  region  = "${var.region}"
  version = "~> 2.0"

  assume_role {
    role_arn = "arn:aws:iam::${var.target_account_id}:role/terraform"
  }
}

provider "kubernetes" {
  config_path = ".kube_config.yaml"
  version = "~> 1.9"
}

provider "helm" {
  service_account = "${kubernetes_service_account.tiller.metadata.0.name}"
  namespace       = "${kubernetes_service_account.tiller.metadata.0.namespace}"

  kubernetes {
    config_path = ".kube_config.yaml"
  }
}

terraform {
  backend "s3" {

  }
}

data "terraform_remote_state" "state" {
  backend = "s3"
  config = {
    bucket         = "${var.backend_config_bucket}"
    region         = "${var.backend_config_bucket_region}"
    key            = "${var.name}/${var.backend_config_tfstate_file_key}" # var.name == CLIENT
    role_arn       = "${var.backend_config_role_arn}"
    skip_region_validation = true
    dynamodb_table = "terraform_locks"
    encrypt        = "true"
  }
}

kubernetes.tf

resource "kubernetes_service_account" "tiller" {
  #depends_on = ["module.eks"]

  metadata {
    name      = "tiller"
    namespace = "kube-system"
  }

  automount_service_account_token = "true"
}

resource "kubernetes_cluster_role_binding" "tiller" {
  depends_on = ["module.eks"]

  metadata {
    name = "tiller"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "cluster-admin"
  }

  subject {
    kind = "ServiceAccount"
    name = "tiller"

    api_group = ""
    namespace = "kube-system"
  }
}

Terraform version: 0.12.12, EKS module version: 6.0.2

-- gamechanger17
kubernetes
kubernetes-helm
terraform
terraform-provider-aws

1 Answer

10/25/2019

It means the server: entry in your .kube_config.yaml is pointing at the wrong port (and perhaps even the wrong protocol, since normal Kubernetes communication travels over HTTPS and is secured via mutual TLS authentication), or there is no longer a proxy listening on localhost:80, or perhaps the --insecure-port used to be 80 and is now 0 (as is strongly recommended).

Regrettably, without more specifics, no one can guess what the correct value was or what it should be changed to.
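
One way to sidestep a stale or truncated kubeconfig entirely is to point the kubernetes provider at the live EKS endpoint through data sources, so the host, CA certificate, and token are always taken from the cluster itself. The following is only a minimal sketch, assuming AWS provider 2.x and kubernetes provider 1.x; var.cluster_name is a hypothetical variable and should be replaced with whatever your EKS module exposes:

# Look up the cluster endpoint and an auth token instead of reading .kube_config.yaml
data "aws_eks_cluster" "cluster" {
  name = var.cluster_name # hypothetical; use your module's cluster name/output
}

data "aws_eks_cluster_auth" "cluster" {
  name = var.cluster_name
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false # never fall back to a kubeconfig (and its localhost:80 default)
  version                = "~> 1.9"
}

The helm provider's nested kubernetes {} block accepts the same connection arguments, so it can be wired up the same way rather than reading the kubeconfig from disk.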

-- mdaniel
Source: StackOverflow