I am trying to use the Terraform Helm provider (https://www.terraform.io/docs/providers/helm/index.html) to deploy a workload to a GKE cluster.
I am more or less following Google's example - https://github.com/GoogleCloudPlatform/terraform-google-examples/blob/master/example-gke-k8s-helm/helm.tf - but I want to use RBAC, so I am creating the service account manually.
My helm.tf looks like this:
variable "helm_version" {
default = "v2.13.1"
}
data "google_client_config" "current" {}
provider "helm" {
tiller_image = "gcr.io/kubernetes-helm/tiller:${var.helm_version}"
install_tiller = false # Temporary
kubernetes {
host = "${google_container_cluster.data-dome-cluster.endpoint}"
token = "${data.google_client_config.current.access_token}"
client_certificate = "${base64decode(google_container_cluster.data-dome-cluster.master_auth.0.client_certificate)}"
client_key = "${base64decode(google_container_cluster.data-dome-cluster.master_auth.0.client_key)}"
cluster_ca_certificate = "${base64decode(google_container_cluster.data-dome-cluster.master_auth.0.cluster_ca_certificate)}"
}
}
resource "helm_release" "nginx-ingress" {
name = "ingress"
chart = "stable/nginx-ingress"
values = [<<EOF
rbac:
create: false
controller:
stats:
enabled: true
metrics:
enabled: true
service:
annotations:
cloud.google.com/load-balancer-type: "Internal"
externalTrafficPolicy: "Local"
EOF
]
depends_on = [
"google_container_cluster.data-dome-cluster",
]
}
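For completeness, google_container_cluster.data-dome-cluster is defined elsewhere in my configuration; a minimal sketch of what it looks like (the name, zone, and node count here are placeholders):

resource "google_container_cluster" "data-dome-cluster" {
  name               = "data-dome-cluster" # placeholder
  zone               = "europe-west1-b"    # placeholder
  initial_node_count = 3                   # placeholder
}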
I am getting the following error:
Error: Error applying plan:
1 error(s) occurred:
* module.data-dome-cluster.helm_release.nginx-ingress: 1 error(s) occurred:
* helm_release.nginx-ingress: error creating tunnel: "pods is forbidden: User \"client\" cannot list pods in the namespace \"kube-system\""
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
This happens after I manually created the Helm RBAC resources and installed Tiller.
I also tried setting install_tiller = true before, with exactly the same error once Tiller was installed.
"kubectl get pods" works without any problems.
What is this user "client", and why is it forbidden from accessing the cluster?
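For reference, the install_tiller = true attempt only changed the provider flags, roughly like this (the kubernetes block stayed the same as above):

provider "helm" {
  tiller_image   = "gcr.io/kubernetes-helm/tiller:${var.helm_version}"
  install_tiller = true

  # kubernetes { ... } block identical to the one above
}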
Thanks
Creating resources for the service account and cluster role binding explicitly works for me. As far as I can tell, the "client" user in the error is the common name on the client certificate that GKE issues through master_auth, and on an RBAC-enabled cluster that certificate has no permissions by default. Here is what I ended up with:
resource "kubernetes_service_account" "helm_account" {
depends_on = [
"google_container_cluster.data-dome-cluster",
]
metadata {
name = "${var.helm_account_name}"
namespace = "kube-system"
}
}
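The var.helm_account_name variable is declared alongside this; a minimal version (the default is just the name I picked):

variable "helm_account_name" {
  default = "tiller" # any service account name works; "tiller" is the convention
}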
resource "kubernetes_cluster_role_binding" "helm_role_binding" {
metadata {
name = "${kubernetes_service_account.helm_account.metadata.0.name}"
}
role_ref {
api_group = "rbac.authorization.k8s.io"
kind = "ClusterRole"
name = "cluster-admin"
}
subject {
api_group = ""
kind = "ServiceAccount"
name = "${kubernetes_service_account.helm_account.metadata.0.name}"
namespace = "kube-system"
}
provisioner "local-exec" {
command = "sleep 15"
}
}
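These kubernetes_* resources assume a kubernetes provider pointed at the same cluster, which I have omitted above; a minimal sketch using the same credentials as the helm provider below:

provider "kubernetes" {
  host  = "${google_container_cluster.data-dome-cluster.endpoint}"
  token = "${data.google_client_config.current.access_token}"

  cluster_ca_certificate = "${base64decode(google_container_cluster.data-dome-cluster.master_auth.0.cluster_ca_certificate)}"
}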
provider "helm" {
service_account = "${kubernetes_service_account.helm_account.metadata.0.name}"
tiller_image = "gcr.io/kubernetes-helm/tiller:${var.helm_version}"
#install_tiller = false # Temporary
kubernetes {
host = "${google_container_cluster.data-dome-cluster.endpoint}"
token = "${data.google_client_config.current.access_token}"
client_certificate = "${base64decode(google_container_cluster.data-dome-cluster.master_auth.0.client_certificate)}"
client_key = "${base64decode(google_container_cluster.data-dome-cluster.master_auth.0.client_key)}"
cluster_ca_certificate = "${base64decode(google_container_cluster.data-dome-cluster.master_auth.0.cluster_ca_certificate)}"
}
}
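One more note: since the provider now installs Tiller (install_tiller defaults to true), it seems safer to make the release depend on the role binding rather than only the cluster, so the permissions exist before the chart installs. A sketch of the adjusted resource, with the chart values unchanged and omitted:

resource "helm_release" "nginx-ingress" {
  name  = "ingress"
  chart = "stable/nginx-ingress"

  # values block unchanged from the question, omitted here

  depends_on = [
    "kubernetes_cluster_role_binding.helm_role_binding",
  ]
}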