I am trying to install Helm (Tiller) on my GKE cluster using the Terraform Helm provider, with the following Terraform script:
data "google_client_config" "current" {}
provider "helm" {
tiller_image = "gcr.io/kubernetes-helm/tiller:${var.helm_version}"
max_history = 250
kubernetes {
host = "${google_container_cluster.eu.endpoint}"
token = "${data.google_client_config.current.access_token}"
client_certificate = "${base64decode(google_container_cluster.eu.master_auth.0.client_certificate)}"
client_key = "${base64decode(google_container_cluster.eu.master_auth.0.client_key)}"
cluster_ca_certificate = "${base64decode(google_container_cluster.eu.master_auth.0.cluster_ca_certificate)}"
}
}
resource "helm_release" "mydatabase" {
name = "mydatabase"
chart = "stable/mariadb"
set {
name = "mariadbUser"
value = "foo"
}
set {
name = "mariadbPassword"
value = "qux"
}
}
but I'm getting the following error:
* helm_release.mydatabase: 1 error(s) occurred:
* helm_release.mydatabase: error installing: deployments.extensions is forbidden: User "client" cannot create deployments.extensions in the namespace "kube-system"
I think this is happening when the Terraform Helm provider attempts to install Tiller. Can anyone help?
OK, you are on the right track, but here I agree with @hk:
helm_release.mydatabase: error installing: deployments.extensions is forbidden: User "client" cannot create deployments.extensions in the namespace "kube-system"
The above error is purely an authorization (RBAC) issue. Lots of people run into difficulties installing and configuring the Helm provider; see, for example, this open GitHub issue, which contains a couple of ideas that may help you.
What may work for you is described in the article "helm provider is Pain", which contains a solution that has worked for others.
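You can also confirm that this is pure RBAC rather than anything Helm-specific by asking the API server whether your credentials may create the deployment Tiller needs. A quick check, assuming kubectl authenticates as the same user the provider is using:

kubectl auth can-i create deployments.extensions --namespace kube-system
# "no" here reproduces the permission failure from your error message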
Try the following:
resource "kubernetes_service_account" "tiller" {
metadata {
name = "tiller"
namespace = "kube-system"
}
automount_service_account_token = true
}
resource "kubernetes_cluster_role_binding" "tiller" {
metadata {
name = "tiller"
}
role_ref {
kind = "ClusterRole"
name = "cluster-admin"
api_group = "rbac.authorization.k8s.io"
}
subject {
kind = "ServiceAccount"
name = "tiller"
api_group = ""
namespace = "kube-system"
}
}
provider "helm" {
version = "~> 0.7"
debug = true
install_tiller = true
service_account = "${kubernetes_service_account.tiller.metadata.0.name}"
namespace = "${kubernetes_service_account.tiller.metadata.0.namespace}"
tiller_image = "gcr.io/kubernetes-helm/tiller:v2.11.0"
kubernetes {
config_path = "~/.kube/${var.env}"
}
}
or this variant, which binds the Tiller service account via a User subject and adds an explicit depends_on:
resource "kubernetes_service_account" "tiller" {
metadata {
name = "tiller"
namespace = "kube-system"
}
}
resource "kubernetes_cluster_role_binding" "tiller" {
metadata {
name = "tiller"
}
subject {
api_group = "rbac.authorization.k8s.io"
kind = "User"
name = "system:serviceaccount:kube-system:tiller"
}
role_ref {
api_group = "rbac.authorization.k8s.io"
kind = "ClusterRole"
name = "cluster-admin"
}
depends_on = ["kubernetes_service_account.tiller"]
}
provider "helm" {
tiller_image = "gcr.io/kubernetes-helm/tiller:v2.12.3"
install_tiller = true
service_account = "tiller"
namespace = "kube-system"
}
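Whichever variant you choose, it can also help to make the helm_release from your original script depend on the role binding, so Terraform never tries to install the chart before the RBAC objects exist. A minimal sketch, assuming you keep the resource names above:

resource "helm_release" "mydatabase" {
  name  = "mydatabase"
  chart = "stable/mariadb"

  # set blocks from your original script go here unchanged

  # ensure the Tiller service account and binding are created first
  depends_on = ["kubernetes_cluster_role_binding.tiller"]
}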
It's a role and authorization related issue. Reset Helm with "helm reset", then run the commands below to resolve it:
# Download and run the official Helm install script
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh

# Create a service account for Tiller and grant it cluster-admin
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

# Install Tiller, then patch its deployment to use the new service account
helm init
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
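Once the patch is applied, it is worth verifying that Tiller is running under the new service account before retrying terraform apply. A quick sanity check (the label selector matches the deployment helm init creates):

# the tiller-deploy pod should be Running after the patch
kubectl get pods --namespace kube-system -l app=helm,name=tiller

# both client and server versions are reported once Tiller is reachable
helm version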