I'm struggling with jx, Kubernetes and Helm. I run a Jenkinsfile on jx that executes commands in the env directory:
sh 'jx step helm build'
sh 'jx step helm apply'
It finishes successfully and deploys pods, creates the deployment, etc. However, helm list is empty.
When I execute something like helm install ...
or helm upgrade --install ...
it creates a release and helm list shows it.
Is this the correct behavior?
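For context, the workloads themselves are visible with kubectl; only Helm's release list is empty. A quick check, assuming the jx-integration namespace from the jx create env command further down:
# the resources created by the pipeline exist...
kubectl get deployments,pods -n jx-integration
# ...but Helm has no record of a release
helm list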
More details:
EKS installed with:
eksctl create cluster --region eu-west-2 --name integration --version 1.12 \
--nodegroup-name integration-nodes \
--node-type t3.large \
--nodes 3 \
--nodes-min 1 \
--nodes-max 10 \
--node-ami auto \
--full-ecr-access \
--vpc-cidr "172.20.0.0/16"
Then I set up ingresses (external and internal) with some kubectl apply
commands (I won't share the files). Then I set up routes and VPC-related resources.
JX installed with:
jx install --provider=eks --ingress-namespace='internal-ingress-nginx' \
--ingress-class='internal-nginx' \
--ingress-deployment='nginx-internal-ingress-controller' \
--ingress-service='internal-ingress-nginx' --on-premise \
--external-ip='#########' \
--git-api-token=######### \
--git-username=######### --no-default-environments=true
Details from the installation:
? Select Jenkins installation type: Static Jenkins Server and Jenkinsfiles
? Would you like wait and resolve this address to an IP address and use it for the domain? No
? Domain ###########
? Cloud Provider eks
? Would you like to register a wildcard DNS ALIAS to point at this ELB address? Yes
? Your custom DNS name: ###########
? Would you like to enable Long Term Storage? A bucket for provider eks will be created No
? local Git user for GitHub server: ###########
? Do you wish to use GitHub as the pipelines Git server: Yes
? A local Jenkins X versions repository already exists, pull the latest? Yes
? A local Jenkins X cloud environments repository already exists, recreate with latest? Yes
? Pick default workload build pack: Kubernetes Workloads: Automated CI+CD with GitOps Promotion
Then I set up helm:
kubectl apply -f tiller-rbac-config.yaml
helm init --service-account tiller
where tiller-rbac-config.yaml is:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
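To confirm Tiller came up after helm init, a quick sanity check (tiller-deploy is the default Helm v2 deployment name):
# wait for the Tiller deployment to become ready in kube-system
kubectl -n kube-system rollout status deployment/tiller-deploy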
helm version says:
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
jx version says:
NAME                 VERSION
jx                   2.0.258
jenkins x platform   2.0.330
Kubernetes cluster   v1.12.6-eks-d69f1b
helm client          Client: v2.13.1+g618447c
git                  git version 2.17.1
Operating System     Ubuntu 18.04.2 LTS
Applications were imported this way:
jx import --branches="devel" --org ##### --disable-updatebot=true --git-api-token=##### --url git@github.com:#####.git
And the environment was created this way:
jx create env --git-url=##### --name=integration --label=Integration --domain=##### --namespace=jx-integration --promotion=Auto --git-username=##### --git-private --branches="master|devel|test"
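To double-check that the environment registered, something like this can be used:
# list the environments Jenkins X knows about, including jx-integration
jx get environments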
Going through the changelog, it seems that tillerless mode has been the default since version 2.0.246.
In Helm v2, Helm relies on its server-side component, Tiller. The Jenkins X tillerless mode means that instead of using Helm to install charts, the Helm client is only used for templating, i.e. generating the Kubernetes manifests. Those manifests are then applied with plain kubectl, not helm/tiller.
The consequence is that Helm doesn't know about these installations/releases, because they were made by kubectl. That's why you won't get the list of releases from Helm. This is the expected behavior, as you can read in the Jenkins X docs:
What --no-tiller means is to switch helm to use template mode, which means we no longer internally use helm install mychart to install a chart; we actually use helm template mychart instead, which generates the YAML using the same helm charts and the standard helm configuration management via --set and values.yaml files.
Then we use kubectl apply to apply the YAML.
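In other words, in tillerless mode jx step helm apply boils down to roughly the following (a hedged sketch: mychart and myrelease are placeholder names, and jx-integration is the environment namespace from the question):
# render the chart to plain Kubernetes manifests with the Helm v2 client (no Tiller involved)
helm template mychart --name myrelease --values values.yaml > manifests.yaml
# apply the rendered manifests directly, so Tiller never records a release
kubectl apply -f manifests.yaml --namespace jx-integration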
As mentioned by James Strachan in the comments, when using tillerless mode you can view your deployments with jx step helm list.
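For example:
# lists the charts Jenkins X has applied in tillerless mode
jx step helm list
# stays empty, because Tiller was never involved
helm list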