I'm trying to deploy Jenkins fronted by an nginx-ingress via Helm. The goal is to secure Jenkins behind HTTPS, with SSL termination at nginx. I'm currently using a self-signed cert but will eventually switch to cert-manager and Let's Encrypt. Jenkins and nginx-ingress are both deployed in the default namespace.
Below is my deployment script:
gcloud config set compute/zone us-central1-f
gcloud container clusters create jenkins-cd \
--machine-type n1-standard-2 --num-nodes 2 \
--scopes "https://www.googleapis.com/auth/projecthosting,storage-rw,cloud-platform"
wget https://storage.googleapis.com/kubernetes-helm/helm-v2.9.1-linux-amd64.tar.gz
tar zxfv helm-v2.9.1-linux-amd64.tar.gz
cp linux-amd64/helm .
kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=xxxx@xxxx.com
kubectl create clusterrolebinding tiller-admin-binding --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/tls.key -out /tmp/tls.crt -subj "/CN=xx.xx.xxxx.com"
kubectl create secret tls jenkins-ingress-ssl --key /tmp/tls.key --cert /tmp/tls.crt
kubectl describe secret jenkins-ingress-ssl
./helm init --service-account=tiller --wait
./helm repo update
./helm install --name nginx-ingress stable/nginx-ingress --set rbac.create=true
./helm install --name jenkins stable/jenkins --values values.yaml --version 0.19.0 --wait
ADMIN_PWD=$(kubectl get secret --namespace default jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode)
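As a sanity check, the certificate stored in the jenkins-ingress-ssl secret can be decoded to confirm its subject and validity match what the ingress expects:

$ kubectl get secret jenkins-ingress-ssl -o jsonpath='{.data.tls\.crt}' \
    | base64 --decode | openssl x509 -noout -subject -dates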
Below is my values.yaml file:
Master:
  InstallPlugins:
    - kubernetes:1.12.6
    - workflow-job:2.24
    - workflow-aggregator:2.5
    - credentials-binding:1.16
    - git:3.9.1
    - google-oauth-plugin:0.6
    - google-source-plugin:0.3
  Cpu: "1"
  Memory: "3500Mi"
  JavaOpts: "-Xms3500m -Xmx3500m"
  ServiceType: ClusterIP
  HostName: "xx.xx.xxxx.com"
  Ingress:
    Annotations:
      kubernetes.io/ingress.class: "nginx"
      kubernetes.io/ingress.allow-http: "false"
    TLS:
      - secretName: jenkins-ingress-ssl
        hosts:
          - xx.xx.xxxx.com
Agent:
  Enabled: true
Persistence:
  Size: 100Gi
NetworkPolicy:
  ApiVersion: networking.k8s.io/v1
rbac:
  install: true
  serviceAccountName: cd-jenkins
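After the install, the values Helm actually applied and the resources it created (including the generated ingress) can be double-checked with the Helm 2 client from the script above:

$ ./helm get values jenkins
$ ./helm status jenkins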
Deployments (default namespace)
xxx@cloudshell:~/stub-jenkins2.0 (automation-stub)$ kubectl get deployments
NAME                            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
jenkins                         1         1         1            1           6m
nginx-ingress-controller        1         1         1            1           6m
nginx-ingress-default-backend   1         1         1            1           6m
Services (default namespace)
xxx@cloudshell:~/stub-jenkins2.0 (automation-stub)$ kubectl get services
NAME                            TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)                      AGE
jenkins                         ClusterIP      10.11.240.123   <none>            8080/TCP                     7m
jenkins-agent                   ClusterIP      10.11.250.174   <none>            50000/TCP                    7m
kubernetes                      ClusterIP      10.11.240.1     <none>            443/TCP                      8m
nginx-ingress-controller        LoadBalancer   10.11.253.104   104.198.179.176   80:31453/TCP,443:32194/TCP   7m
nginx-ingress-default-backend   ClusterIP      10.11.245.149   <none>            80/TCP                       7m
Ingress (default namespace)
xxx@cloudshell:~/stub-jenkins2.0 (automation-stub)$ kubectl get ingress
NAME      HOSTS            ADDRESS         PORTS     AGE
jenkins   xx.xx.xxxx.com   35.193.17.244   80, 443   7m
Ingress .yaml (generated by helm)
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.allow-http: "false"
    kubernetes.io/ingress.class: nginx
  creationTimestamp: 2018-10-19T17:35:16Z
  generation: 1
  name: jenkins
  namespace: default
  resourceVersion: "845"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/jenkins
  uid: 57b76091-d3c5-11e8-b9e9-42010a8001de
spec:
  rules:
  - host: xx.xx.xxxx.com
    http:
      paths:
      - backend:
          serviceName: jenkins
          servicePort: 8080
  tls:
  - hosts:
    - xx.xx.xxxx.com
    secretName: jenkins-ingress-ssl
status:
  loadBalancer:
    ingress:
    - ip: 35.193.17.244
When I hit the ingress IP (https://104.198.179.176), I land on the default nginx backend and get a "default backend - 404" error. I suspect something is wrong with the ingress configuration: the ingress reports an address of 35.193.17.244, whereas the nginx-ingress-controller service's external IP is 104.198.179.176.
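For example, the address the ingress publishes and the controller service's external IP can be compared directly:

$ kubectl get ingress jenkins -o jsonpath='{.status.loadBalancer.ingress[0].ip}'                # 35.193.17.244
$ kubectl get svc nginx-ingress-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}'   # 104.198.179.176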
If you hit the nginx ingress with https://104.198.179.176, you will always hit the default backend. You either need to hit it with https://xx.xx.xxxx.com, or with something like this:
$ curl -H 'Host: xx.xx.xxxx.com' https://104.198.179.176
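Since the cert is self-signed, curl will also need -k, and nginx picks the certificate to present based on SNI, so hitting the bare IP returns the controller's default certificate. A variant that sets both the hostname and SNI correctly would be something like:

$ curl -k --resolve xx.xx.xxxx.com:443:104.198.179.176 https://xx.xx.xxxx.com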
With respect to the ingress IP address being incorrect, I would check that your backend service has endpoints and that each one is listening on port 8080:
$ kubectl describe svc jenkins
and/or
$ kubectl describe ep
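A quicker way to see whether the service has any endpoints at all:

$ kubectl get endpoints jenkins
# an empty ENDPOINTS column means the service selector does not match a Ready Jenkins pod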
I would also check the events in the Ingress:
$ kubectl describe ingress jenkins
Finally, I would check the logs in the ingress controller:
$ kubectl logs deploy/nginx-ingress-controller
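To watch requests arrive while reproducing the problem, the same logs can be followed live; the access log should show which upstream (or the default backend) each request was routed to:

$ kubectl logs -f deploy/nginx-ingress-controller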