I have added an ssl cert secret in rancher and configured the ingress file in the helm chart as follows:
{{- $fullName := include "api-chart.fullname" . -}}
{{- $ingressPath := .Values.ingress.path -}}
{{- $apiIngressPath := .Values.ingress.apiPath -}}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
    app.kubernetes.io/name: {{ include "api-chart.name" . }}
    helm.sh/chart: {{ include "api-chart.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
  annotations:
    kubernetes.io/ingress.class: nginx
{{- with .Values.ingress.annotations }}
{{ toYaml . | indent 4 }}
{{- end }}
spec:
  tls:
  - hosts:
    - {{ .Values.ingress.host }}
    secretName: {{ .Values.ssl.certSecretName }}
  rules:
  - host: {{ .Values.ingress.host }}
    http:
      paths:
      - path: {{ $ingressPath }}
        backend:
          serviceName: {{ $fullName }}
          servicePort: 80
      - path: {{ $apiIngressPath }}
        backend:
          serviceName: {{ $fullName }}
          servicePort: 8080
However, the default fake Nginx certificate is still served when visiting the HTTPS site. Does the Nginx server itself also need to be changed? If so, it seems strange that the certificate information would have to be added in two places. If not, what else could be wrong?
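For reference, the template above reads several values that must be set at install time; if they are empty, the rendered tls: section ends up incomplete. A values.yaml sketch matching the template's references (the key names come from the template; the values here are hypothetical):

```yaml
# Hypothetical values.yaml fragment assumed by the ingress template above.
ingress:
  host: project-jupyter-labs-2.company.com
  path: /test72-new-user
  apiPath: /base-url
  annotations: {}
ssl:
  certSecretName: tls-secret-name   # must name a TLS secret in the release's namespace
```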
kubectl describe ingress
gives the following response:
Name: my-test-install-app72-project-jupyter-labs
Namespace: default
Address: 10.240.0.4
Default backend: default-http-backend:80 (<none>)
Rules:
  Host                                Path              Backends
  ----                                ----              --------
  project-jupyter-labs-2.company.com
                                      /test72-new-user  my-test-install-app72-project-jupyter-labs:80 (10.244.4.20:8888)
                                      /base-url         my-test-install-app72-project-jupyter-labs:8080 (10.244.4.20:8080)
Annotations:
field.cattle.io/publicEndpoints: [{"addresses":["10.240.0.4"],
"port":80,
"protocol":"HTTP",
"serviceName":"default:my-test-install-app72-project-jupyter-labs",
"ingressName":"default:my-test-install-app72-project-jupyter-labs",
"hostname":"project-jupyter-labs-2.company.com",
"path":"/test72-new-user",
"allNodes":false},
{"addresses":["10.240.0.4"],
"port":80,
"protocol":"HTTP",
"serviceName":"default:my-test-install-app72-project-jupyter-labs",
"ingressName":"default:my-test-install-app72-project-jupyter-labs",
"hostname":"project-jupyter-labs-2.company.com",
"path":"/base-url",
"allNodes":false}]
kubernetes.io/ingress.class: nginx
meta.helm.sh/release-name: my-test-install-app72
meta.helm.sh/release-namespace: default
nginx.ingress.kubernetes.io/proxy-body-size: 2G
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CREATE 81s nginx-ingress-controller Ingress default/my-test-install-app72-project-jupyter-labs
Normal CREATE 81s nginx-ingress-controller Ingress default/my-test-install-app72-project-jupyter-labs
Normal UPDATE 23s (x2 over 23s) nginx-ingress-controller Ingress default/my-test-install-app72-project-jupyter-labs
Normal UPDATE 23s (x2 over 23s) nginx-ingress-controller Ingress default/my-test-install-app72-project-jupyter-labs
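One telling detail in the output above: the field.cattle.io/publicEndpoints annotation lists only HTTP endpoints on port 80, and no HTTPS/443 entry, which suggests the live Ingress object has no effective spec.tls section. A small sketch that checks this from the annotation JSON (values abridged from the output above):

```python
import json

# publicEndpoints annotation value, copied (abridged) from the describe output.
annotation = '''[
  {"addresses": ["10.240.0.4"], "port": 80, "protocol": "HTTP",
   "hostname": "project-jupyter-labs-2.company.com", "path": "/test72-new-user"},
  {"addresses": ["10.240.0.4"], "port": 80, "protocol": "HTTP",
   "hostname": "project-jupyter-labs-2.company.com", "path": "/base-url"}
]'''

endpoints = json.loads(annotation)
# If TLS were configured on the Ingress, Rancher would also publish an HTTPS/443 endpoint.
has_https = any(e["protocol"] == "HTTPS" or e["port"] == 443 for e in endpoints)
print("HTTPS endpoint present:", has_https)
```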
UPDATE: I am having trouble accessing the error logs; it seems you need to exec into the container as root to see them. What I did find, however, is that the server section of the nginx.conf file contains the following:
ssl_certificate_by_lua_block {
certificate.call()
}
If I change this to ssl_certificate and ssl_certificate_key paths pointing to the cert and key files that I manually added to the container, then it works.
Does the above ssl_certificate_by_lua_block
look normal for the nginx.conf generated from the ingress.yaml file? If so, what else could be the problem? If not, what could be causing this to not be properly configured?
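For what it's worth, ssl_certificate_by_lua_block { certificate.call() } is the normal, expected configuration in recent ingress-nginx versions: certificates are served dynamically from an in-memory store fed by the referenced secrets, not written into nginx.conf. One way to see which certificate the controller actually picked up is to grep its logs; a sketch, assuming the controller runs in the ingress-nginx namespace with the standard labels (both are guesses for this cluster):

```shell
# ssl_certificate_by_lua_block is expected: ingress-nginx serves certs
# dynamically from secrets instead of hardcoding paths in nginx.conf.
NAMESPACE=ingress-nginx

# Requires cluster access; skipped when kubectl is not available.
if command -v kubectl >/dev/null 2>&1; then
  # Look for lines about the host's certificate, or fallback messages such as
  # "Using default certificate" that indicate the secret was not found.
  kubectl logs -n "$NAMESPACE" -l app.kubernetes.io/name=ingress-nginx \
    | grep -i certificate
fi
```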
Applying the following patch seems to allow the correct SSL certificate to be made available for https:
kubectl patch ingress <app-instance-name> -p '{"spec":{"tls":[{"hosts":["project-jupyter-labs-2.company.com"], "secretName": "tls-secret-name"}]}}'
Why this solves the problem is still unclear to me. I would appreciate any possible explanations.
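A plausible explanation (hedged, since the rendered manifest isn't shown): if spec.tls was missing or empty in the live object, e.g. because .Values.ssl.certSecretName or .Values.ingress.host was unset when the release was installed, the controller has no certificate to serve for the host and falls back to its fake default. The patch re-adds spec.tls, and the controller's dynamic reconfiguration then loads the referenced secret. After the patch, the live spec should contain:

```yaml
spec:
  tls:
  - hosts:
    - project-jupyter-labs-2.company.com
    secretName: tls-secret-name
```

You can confirm with kubectl get ingress <app-instance-name> -o yaml that this block is actually present in the live object.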
It's nearly impossible to deduce this without a minimal reproducible example from you. Have a look at what a minimal reproducible example should look like.
We know nothing about your resulting Ingress manifest (as generated by Helm), your Ingress Controller version and configuration (including how it was installed), or the underlying Kubernetes environment.
Just a few hints:
Please remember that Ingress and Secret resources are namespaced objects, so in your case the Ingress should reference a secret in the same namespace. How exactly do you create the TLS secret?
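For example, one common way is kubectl create secret tls, run against the same namespace as the Ingress (default here). A sketch with a self-signed certificate; the host and secret name are taken from the question, everything else is illustrative:

```shell
# Generate a self-signed certificate for the host (for testing only).
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt \
  -subj "/CN=project-jupyter-labs-2.company.com"

# Create the TLS secret in the SAME namespace as the Ingress.
# Requires a running cluster; skipped here if kubectl is unavailable.
if command -v kubectl >/dev/null 2>&1; then
  kubectl create secret tls tls-secret-name \
    --cert=tls.crt --key=tls.key --namespace default
fi
```

A secret created in another namespace (for example via a Rancher UI scoped elsewhere) will silently not be found by the controller, which then serves the default certificate.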
I can assure you that your case cannot be reproduced in a healthy Ingress Controller setup: whenever I create the secret referenced by an Ingress in the right namespace, it is automatically detected by the controller, added to a local store, and dynamic reconfiguration takes place.
Lastly, I think your issue is more suitable to be reported directly to the Nginx Ingress Controller GitHub project: https://github.com/kubernetes/ingress-nginx/issues/new