I'm having trouble getting an automatic HTTP -> HTTPS redirect for the default backend of the NGINX ingress controller for Kubernetes when the controller is behind an AWS Classic ELB; is it possible?
According to the guide, HSTS is enabled by default:
HTTP Strict Transport Security
HTTP Strict Transport Security (HSTS) is an opt-in security enhancement specified through the use of a special response header. Once a supported browser receives this header that browser will prevent any communications from being sent over HTTP to the specified domain and will instead send all communications over HTTPS. HSTS is enabled by default.
And redirecting HTTP -> HTTPS is enabled as well:
Server-side HTTPS enforcement through redirect
By default the controller redirects HTTP clients to the HTTPS port 443 using a 308 Permanent Redirect response if TLS is enabled for that Ingress.
However, when I deploy the controller as configured below and navigate to http://<ELB>.elb.amazonaws.com, I get no response at all (curl reports "Empty reply from server"). What I would expect instead is a 308 redirect to HTTPS, followed by a 404.
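For reference, this is roughly how I'm testing (the ELB hostname is a placeholder for the real generated one):

curl -v http://<ELB>.elb.amazonaws.com/
# actual:   curl: (52) Empty reply from server
# expected: HTTP/1.1 308 Permanent Redirect
#           Location: https://<ELB>.elb.amazonaws.com/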
This question is similar: Redirection from http to https not working for custom backend service in Kubernetes Nginx Ingress Controller, but they resolved it by deploying a custom backend and specifying TLS on the Ingress resource. I am trying to avoid deploying a custom backend and simply want to use the default, so that solution is not applicable in my case.
I've shared my deployment files on gist and have copied them here as well:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx-sit
  labels:
    app.kubernetes.io/name: ingress-nginx-sit
    app.kubernetes.io/part-of: ingress-nginx-sit
spec:
  minReadySeconds: 2
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: '50%'
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx-sit
      app.kubernetes.io/part-of: ingress-nginx-sit
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx-sit
        app.kubernetes.io/part-of: ingress-nginx-sit
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --annotations-prefix=nginx.ingress.kubernetes.io
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --ingress-class=$(POD_NAMESPACE)
            - --election-id=leader
            - --watch-namespace=$(POD_NAMESPACE)
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx-sit
  labels:
    app.kubernetes.io/name: ingress-nginx-sit
    app.kubernetes.io/part-of: ingress-nginx-sit
data:
  hsts: "true"
  ssl-redirect: "true"
  use-proxy-protocol: "false"
  use-forwarded-headers: "true"
  enable-access-log-for-default-backend: "true"
  enable-owasp-modsecurity-crs: "true"
  proxy-real-ip-cidr: "10.0.0.0/24,10.0.1.0/24" # restrict this to the IP addresses of the ELB
---
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx-sit
  labels:
    app.kubernetes.io/name: ingress-nginx-sit
    app.kubernetes.io/part-of: ingress-nginx-sit
  annotations:
    # replace with the correct value of the generated certificate in the AWS console
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:<region>:<account>:certificate/<id>"
    # Specify the ssl policy to apply to the ELB
    service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: "ELBSecurityPolicy-TLS-1-2-2017-01"
    # the backend instances are HTTP
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    # Terminate ssl on https port
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "*"
    # Ensure the ELB idle timeout is less than nginx keep-alive timeout. By default,
    # NGINX keep-alive is set to 75s. If using WebSockets, the value will need to be
    # increased to '3600' to avoid any potential issues.
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
    # Security group used for the load balancer.
    service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: "sg-xxxxx"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx-sit
    app.kubernetes.io/part-of: ingress-nginx-sit
  loadBalancerSourceRanges:
    # Restrict allowed source IP ranges
    - "192.168.1.1/16"
  ports:
    - name: http
      port: 80
      targetPort: http
      # The range of valid ports is 30000-32767
      nodePort: 30080
    - name: https
      port: 443
      targetPort: http
      # The range of valid ports is 30000-32767
      nodePort: 30443
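For completeness, I'm grabbing the ELB hostname straight from the Service (a quick lookup, assuming the manifests above are applied as-is):

kubectl -n ingress-nginx-sit get svc ingress-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'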
I think I found the problem.
For some reason the default server has force_ssl_redirect set to false when deciding whether it should redirect the incoming request to HTTPS. In the output of cat /etc/nginx/nginx.conf, notice that the rewrite_by_lua_block passes force_ssl_redirect = false:
...
## start server _
server {
    server_name _ ;

    listen 80 default_server reuseport backlog=511;

    set $proxy_upstream_name "-";
    set $pass_access_scheme $scheme;
    set $pass_server_port $server_port;
    set $best_http_host $http_host;
    set $pass_port $pass_server_port;

    listen 443 default_server reuseport backlog=511 ssl http2;

    # PEM sha: 601213c2dd57a30b689e1ccdfaa291bf9cc264c3
    ssl_certificate /etc/ingress-controller/ssl/default-fake-certificate.pem;
    ssl_certificate_key /etc/ingress-controller/ssl/default-fake-certificate.pem;

    ssl_certificate_by_lua_block {
        certificate.call()
    }

    location / {
        set $namespace "";
        set $ingress_name "";
        set $service_name "";
        set $service_port "0";
        set $location_path "/";

        rewrite_by_lua_block {
            lua_ingress.rewrite({
                force_ssl_redirect = false,
                use_port_in_redirects = false,
            })
            balancer.rewrite()
            plugins.run()
        }
...
Then the Lua code requires both force_ssl_redirect and redirect_to_https() to be true before it issues the redirect. From cat /etc/nginx/lua/lua_ingress.lua:
...
if location_config.force_ssl_redirect and redirect_to_https() then
  local uri = string_format("https://%s%s", redirect_host(), ngx.var.request_uri)

  if location_config.use_port_in_redirects then
    uri = string_format("https://%s:%s%s", redirect_host(), config.listen_ports.https, ngx.var.request_uri)
  end

  ngx_redirect(uri, config.http_redirect_code)
end
...
From what I can tell, the force_ssl_redirect setting is only controllable at the Ingress resource level, through the annotation nginx.ingress.kubernetes.io/force-ssl-redirect: "true". Because I don't have an Ingress rule set up (this is meant to be the default server for requests that don't match any Ingress), I have no way of changing this setting.
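For contrast, here is a sketch of how that annotation is normally applied on a regular Ingress rule (all names below are placeholders, not part of my setup); it only takes effect for the servers generated from that Ingress, which is exactly why it can't reach the default server:

kubectl apply -f - <<'EOF'
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress                # placeholder
  namespace: ingress-nginx-sit
  annotations:
    kubernetes.io/ingress.class: ingress-nginx-sit
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  rules:
    - host: example.com                # placeholder
      http:
        paths:
          - path: /
            backend:
              serviceName: example-svc # placeholder
              servicePort: 80
EOF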
So what I determined I had to do is define my own custom server snippet on a different port with force_ssl_redirect set to true, and then point the Service load balancer to that custom server instead of the default. Specifically:
Added to the ConfigMap:
...
http-snippet: |
  server {
    server_name _ ;
    listen 8080 default_server reuseport backlog=511;

    set $proxy_upstream_name "-";
    set $pass_access_scheme $scheme;
    set $pass_server_port $server_port;
    set $best_http_host $http_host;
    set $pass_port $pass_server_port;

    server_tokens off;

    location / {
      rewrite_by_lua_block {
        lua_ingress.rewrite({
          force_ssl_redirect = true,
          use_port_in_redirects = false,
        })
        balancer.rewrite()
        plugins.run()
      }
    }

    location /healthz {
      access_log off;
      return 200;
    }
  }
server-snippet: |
  more_set_headers "Strict-Transport-Security: max-age=31536000; includeSubDomains; preload";
Note I also added the server-snippet to set the HSTS header correctly. I think that because the traffic from the ELB to NGINX is HTTP, not HTTPS, the HSTS header was not being added by default.
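A quick way to confirm the header is actually being returned (hostname is again a placeholder):

curl -s -D - -o /dev/null https://<ELB>.elb.amazonaws.com/ | grep -i strict-transport-security
# Strict-Transport-Security: max-age=31536000; includeSubDomains; preload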
Added to the DaemonSet:
...
ports:
  - name: http
    containerPort: 80
  - name: http-redirect
    containerPort: 8080
...
Modified the Service:
...
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
...
ports:
  - name: http
    port: 80
    targetPort: http-redirect
    # The range of valid ports is 30000-32767
    nodePort: 30080
  - name: https
    port: 443
    targetPort: http
    # The range of valid ports is 30000-32767
    nodePort: 30443
...
And now things seem to be working. I've updated the Gist so it includes the full configuration that I am using.
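As a final sanity check, the redirect chain now looks like what I originally expected (hostname is a placeholder):

curl -sIL http://<ELB>.elb.amazonaws.com/
# HTTP/1.1 308 Permanent Redirect
# Location: https://<ELB>.elb.amazonaws.com/
#
# HTTP/1.1 404 Not Found   (the default backend, now reached over HTTPS)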