I am trying to wrap my brain around the suggested workarounds for the lack of built-in HTTP->HTTPS redirection in ingress-gce, using GLBC. What I am struggling with is how to use this custom backend that is suggested as one option to overcome this limitation (e.g. in How to force SSL for Kubernetes Ingress on GKE).
In my case the application behind the load balancer does not itself include Apache or nginx, and I just can't figure out how to add e.g. Apache (which I know much better than nginx) to the setup. Am I supposed to put Apache in front of the application as a proxy? If so, I wonder what to put in the proxy config, since one supposedly can't use those convenient k8s service names there...
Or should Apache be set up as some kind of separate backend that only gets traffic when the client uses plain HTTP? In that case I don't see how backends are separated by protocol in the GCE load balancer, and while I can see how that could be done manually, the Ingress would need to be configured for it, and I can't seem to find any resources explaining how to actually do that.
For example, in https://github.com/kubernetes/ingress-gce#redirecting-http-to-https the "application" takes care of the forwarding itself (it seems to be built on nginx), and while that example works beautifully, the same thing isn't possible with the application I am talking about.
Basically, my setup is currently this:
http://<public_ip>:80   -\
                           > GCE LB -> K8s pod running the application
https://<public_ip>:443 -/   (ingress-gce)
I know I could block HTTP altogether, but that would ruin the user experience whenever someone just types the domain name into the browser.
Currently I have this Service and Ingress set up for the LB:
kind: Service
apiVersion: v1
metadata:
  name: myapp
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: myapp
    protocol: TCP
  selector:
    app: myapp
---
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: myapp-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.global-static-ip-name: "my-ip"
    ingress.gcp.kubernetes.io/pre-shared-cert: "my-cert"
spec:
  backend:
    serviceName: myapp
    servicePort: 80
  rules:
  - host: my.domain.name
    http:
      paths:
      - path: /
        backend:
          serviceName: myapp
          servicePort: 80
In addition I have GLBC bundled together with the application deployment:
apiVersion: v1
kind: ConfigMap
metadata:
  name: glbc-configmap
data:
  gce.conf: |
    [global]
    node-tags = myapp-k8s-nodepool
    node-instance-prefix = gke-myapp-k8s-cluster
---
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      name: myapp
      labels:
        app: myapp
    spec:
      containers:
      # START application container
      - name: myapp
        image: eu.gcr.io/myproject/myapp:latest
        imagePullPolicy: Always
        readinessProbe:
          httpGet:
            path: /ping
            port: 8080
        ports:
        - name: myapp
          containerPort: 8080
      # END application container
      # START GLBC container
      - name: myapp-glbc
        image: gcr.io/google_containers/glbc:0.9.7
        livenessProbe:
          httpGet:
            path: /ping
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        volumeMounts:
        - mountPath: /etc/glbc-configmap
          name: cloudconfig
          readOnly: true
        args:
        - --apiserver-host=http://localhost:8080
        - --default-backend-service=myapp
        - --sync-period=300s
        - --config-file-path=/etc/glbc-configmap/gce.conf
      volumes:
      # ConfigMap volume backing the "cloudconfig" mount of the GLBC container
      - name: cloudconfig
        configMap:
          name: glbc-configmap
I'd greatly appreciate any pointers in addition to more complete solutions.
Edit in May 2020: "HTTP(S) Load Balancing Rewrites and Redirects support is now in General Availability", as stated in https://issuetracker.google.com/issues/35904733#comment95, means it is finally possible to implement proper redirection rules in the LB itself, without resorting to an extra pod or any other tweak of that kind. However, in case the below is of use to someone, I'll leave it here for reference.
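For example, on GKE versions that ship the FrontendConfig CRD, the redirect can now be declared on the load balancer itself. A minimal sketch (the resource name https-redirect is only illustrative, and the Ingress API version should match whatever your cluster actually uses):
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: https-redirect   # illustrative name
spec:
  redirectToHttps:
    enabled: true
    responseCodeName: MOVED_PERMANENTLY_DEFAULT   # respond with 301
---
# Attach the FrontendConfig to the Ingress via an annotation; other fields stay as before.
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: myapp-ingress
  annotations:
    networking.gke.io/v1beta1.FrontendConfig: "https-redirect"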
I was able to find a solution where the GCE LB directs traffic to Apache (this should of course work with any proxy), which runs as a Deployment in the K8s cluster. In the Apache config there's a redirect based on the X-Forwarded-Proto header, and reverse proxy rules that point to the application in the cluster.
apiVersion: v1
kind: ConfigMap
metadata:
  name: apache-httpd-configmap
data:
  httpd.conf: |
    # Apache httpd v2.4 minimal configuration
    # This can be reduced further if you remove the access log and mod_log_config
    ServerRoot "/usr/local/apache2"
    # Minimum modules needed
    LoadModule mpm_event_module modules/mod_mpm_event.so
    LoadModule log_config_module modules/mod_log_config.so
    LoadModule mime_module modules/mod_mime.so
    LoadModule dir_module modules/mod_dir.so
    LoadModule authz_core_module modules/mod_authz_core.so
    LoadModule unixd_module modules/mod_unixd.so
    LoadModule alias_module modules/mod_alias.so
    LoadModule proxy_module modules/mod_proxy.so
    LoadModule proxy_http_module modules/mod_proxy_http.so
    TypesConfig conf/mime.types
    PidFile logs/httpd.pid
    # Comment this out if running httpd as a non-root user
    User nobody
    # Port to listen on
    Listen 8081
    # In a basic setup httpd can only serve files from its document root
    DocumentRoot "/usr/local/apache2/htdocs"
    # Default file to serve
    DirectoryIndex index.html
    # Errors go to stderr
    ErrorLog /proc/self/fd/2
    # Access log to stdout
    LogFormat "%h %l %u %t \"%r\" %>s %b" common
    CustomLog /proc/self/fd/1 common
    Mutex posixsem proxy
    # Never change this block
    <Directory />
      AllowOverride None
      Require all denied
    </Directory>
    # Deny documents from being served out of the DocumentRoot
    <Directory "/usr/local/apache2/htdocs">
      Require all denied
    </Directory>
    <VirtualHost *:8081>
      ServerName my.domain.name
      # Redirect HTTP to the load balancer HTTPS URL
      <If "%{HTTP:X-Forwarded-Proto} -strcmatch 'http'">
        Redirect / https://my.domain.name:443/
      </If>
      # Proxy the requests to the application.
      # "myapp" in the rules relies on the K8s cluster DNS add-on,
      # see https://kubernetes.io/docs/concepts/services-networking/service/#dns
      ProxyRequests Off
      ProxyPass "/" "http://myapp:80/"
      ProxyPassReverse "/" "http://myapp:80/"
    </VirtualHost>
---
kind: Service
apiVersion: v1
metadata:
  name: apache-httpd
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: apache-httpd
    protocol: TCP
  selector:
    app: apache-httpd
---
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  name: apache-httpd
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apache-httpd
  template:
    metadata:
      name: apache-httpd
      labels:
        app: apache-httpd
    spec:
      containers:
      # START apache httpd container
      - name: apache-httpd
        image: httpd:2.4-alpine
        imagePullPolicy: Always
        readinessProbe:
          httpGet:
            path: /
            port: 8081
        command: ["/usr/local/apache2/bin/httpd"]
        args: ["-f", "/etc/apache-httpd-configmap/httpd.conf", "-DFOREGROUND"]
        ports:
        - name: apache-httpd
          containerPort: 8081
        volumeMounts:
        - mountPath: /etc/apache-httpd-configmap
          name: apacheconfig
          readOnly: true
      # END apache container
      # END containers
      volumes:
      - name: apacheconfig
        configMap:
          name: apache-httpd-configmap
      # END volumes
    # END template spec
  # END template
In addition to the new manifest YAML above, the rule in "myapp-ingress" needed to change from serviceName: myapp to serviceName: apache-httpd so that the LB directs traffic to Apache.
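In other words, the rule in the earlier Ingress manifest ends up looking like this (only the service name differs; the default spec.backend can be switched the same way if the default backend should also go through the redirect):
  rules:
  - host: my.domain.name
    http:
      paths:
      - path: /
        backend:
          serviceName: apache-httpd
          servicePort: 80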
It seems that this rather minimal Apache setup requires very little CPU and RAM, so it fits just fine in the existing cluster and thus doesn't really cause any direct extra cost.
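If you want to make that footprint explicit, the Apache container can be given small resource requests/limits in the Deployment above; a sketch with purely illustrative numbers (not measured values):
        resources:
          requests:
            cpu: 10m
            memory: 16Mi
          limits:
            cpu: 100m
            memory: 64Mi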