How to create a Kubernetes cluster serving its own container with SSL and NGINX

1/20/2016

I'm trying to build a Kubernetes cluster with the following services inside:

  • Docker-registry (which will contain my django Docker image)
  • Nginx listening on both port 80 and 443
  • PostgreSQL
  • Several django applications served with gunicorn
  • letsencrypt container to generate and automatically renew signed SSL certificates

My problem is a chicken and egg problem that occurs during the creation of the cluster:

My SSL certificates are stored in a secret volume generated by the letsencrypt container. To generate a certificate, we need to prove we own the domain name, which is done by validating that a file is accessible under that domain (basically, this consists of Nginx being able to serve a static file over port 80).

So here occurs my first problem: to serve the static file needed by letsencrypt, I need nginx started. But the SSL part of nginx can't be started until the secret has been mounted, and the secret is generated only when Let's Encrypt succeeds...

So, a simple solution could be to have 2 Nginx containers: one listening only on port 80 that is started first, then letsencrypt runs, then we start a second Nginx container listening on port 443.

-> This looks like a bit of a waste of resources in my opinion, but why not.
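The port-80-only container could be as small as one server block that exposes the ACME HTTP-01 challenge directory. A sketch, where the webroot path `/var/www/acme` and the shared volume with the letsencrypt container are assumptions:

```nginx
# Minimal port-80 server for the Let's Encrypt HTTP-01 challenge.
# /var/www/acme is an assumed webroot; the letsencrypt container must
# write its challenge files there (e.g. via a shared volume).
server {
  listen 80;
  server_name example.com docker.thedivernetwork.net;

  # Serve the validation files Let's Encrypt requests.
  location /.well-known/acme-challenge/ {
    root /var/www/acme;
  }

  # Optionally redirect everything else to HTTPS once it is up.
  location / {
    return 301 https://$host$request_uri;
  }
}
```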

Now assuming I have 2 nginx containers, I want my Docker Registry to be accessible over https.

So in my nginx configuration, I'll have a docker-registry.conf file looking like:

upstream docker-registry {
  server registry:5000;
}

server {
  listen 443;
  server_name docker.thedivernetwork.net;

  # SSL
  ssl on;
  ssl_certificate /etc/nginx/conf.d/cacert.pem;
  ssl_certificate_key /etc/nginx/conf.d/privkey.pem;

  # disable any limits to avoid HTTP 413 for large image uploads
  client_max_body_size 0;

  # required to avoid HTTP 411: see Issue #1486 (https://github.com/docker/docker/issues/1486)
  chunked_transfer_encoding on;

  location /v2/ {
    # Do not allow connections from docker 1.5 and earlier
    # docker pre-1.6.0 did not properly set the user agent on ping, catch "Go *" user agents
    if ($http_user_agent ~ "^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*$" ) {
      return 404;
    }

    # To add basic authentication to v2 use auth_basic setting plus add_header
    auth_basic "registry.localhost";
    auth_basic_user_file /etc/nginx/conf.d/registry.password;
    add_header 'Docker-Distribution-Api-Version' 'registry/2.0' always;

    proxy_pass http://docker-registry;
    proxy_set_header Host $http_host;   # required for docker client's sake
    proxy_set_header X-Real-IP $remote_addr; # pass on real client's IP
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_read_timeout 900;
  }
}

The important part is the proxy_pass directive, which redirects toward the registry container.

The problem I'm facing is that my Django Gunicorn server also has its configuration file, django.conf, in the same folder:

upstream django {
    server django:5000;
}

server {
    listen 443 ssl;
    server_name example.com;
    charset     utf-8;

    ssl on;
    ssl_certificate /etc/nginx/conf.d/cacert.pem;
    ssl_certificate_key /etc/nginx/conf.d/privkey.pem;

    ssl_protocols        SSLv3 TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers          HIGH:!aNULL:!MD5;
    client_max_body_size 20M;

    location / {
        # checks for static file, if not found proxy to app
        try_files $uri @proxy_to_django;
    }

    location @proxy_to_django {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_redirect off;

        #proxy_pass_header Server;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_connect_timeout 65;
        proxy_read_timeout 65;

        proxy_pass   http://django;
    }

}

So nginx will start successfully only if 3 conditions are met:

  • secret is mounted (this could be addressed by splitting Nginx into 2 separate containers)
  • registry service is started
  • django service is started

The problem is that the django container pulls its image from the registry service, so we are in a deadlock situation again.

I didn't mention it, but registry and django have different server_names, so nginx is able to serve both of them.

The solution I thought about (but it's quite dirty!) would be to reload nginx several times with more and more configuration:

  • I start docker registry service
  • I start Nginx with only the registry.conf
  • I create my django rc and service
  • I reload nginx with both registry.conf and django.conf

If there were a way to make nginx start while ignoring a failing configuration, that would probably solve my issues as well.
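One nginx trick that comes close to "ignoring a failing configuration": when proxy_pass is given a variable (together with a resolver directive), nginx resolves the upstream hostname at request time instead of at startup, so it will start even if the upstream name doesn't resolve yet. A sketch; the resolver IP is an assumption (check your cluster's DNS Service IP), and using a variable bypasses the upstream block:

```nginx
server {
  listen 443 ssl;
  server_name example.com;
  ssl_certificate /etc/nginx/conf.d/cacert.pem;
  ssl_certificate_key /etc/nginx/conf.d/privkey.pem;

  # With a literal hostname, nginx resolves it at startup and refuses
  # to start if resolution fails. With a variable, it resolves lazily,
  # per request, through the configured resolver.
  resolver 10.0.0.10 valid=10s;  # assumption: kube-dns Service IP

  location / {
    set $upstream http://django:5000;
    proxy_pass $upstream;
    proxy_set_header Host $http_host;
  }
}
```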

How can I cleanly achieve this setup?

Thanks for your help

Thibault

-- thibserot
kubernetes
nginx
sequencing

1 Answer

1/20/2016

Are you using Kubernetes Services for your applications?

With a Service in front of each of your Pods, you have a proxy for the Pods: a Service is assigned an IP as soon as it is created, so even if the Pods behind it are not started yet, nginx will still find the Service when looking up its name.

So you start the Services first, then start nginx and whatever Pods you want, in the order you want.
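In manifest form, a sketch of such Services (names, labels, and ports are assumptions chosen to match the nginx upstreams in the question):

```yaml
# A Service gets a stable cluster IP as soon as it is created, whether
# or not any matching Pod exists yet, so nginx can resolve and proxy
# to "django" and "registry" before the Pods come up.
apiVersion: v1
kind: Service
metadata:
  name: django
spec:
  selector:
    app: django        # assumed label on the django Pods
  ports:
    - port: 5000       # port the nginx upstream targets
      targetPort: 5000 # container port gunicorn listens on
---
apiVersion: v1
kind: Service
metadata:
  name: registry
spec:
  selector:
    app: registry      # assumed label on the registry Pods
  ports:
    - port: 5000
      targetPort: 5000
```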

-- MrE
Source: StackOverflow