How to use the same nginx.conf file as a reverse proxy for docker-compose and Kubernetes

9/6/2018

In Kubernetes, I have one pod with two containers:

* container 1: nginx reverse proxy
* container 2: myapp

For testing purposes, I also have a docker-compose file that includes the two services:

* service 1: nginx reverse proxy
* service 2: myapp

The issue is that in Docker, the nginx upstream host is the container name, while in Kubernetes it is localhost. Here is a code snippet:

# for docker, nginx.conf
...
upstream web {
    server myapp:8080;
}
...
proxy_pass http://web;

# for Kubernetes, nginx.conf
...
upstream web {
    server localhost:8080;
}
...
proxy_pass http://web;

I would like to have one nginx.conf that supports both Kubernetes and docker-compose. One way I can think of is to pass an environment variable at run time, so I can sed the upstream host in entrypoint.sh.
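For example, something like this (the UPSTREAM_HOST variable and the __UPSTREAM__ placeholder are just illustrative):

#!/bin/sh
# entrypoint.sh: substitute the placeholder written in nginx.conf
# (server __UPSTREAM__:8080;) with the host passed in at run time,
# then start nginx in the foreground.
sed -i "s/__UPSTREAM__/${UPSTREAM_HOST}/g" /etc/nginx/nginx.conf
exec nginx -g 'daemon off;'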

Are there other ways to accomplish this?

Thank you.

-- user5358058
docker
kubernetes
nginx
reverse-proxy

3 Answers

9/6/2018

I think the best way to do this is by using a Kubernetes Service. You can match the docker-compose service name with the Kubernetes Service name; that way you don't have to change the nginx.conf file.
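For example, since the docker-compose service is called myapp, a Service with the same name makes server myapp:8080; resolve in both environments. A minimal sketch (the selector label is an assumption):

apiVersion: v1
kind: Service
metadata:
  name: myapp           # same name as the docker-compose service
spec:
  selector:
    app: myapp          # assumed label on the pod running myapp
  ports:
  - port: 8080          # the port nginx proxies to
    targetPort: 8080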

-- fatcook
Source: StackOverflow

3/19/2019

I came across this question because we have the same issue.

I noticed the other answers suggested splitting nginx and the app server into two different Services/Pods. While that is certainly a solution, I rather like a self-contained Pod with both nginx and the app server together. It works well for us, especially with php-fpm, which can communicate over a unix socket when in the same Pod, reducing internal HTTP networking significantly.

Here is one idea:

Create a base nginx configuration file, for example proxy.conf, and set up Docker to add it to the conf.d directory while building the image. The command is:

ADD proxy.conf /etc/nginx/conf.d/proxy.conf

In proxy.conf, omit the upstream configuration, leaving that for later. Create another file, run.sh, and add it to the image using the Dockerfile. The file could be as follows:

#!/bin/sh

# Prepend an upstream block built from the environment to the base
# config, then swap the result into place as the live proxy.conf.
(echo "upstream theservice { server $UPSTREAM_NAME:$UPSTREAM_PORT; }" && cat /etc/nginx/conf.d/proxy.conf) > proxy.conf.new
mv proxy.conf.new /etc/nginx/conf.d/proxy.conf

# Keep nginx in the foreground so the container stays alive.
nginx -g 'daemon off;'

Finally, run nginx from the run.sh script. The Dockerfile command:

CMD /bin/sh run.sh
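Putting the pieces together, the relevant part of the Dockerfile might look like this (a sketch; the base image and file locations are assumptions):

FROM nginx:1.14-alpine

# Base config without the upstream block; run.sh generates that at start-up
ADD proxy.conf /etc/nginx/conf.d/proxy.conf
ADD run.sh /run.sh

CMD /bin/sh /run.sh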

The trick is that because the configuration is assembled when the container starts, the upstream is never baked into the image; it is regenerated from the environment each time. Set the environment variables appropriately depending on whether you are running under docker-compose or Kubernetes.
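For example (a sketch, using the service name and port from the question):

# docker-compose.yml fragment
services:
  nginx:
    environment:
      UPSTREAM_NAME: myapp
      UPSTREAM_PORT: "8080"

# Kubernetes container spec fragment (nginx in the same Pod as the app)
- name: nginx
  env:
  - name: UPSTREAM_NAME
    value: "127.0.0.1"
  - name: UPSTREAM_PORT
    value: "8080"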


Let me also share a less proper solution, which is hackier but also simpler...

In Kubernetes, we override the Docker image's CMD so that it modifies the nginx config before nginx starts. We use sed to rewrite the upstream host to 127.0.0.1, which works with Kubernetes Pod networking since containers in the same Pod share a network namespace. In our case it looks like this:

  - name: nginx
    image: our_custom_nginx:1.14-alpine
    command: ["/bin/ash"]
    args: ["-c", "sed -i 's/web/127.0.0.1/g' /etc/nginx/conf.d/default.conf && exec nginx -g 'daemon off;'"]

While this workaround works, it breaks the immutable infrastructure principle, so it may not be a good fit for everyone.

-- Tom
Source: StackOverflow

9/6/2018

You need to do two things.

First, split this up into two pods. (As a general rule, you should have one container per pod; the typical exceptions are logging and networking "sidecar" containers that need to share filesystem and network space with the main container.) This probably means taking your existing pod spec (or better, deployment spec), making a second copy of everything around the containers: block, and putting one container in each.

You need to make sure each of the pods has a distinct label (if you're using a deployment, it's the label inside the pod template that matters); this might look something like:

metadata:
  name: web
  labels:
    app: web
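A fuller (hypothetical) Deployment wrapping the myapp container with that label might look like:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web            # the label the Service below selects on
    spec:
      containers:
      - name: myapp
        image: myapp:latest  # assumed image name
        ports:
        - containerPort: 8080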

Second, you need to create a Kubernetes Service that points at the "web" pod. This matches on the labels we just set:

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
  - port: 8080          # the port the app listens on, per the question

Now the name of the Service results in a DNS name web.default.svc.cluster.local (where "default" is the Kubernetes namespace name). default.svc.cluster.local is set as a default DNS search domain, so web will resolve to the Service, which will forward to the pod.
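With that in place, the same nginx.conf works unchanged in both environments, assuming the app's docker-compose service is also named web:

upstream web {
    server web:8080;
}
...
proxy_pass http://web;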

The Kubernetes documentation has a more complete example of this sort of thing (using PHP and nginx, but the only code is Kubernetes YAML manifests, so it should be pretty applicable).

-- David Maze
Source: StackOverflow