I've been struggling with how to deploy a Dockerized application. The app consists of a React frontend and an Express API. My docker-compose.yml for the development environment looks like the following:
version: '3'
services:
  # Express Container
  backend:
    build: ./backend
    expose:
      - ${BACKEND_PORT}
    env_file:
      - ./.env
    environment:
      - PORT=${BACKEND_PORT}
    ports:
      - ${BACKEND_PORT}:${BACKEND_PORT}
    volumes:
      - ./backend:/backend
    command: npm run devstart
    links:
      - mongo
  # React Container
  frontend:
    build: './frontend'
    expose:
      - ${REACT_APP_PORT}
    env_file:
      - ./.env
    environment:
      - REACT_APP_BACKEND_PORT=${BACKEND_PORT}
    ports:
      - ${REACT_APP_PORT}:${REACT_APP_PORT}
    volumes:
      - ./frontend/src:/frontend/src
      - ./frontend/public:/frontend/public
    links:
      - backend
    command: npm start
  mongo:
    image: mongo
    ports:
      - "27017:27017"
But I'm not sure how to structure it for production.
I've seen that there are basically three options:
I was thinking I would go with option 3, because it would keep the development and production environments quite similar. (Please tell me if this is a bad structure; this application is expected to receive a lot of traffic.)
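To illustrate what I mean by option 3, here is a rough sketch of the kind of docker-compose.prod.yml override I have in mind (the image names are placeholders):

# docker-compose.prod.yml (sketch -- image names are placeholders)
version: '3'
services:
  backend:
    image: myregistry/app-backend:latest   # prebuilt image instead of a bind-mounted source tree
    command: npm start                     # production start script, no file watcher
  frontend:
    image: myregistry/app-frontend:latest

which would be run with something like docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d.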
Should I maybe forget docker-compose and write a single multi-stage Dockerfile that copies over the frontend and backend code, so that I can deploy a single Docker container?
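Something like this sketch is what I have in mind (it assumes the Express app can serve the compiled React build out of a ./public directory, which may not match your setup):

# Sketch of a single multi-stage Dockerfile (assumptions noted above)
FROM node:10 AS frontend-build
WORKDIR /frontend
COPY frontend/package*.json ./
RUN npm install
COPY frontend/ ./
RUN npm run build                          # emits static files into build/

FROM node:10
WORKDIR /backend
COPY backend/package*.json ./
RUN npm install
COPY backend/ ./
# hand the compiled React app to Express as static content
COPY --from=frontend-build /frontend/build ./public
EXPOSE 3000
CMD ["npm", "start"]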
My folder structure looks like the following:
app/
  .env
  docker-compose.yml
  docker-compose.prod.yml
  .gitignore
  frontend/
    Dockerfile
    ... react stuff
  backend/
    Dockerfile
    ... express stuff
Am I going about this all wrong? How have you deployed your applications with docker-compose to production (preferably on Kubernetes)?
I can find tons of material on getting this stack running in development, but I'm lost when it comes to deploying it to production.
I like option 1 more than option 3, since it keeps the frontend and the backend separate. A huge advantage is that you can host the frontend on something like AWS S3 behind a CloudFront CDN, so all the static content is distributed to edge servers around the world. This gets the heavy items (images, large JS libraries, CSS, etc.) to your end users very quickly and makes your application feel much faster.
I keep the front end and the back end as completely separate applications. They are each in their own GitHub repository, with their own test suites, Dockerfiles, Jenkins builds, everything. This allows us to version them independently, which allows for more frequent iterations; smaller, lower-risk deployments; and faster, more efficient development.
All calls to the back end are on the /api/ path, which is handled by an nginx ingress controller (very simple) and routed appropriately to the back-end service.
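As a sketch, the routing rule for that might look like the following (the service names and ports are placeholders, and it assumes the ingress-nginx controller is installed):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          # send API traffic to the Express service
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: backend
                port:
                  number: 3000
          # everything else goes to the frontend service
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 8000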
You might start by reading through the Kubernetes documentation and understanding what's straightforward and what's not. You're most interested in Deployments and Services, and possibly Ingress. The MongoDB setup with its associated persistent state will be more complicated, and you might look at a prepackaged solution like the stable/mongodb Helm chart or MongoDB's official operator.
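To give a sense of what that involves, a minimal Deployment and Service for the backend might look something like this (a sketch only; the image, replica count, and labels are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 2                                # placeholder; size to your traffic
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: myregistry/app-backend:1.0.0   # placeholder image
          ports:
            - containerPort: 3000
          env:
            - name: MONGO_URL
              value: mongodb://mongo:27017
---
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  ports:
    - port: 3000
      targetPort: 3000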
Note that an important part of the Kubernetes setup is that there will almost always be multiple Nodes, and you don't get much control over which Node a Pod will be placed on. In particular, that means the Docker Compose volumes: you show won't work well in a Kubernetes environment: in addition to doing all of the normal Kubernetes deployment work, you'd also need to replicate the application source code to every node, which is twice the work for the same deployment. Usually you will want all of the application code to be contained in the Docker image, with a typical Node-based Dockerfile looking something like:
FROM node:10
WORKDIR /app
# install dependencies first so Docker can cache this layer
COPY package.json yarn.lock ./
RUN yarn install
# copy in the rest of the application source and build it
COPY ./ ./
RUN yarn build
EXPOSE 3000
CMD ["yarn", "start"]
Just within the docker-compose.yml file you show:
The volumes: make your containers substantially different from what you might run in production; delete them.
Don't bother making the container-internal ports configurable. In plain Docker, Docker Compose, and Kubernetes you can remap the container-internal port to an arbitrary externally-accessible port at deployment time. You can pick fixed numbers here and it's fine.
Several of the details you show, like the exposed ports (expose:) and the default command: to run, are properly part of the image (every time you run the image they will be identical), so move these into the Dockerfile.
links: are redundant these days and you can just delete them. In Docker Compose you can always reach another service using the name of its service block as a hostname.
The names of the other related services will be different in different environments. For example, MongoDB might be on localhost when you're developing your application outside of Docker, mongo in the configuration you show, or mongo.myapp.svc.cluster.local in Kubernetes, or you might choose to run it outside of Docker entirely. You'll generally want these names to be configurable, usually via environment variables.
This gives you a docker-compose.yml file a little more like:
version: '3'
services:
  backend:
    build: ./backend
    environment:
      MONGO_URL: 'mongodb://mongo'
    ports:
      - 3000:3000
  frontend:
    build: './frontend'
    environment:
      BACKEND_URL: 'http://backend:3000'
    ports:
      - 8000:8000
  mongo:
    image: mongo
    ports:
      - "27017:27017"
As @frankd hinted in their answer, it's also very common to use a tool like Webpack to precompile a React application down into a set of static files. Depending on how you're actually deploying this, it could make sense to run that compilation step ahead of time, push the compiled JavaScript and CSS files out to some other static-hosting service, and take the frontend out of Docker/Kubernetes entirely.