I have a couple of client applications. For each one I have a build pipeline that gets the latest code, compiles it, and outputs the result to a dist folder (containing only HTML and JS files). These dist folders are synced, using Docker volumes, to a web server (nginx) container which actually hosts the client applications. The result is that my clients are always "up", and to deploy a client I only need to update its dist folder; I never need to touch the web server container.
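For reference, my current setup looks roughly like this Compose file (the service name, client folder names, and mount paths here are illustrative, not my exact config):

```yaml
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      # each client's pipeline syncs its build output into these folders;
      # nginx serves them directly, so deploys never touch this container
      - ./client1/dist:/usr/share/nginx/html/client1:ro
      - ./client2/dist:/usr/share/nginx/html/client2:ro
```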
However, I want to move to a different deployment approach: building Docker images only in pipelines (on code changes) and using those images on demand whenever I deploy an environment. The problem is how to build the web server image when I don't want to rebuild all clients on every change, nor store the built output in source control. What would be the best approach?
You could consider a multi-stage build: a first stage that compiles the client, and a final stage based on nginx that copies in the compiled output. The end result is an image containing both the web server and the static files it serves (instead of those files living in a volume), with only the static files being rebuilt on a code change, since Docker's build cache skips unchanged stages and layers.
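A minimal sketch of such a Dockerfile, assuming a Node-based client build (the `node` base image, `npm` commands, and `/app/dist` output path are assumptions; substitute whatever your pipeline currently runs):

```dockerfile
# Stage 1: build the client. Only this stage reruns when client code
# changes; the dependency layer is cached as long as the lockfile is stable.
FROM node:20 AS build
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build          # emits the HTML/JS into /app/dist

# Stage 2: small nginx image that serves the built static files.
# The node toolchain and sources are not part of the final image.
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
```

Each client then gets its own image, built by its own pipeline (e.g. `docker build -t client1-web .`), so a change to one client only rebuilds and pushes that client's image. Deploying an environment becomes pulling and running the prebuilt images rather than syncing dist folders into a shared web server container.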