How to manage software updates on docker-compose with one machine per user architecture?

11/20/2020

We are deploying a Java backend and React UI application using docker-compose. Our Docker containers are running Java, Caddy, and Postgres.

What's unusual about this architecture is that we are not running the application as a cluster. Each user gets their own server with their own subdomain. Everything is working nicely, but we need a strategy for managing/updating machines as the number of users grows.

We can accept some down time in the middle of the night, so we don't need to have high availability.

We're just not sure of the best way to update the software on all machines. We are pretty new to Docker and have no experience with Kubernetes, Ansible, Chef, Puppet, etc., but we are quick to pick things up.

We expect to have hundreds to thousands of users. Each machine runs the same code but has environment variables that are unique to the user. Our original provisioning takes care of that, so we do not anticipate having to change those with software updates. But a solution that can also provide that ability would not be a bad thing.
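For reference, a per-user stack like the one described might look like the compose file below. This is a hypothetical sketch (service names, the `.env` file, and `APP_VERSION` are all assumptions, not from the question); the point is that per-user values written at provisioning time live in the env file, so pulling a new image tag leaves them untouched:

```yaml
# Hypothetical docker-compose.yml for one user's machine.
# Per-user settings (subdomain, credentials) live in .env, written once
# at provisioning time, so updating the image tag never touches them.
services:
  backend:
    image: myorg/backend:${APP_VERSION:-latest}
    env_file: .env            # e.g. USER_SUBDOMAIN, DB_PASSWORD
    depends_on:
      - db
  caddy:
    image: caddy:2
    ports:
      - "80:80"
      - "443:443"
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
```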

So, the question is, when we make code changes and want to deploy the updated Java jar or the React application, what would be the best way to get those out there in an automated fashion?

Some things we have considered:

Other things that we probably need include GitHub Actions to build and push the updated Docker images.
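A minimal workflow for that part might look like the following. This is only a sketch: the repository layout, image name (`myorg/backend`), and secret names are assumptions.

```yaml
# .github/workflows/build.yml (hypothetical; adjust names and secrets)
name: build-and-push
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Log in to Docker Hub
        run: echo "${{ secrets.DOCKERHUB_TOKEN }}" | docker login -u "${{ secrets.DOCKERHUB_USER }}" --password-stdin
      - name: Build and push
        run: |
          docker build -t myorg/backend:${{ github.sha }} .
          docker push myorg/backend:${{ github.sha }}
```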

We are open to ideas that are not listed here, because there is a lot we don't know about managing many machines running docker-compose. So please feel free to offer suggestions. Many thanks!

-- greymatter
ansible
devops
docker-compose
docker-watchtower
kubernetes

2 Answers

1/19/2021

As it turns out, we found that a paid Docker Hub plan addressed all of our needs. I appreciate the excellent information from @Malgorzata.
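The question is tagged docker-watchtower, and one registry-driven pattern that pairs well with a Docker Hub plan is to run Watchtower alongside each per-user stack: every machine polls the registry and restarts its containers when a new image appears, so pushing an image is the whole deployment. A sketch (the polling interval is an assumption; schedule it for off-peak hours if downtime matters):

```yaml
# Added to each machine's docker-compose.yml (hypothetical values).
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # lets it restart sibling containers
    command: --interval 300   # poll the registry every 5 minutes
```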

-- greymatter
Source: StackOverflow

11/25/2020

In your case I advise using Kubernetes in combination with a CD tool. One such tool is Buddy. I think this is the best way to make such updates in an automated fashion. Of course you can use just Kubernetes, but with Buddy or another CD tool you will make it faster and easier. In my answer I describe Buddy, but there are many popular CD tools for automating workflows in Kubernetes, for example GitLab or CodeFresh.io; you should pick whichever is actually best for you. Take a look: CD-automation-tools-Kubernetes.

With Buddy you can avoid doing most of the steps below by hand (executing the kubectl apply or kubectl set image commands): a simple push to Git triggers them for you.

Every time you update your application code or Kubernetes configuration, you have two ways to update your cluster: kubectl apply or kubectl set image.

Such a workflow most often looks like this:

1. Edit the application code or the configuration YAML file

2. Push changes to your Git repository

3. Build a new Docker image

4. Push the Docker image to the registry

5. Log in to your K8s cluster

6. Run kubectl apply or kubectl set image to apply the changes to the K8s cluster
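The steps above can be sketched as a single script. The image and deployment names (`myorg/app`, `deployment/app`) are assumptions, and the commands are only echoed by default so the sketch stays inert until you set `DRY_RUN=0`:

```shell
#!/bin/sh
# Sketch of steps 3-6 above; myorg/app and deployment/app are
# hypothetical names. With DRY_RUN=1 (the default) commands are
# printed instead of executed.
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi
}

# Tag the image with the current commit, falling back outside a repo.
TAG=$(git rev-parse --short HEAD 2>/dev/null || echo latest)
IMAGE="myorg/app:$TAG"

run docker build -t "$IMAGE" .                      # step 3: build the image
run docker push "$IMAGE"                            # step 4: push it
run kubectl set image deployment/app app="$IMAGE"   # step 6: roll out
```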

Buddy is a CD tool that you can use to automate your whole K8s release workflow, for example:

  • managing Dockerfile updates
  • building Docker images and pushing them to the Docker registry
  • applying new images on your K8s cluster
  • managing configuration changes of a K8s Deployment etc.

With Buddy you will have to configure just one pipeline.

With every change in your app code or the YAML config file, this tool will apply the deployment and Kubernetes will start transforming the containers to the desired state.
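Concretely, the "desired state" lives in fields like `spec.template.spec.containers[].image` of a Deployment; changing that field, whether by `kubectl apply` on an edited file or by `kubectl set image`, is what triggers the rolling update. A minimal fragment (all names are assumptions):

```yaml
# deployment.yml (hypothetical names). Bumping the image tag here and
# running `kubectl apply -f deployment.yml` triggers a rolling update.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: myorg/app:v2   # the field kubectl set image rewrites
```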

Pipeline configuration for running Kubernetes pods or jobs

Assume that we have an application on a K8s cluster and its repository contains:

  • source code of our application
  • a Dockerfile with instructions on creating an image of your app
  • DB migration scripts
  • a Dockerfile with instructions on creating an image that will run the migration during the deployment (db migration runner)

In this case, we can configure a pipeline that will:

1. Build the application and migration images

2. Push them to Docker Hub

3. Trigger the DB migration using the previously built image. We can define the image, commands, and deployment in a YAML file.

4. Use either Apply K8s Deployment or Set K8s Image to update the image in your K8s application.

You can adjust the above workflow to fit your environment and your application's properties.

Buddy supports GitLab as a Git provider. Integrating the two tools is easy and only requires authorizing GitLab in your profile. Thanks to this integration you can create pipelines that will build, test, and deploy your app code to the server. Of course, if you are already using GitLab there is no need to set up Buddy as an extra tool, because GitLab is itself a CD tool for automating workflows in Kubernetes. More information can be found here: buddy-workflow-kubernetes.

Read also: automating-workflows-kubernetes.

-- Malgorzata
Source: StackOverflow