Docker - Upgrading a Base Image

10/30/2019

I have a base image which is used by 100 applications; all 100 reference this common base image in their Dockerfiles. When I upgrade the base image (for an OS upgrade or some other change), I bump its version and also tag it as the latest. The problem is that whenever I change the base image, all 100 applications need to change the base image in their Dockerfiles and rebuild to pick up the latest base image. Is there a better way to handle this?

Note: I am running my containers in Kubernetes, and each application's Dockerfile is in Git.

-- user1578872
docker
docker-image
dockerfile
kubernetes

4 Answers

4/17/2020

whenever I change the base image, all 100 applications need to change the base image in their Dockerfiles and rebuild to pick up the latest base image.

That's a feature, not a bug; all 100 applications need to run their tests (and potentially fix any regressions) before going ahead with the new image...

There are tools out there that scan all the repos and automatically submit pull requests to the 100 applications (or you can write a custom one, if your Dockerfiles contain more than plain "FROM" lines).
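As a rough sketch of the scanning half of that idea, assuming each app repo is already checked out under a common directory (the layout, image name, and tags below are all hypothetical):

```shell
# Sketch: find checked-out app repos whose Dockerfile still pins an old
# tag of the shared base image. Layout and names are hypothetical.
mkdir -p repos/app1 repos/app2
printf 'FROM me/base-image:20190901\n' > repos/app1/Dockerfile
printf 'FROM me/base-image:20191030\n' > repos/app2/Dockerfile

NEW_TAG=20191030
# Every Dockerfile that references the base image but not the new tag
# is a candidate for an automated pull request.
grep -rl --include=Dockerfile 'FROM me/base-image' repos |
while read -r f; do
  grep -q "me/base-image:$NEW_TAG" "$f" || echo "needs PR: $f"
done
```

Here only repos/app1/Dockerfile is reported, since app2 already uses the new tag; a custom tool would open a pull request for each reported repo.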

-- sabik
Source: StackOverflow

10/31/2019

You don't need to change the Dockerfile for each app if it uses base-image:latest. You will still have to rebuild the app images after the base image update, though, and then redeploy the apps so they run the new image.

For example, using the advice from this answer.

-- rok
Source: StackOverflow

10/31/2019

You can use a Dockerfile ARG directive to modify the FROM line (see Understand how ARG and FROM interact in the Dockerfile documentation). One possible approach here would be to have your CI system inject the base image tag, for example via docker build --build-arg base=20191031 . (the trailing dot is the build context).

ARG base=latest
FROM me/base-image:${base}
...

This has the risk that individual developers would build test images based on an older base image; if the differences between images are just OS patches then you might consider this a small and acceptable risk, so long as only official images get pushed to production.

Beyond that, there aren't a lot of alternatives to modifying the individual Dockerfiles. You could script it:

# Individually check out everything first
BASE=$(pwd)
TAG=20191031
for d in *; do
  cd "$BASE/$d"
  sed -i.bak "s@FROM me/base-image.*@FROM me/base-image:$TAG@" Dockerfile
  rm Dockerfile.bak
  git checkout -b "base-image-$TAG"
  git commit -am "Update Dockerfile to base-image:$TAG"
  git push -u origin "base-image-$TAG"
  hub pull-request --no-edit
done

There are automated dependency-update tools out there too, and these may be able to manage the scripting aspects of this for you.

-- David Maze
Source: StackOverflow

10/30/2019

If you need to deploy the latest version of the base image then yes, you need to build, tag, push, pull, and deploy each container again. If your base image is not properly tagged, you'll also need to change the Dockerfile in all 100 repositories.

But you have some options, like using sed to replace the FROM line in all your Dockerfiles, and running all the build commands from a shell script that points at each app's directory.
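A toy illustration of that sed replacement on a single app (the directory layout, image name, and tags are made up):

```shell
# Toy demo: rewrite the FROM line of one app's Dockerfile in place.
# Directory layout, image name, and tags are hypothetical.
mkdir -p app1
printf 'FROM me/base-image:20190901\nCOPY . /app\n' > app1/Dockerfile

TAG=20191031
# -i.bak edits in place and keeps a backup; @ avoids escaping slashes
sed -i.bak "s@^FROM me/base-image.*@FROM me/base-image:$TAG@" app1/Dockerfile
head -n 1 app1/Dockerfile   # now: FROM me/base-image:20191031
```

The same sed line dropped into a loop over app directories handles all 100 Dockerfiles.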

If you describe all 100 apps in a compose file, you can redeploy them with one command (on Docker Swarm):

docker stack deploy --compose-file docker-compose.yml <stack-name>

but this still requires rebuilding the images first.

Edit: with docker-compose you can also build all 100 images with one command. You need to define all of them in a compose file; check the docs for the compose file reference.
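A minimal sketch of such a compose file, with only two of the 100 apps shown (service names, paths, registry, and tags are placeholders):

```yaml
version: "3.7"
services:
  app1:
    build: ./app1          # each app's build context with its Dockerfile
    image: registry.example.com/app1:20191031
  app2:
    build: ./app2
    image: registry.example.com/app2:20191031
```

With this in place, docker-compose build rebuilds every listed image and docker-compose push uploads them to the registry.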

-- Jassiel Díaz
Source: StackOverflow