Assuming a workflow where all container building happens on Google Container Builder (GCB), is a Dockerfile even necessary?
For example, to build custom containers where we would like to install some packages or copy files from the local filesystem, I see most GCB examples still using a Dockerfile (e.g. https://github.com/GoogleCloudPlatform/cloud-builders ) - is this because it's not possible using cloudbuild.yaml alone?
The easiest tool to compose a Docker container image is still the docker build command with a Dockerfile. It is very common to have a cloudbuild.yaml file with a single step that uses the Dockerfile in the root of your source, like so:
steps:
- name: 'gcr.io/cloud-builders/docker'
args: ['build', '-t', 'gcr.io/${PROJECT_ID}/my-image', '.']
images: ['gcr.io/${PROJECT_ID}/my-image']
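(If I recall correctly, you can kick off such a build from your source directory with something like gcloud container builds submit --config cloudbuild.yaml . - check the current gcloud docs for the exact invocation.)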
So, for constructing the final container image, using docker build and a Dockerfile is the recommended approach. It will let you easily install dependency packages, for instance.
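As a sketch, a minimal Dockerfile that installs an OS package and copies files from your source checkout might look like this (the base image, package, and paths are placeholders for whatever your app needs):

FROM debian:stretch-slim
# Install extra OS packages into the image (package name is illustrative).
RUN apt-get update && \
    apt-get install -y --no-install-recommends ca-certificates && \
    rm -rf /var/lib/apt/lists/*
# Copy files from the build context (your source checkout) into the image.
COPY app /usr/local/bin/app
CMD ["/usr/local/bin/app"]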
You can also use your cloudbuild.yaml to perform other operations before or after your docker build step. For instance, if you wanted to compile a binary without packaging your SDK into the final image, that is easy to do with cloudbuild.yaml, as is pulling in some extra assets from Cloud Storage using gsutil. A sketch of both follows.
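Here the golang step image, the bucket name, and the file names are assumptions just to illustrate the shape of such a build; the Dockerfile then only needs to COPY the artifacts in:

steps:
# Compile a static binary first, so the Go SDK never enters the final image.
- name: 'golang'
  entrypoint: 'go'
  args: ['build', '-o', 'app', '.']
  env: ['CGO_ENABLED=0']
# Pull extra assets from Cloud Storage (bucket and object are placeholders).
- name: 'gcr.io/cloud-builders/gsutil'
  args: ['cp', 'gs://my-assets-bucket/static.tar.gz', 'static.tar.gz']
# Package the binary and assets into the final image.
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/${PROJECT_ID}/my-image', '.']
images: ['gcr.io/${PROJECT_ID}/my-image']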
Container Builder will also let you build and push as many images as you like, all tied to the same commit of your git repo (and they can include the commit SHA in the image tag, too), by having multiple steps that each run docker build. You could even run those builds in parallel.
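For example, a build like the following produces two images from one commit, in parallel, both tagged with the commit SHA. This sketch assumes the build is triggered from a repo so ${COMMIT_SHA} is populated, and the api/ and web/ subdirectories are hypothetical:

steps:
# waitFor: ['-'] means "no dependencies", so both builds start immediately.
- id: 'build-api'
  name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/${PROJECT_ID}/api:${COMMIT_SHA}', 'api/']
  waitFor: ['-']
- id: 'build-web'
  name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/${PROJECT_ID}/web:${COMMIT_SHA}', 'web/']
  waitFor: ['-']
# Listing the images here pushes both to the registry after the build.
images:
- 'gcr.io/${PROJECT_ID}/api:${COMMIT_SHA}'
- 'gcr.io/${PROJECT_ID}/web:${COMMIT_SHA}'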
See https://cloud.google.com/container-builder/docs/api/build-steps for how to add steps for arbitrary things.