I work on a CI/CD pipeline based on a Kubernetes cluster. The pipeline runs like this:
Now some of these steps don't require the image to be pushed. E.g. the builder image can be cached or discarded at will; it would simply be rebuilt if needed.
So these images are named like mycompany/mvn-builder:latest.
This works fine when used directly through Docker.
When Kubernetes and Helm come into play, they want the image URIs and try to fetch them from the remote repo. So using the "local" name mycompany/mvn-builder:latest doesn't work:
Error response from daemon: pull access denied for collab/collab-services-api-mvn-builder, repository does not exist or may require 'docker login'
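To illustrate what Kubernetes actually does with that name (a minimal sketch; the manifest fragment and container name are my own illustration, not from the actual chart):

    spec:
      containers:
        - name: mvn-builder
          # the kubelet resolves this to docker.io/mycompany/mvn-builder:latest
          # and, with the default pull policy for a :latest tag, contacts the
          # registry instead of using whatever sits in the node's local cache
          image: mycompany/mvn-builder:latest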
Technically, I could name it <AWS-repo-ID>/mvn-builder and push it, but that breaks the possibility of running all this locally in minikube, because it's quite hard to keep minikube authenticated against the silly AWS 12-hour token (remember, it all runs in a cluster).
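To spell out the authentication pain (a sketch under my own assumptions; the secret name and placeholder values are illustrative, not from my setup): pulling from ECR inside the cluster means maintaining an image-pull secret built from that 12-hour token, roughly:

    apiVersion: v1
    kind: Secret
    metadata:
      name: ecr-pull-secret   # hypothetical name
    type: kubernetes.io/dockerconfigjson
    data:
      # base64-encoded docker config carrying the short-lived ECR token;
      # it has to be regenerated before the token expires
      .dockerconfigjson: <base64-encoded-docker-config>
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: mvn-builder
    spec:
      imagePullSecrets:
        - name: ecr-pull-secret
      containers:
        - name: builder
          image: <AWS-repo-ID>/mvn-builder:latest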
Is it possible to mix the remote repo and the local cache? In other words, can I have Docker look at the remote repository and, if the image isn't found or the pull fails (see above), fall back to the cached image? So that if I use foo/bar:latest in a Kubernetes resource, it would try to fetch, discover that it can't, and use the local foo/bar:latest instead?
I believe an initContainer would do that, provided it had access to /var/run/docker.sock (and your cluster allows such a thing), by conditionally pulling (or docker load-ing) the image, such that when the "main" container starts, the image will always be cached. Approximately like this:
spec:
  initContainers:
    - name: prime-the-cache
      image: docker:18-dind
      command:
        - sh
        - -c
        - |
          # decide however you like whether to pull or side-load the image
          if something_awesome; then
            docker pull from/a/registry
          else
            docker load -i some/other/path
          fi
      volumeMounts:
        - name: docker-sock
          mountPath: /var/run/docker.sock
          readOnly: true
  containers:
    - name: primary
      image: a-local-image
  volumes:
    - name: docker-sock
      # talk to the node's docker daemon, so the pulled/loaded image lands
      # in the same cache the kubelet will look at
      hostPath:
        path: /var/run/docker.sock
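One caveat worth adding (my own note, not part of the snippet above): for an untagged or :latest image the kubelet's default imagePullPolicy is Always, which would still attempt a registry pull even after the init container primed the cache, so the primary container should opt into the cache explicitly:

    containers:
      - name: primary
        image: a-local-image
        # without this, the default policy for an untagged/:latest image is
        # Always, and the kubelet would try the registry anyway
        imagePullPolicy: IfNotPresent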