How to get resolved sha digest for all images within Kubernetes yaml?

9/20/2019

Docker image tags are mutable: image:latest and image:1.0 can both point to image@sha256:....., but when version 1.1 is released, image:latest in the registry can be repointed to an image with a different SHA digest. Pulling an image by a particular tag today therefore does not guarantee that an identical image will be pulled next time.
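
For instance, you can ask a registry what a tag currently resolves to (a small sketch assuming the crane CLI from go-containerregistry is installed; nginx:1.17 is just an illustrative reference):

# Prints the digest the tag points to right now (sha256:...).
# Re-running this after the tag has been re-pushed can print a different digest.
crane digest nginx:1.17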

If a Kubernetes YAML resource definition refers to an image by tag (not by digest), is there a means of determining what SHA digest each image will actually resolve to, before the resource definition is deployed? Is this functionality supported by kustomize or kubectl?

The use case is wanting to determine what has actually been deployed in one environment before deploying to another (I'd like to take a hash of the resolved resource definition and could then use this to understand whether the image:1.0 to be deployed to PROD refers to the same image:1.0 that was deployed to UAT).
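
For what it's worth, the digest a running Pod actually ended up with can already be read from its status (a sketch assuming the example Pod below has been deployed; imageID is filled in by the kubelet once the container is running):

# Print each container's name and the image ID (digest form) it is actually running.
kubectl get pod example -o jsonpath='{range .status.containerStatuses[*]}{.name}{"\t"}{.imageID}{"\n"}{end}'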

Are there any tools that can be used to support this functionality?

For example, given the following YAML, is there a way of replacing all images with their resolved digests?

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: image1
      image: image1:1.1
      command:
        - /bin/sh
        - -c
        - some command
    - name: image2
      image: image2:2.2
      command:
        - /bin/sh
        - -c
        - some other command

To get something like this:

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: image1
      image: image1@sha256:....
      command:
        - /bin/sh
        - -c
        - some command
    - name: image2
      image: image2@sha256:....
      command:
        - /bin/sh
        - -c
        - some other command

I'd like to be able to do something like pipe YAML (which might come from cat, kustomize or kubectl ... --dry-run) through a tool and then pass it to kubectl apply -f:

cat mydeployment.yaml | some-tool | kubectl apply -f -
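
For illustration only, here is a minimal sketch of what such a some-tool could look like as a shell script (call it resolve-digests.sh; it is hypothetical, assumes the crane CLI from go-containerregistry is installed, and assumes every image reference carries a tag):

#!/bin/sh
# resolve-digests.sh -- hypothetical sketch of "some-tool" above.
# Reads a manifest on stdin and writes it to stdout with every
# "image: name:tag" line replaced by "image: name@sha256:...".
set -eu

manifest=$(cat)

# Collect the unique image references from "image:" lines.
images=$(printf '%s\n' "$manifest" | awk '$1 == "image:" || $2 == "image:" { print $NF }' | sort -u)

for ref in $images; do
  digest=$(crane digest "$ref")            # e.g. sha256:2539d4344...
  pinned="${ref%:*}@${digest}"             # drop the tag, append the digest
  manifest=$(printf '%s\n' "$manifest" | sed "s|image: ${ref}\$|image: ${pinned}|")
done

printf '%s\n' "$manifest"

It would then slot into the pipeline above, e.g. kustomize build . | ./resolve-digests.sh | kubectl apply -f -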

EDIT:

The background to this is the need to be able to prove to auditors/regulators that what is about to be deployed to one environment (PROD) is exactly what was successfully deployed to another environment (UAT). I'd like to use normal tags in the deployment template and, at the time of deploying to UAT, take a snapshot of the template with the tags replaced by the digests of the resolved images. That snapshot would be what is deployed (via kubectl or similar). When deploying to PROD, that same snapshot would be used.
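
A sketch of that workflow might look like this (the file names, kustomize overlay and kubectl contexts are illustrative, and resolve-digests.sh is the hypothetical script sketched earlier; any digest-resolving tool would do):

# At UAT deploy time: resolve tags to digests and snapshot the result.
kustomize build overlays/uat | ./resolve-digests.sh > release-snapshot.yaml
sha256sum release-snapshot.yaml > release-snapshot.yaml.sha256   # record for the audit trail
kubectl --context uat apply -f release-snapshot.yaml

# At PROD deploy time: prove the snapshot is unchanged, then apply the same file.
sha256sum -c release-snapshot.yaml.sha256
kubectl --context prod apply -f release-snapshot.yaml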

-- John
docker
kubernetes
kustomize

2 Answers

9/20/2019

is there a means of determining what SHA digest each image will actually resolve to, before the resource definition is deployed?

No, and in the case you describe it can vary by node. The Deployment will create some number of Pods, each Pod will get scheduled onto some node, and the kubelet there will only pull the image if it doesn't already have an image with that tag. If you have two replicas and you've changed the image a tag points to, then node A could keep using the older image it already has, while node B, which doesn't have the image, will pull it and get the newer version.
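
You can see that divergence after the fact by listing each replica's node alongside the image ID the kubelet actually ran (the app=example label selector here is illustrative):

# One row per Pod: which node it landed on and which image digest it is running.
kubectl get pods -l app=example \
  -o custom-columns='POD:.metadata.name,NODE:.spec.nodeName,IMAGEID:.status.containerStatuses[0].imageID'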

The best practice here is to avoid changing the image a tag points to. Give each build coming out of your CI system a unique tag (a datestamp or source control commit ID for example) and use that in your Kubernetes object specifications. That avoids this problem entirely.
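
A minimal CI-side sketch of that practice (the image name, registry and directory layout are illustrative), using the short commit ID as the tag and kustomize to point the manifests at it:

# Build and push an immutably-tagged image for this commit.
TAG=$(git rev-parse --short HEAD)
docker build -t registry.example.com/myapp:"$TAG" .
docker push registry.example.com/myapp:"$TAG"

# Rewrite the manifests to use that exact tag.
cd deploy && kustomize edit set image myapp=registry.example.com/myapp:"$TAG"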

A workaround is to set

imagePullPolicy: Always

in your pod specs, which forces the kubelet to check the registry (and pull a newer image if the tag has moved) every time it starts a container, but this is unnecessary overhead in most cases.

-- David Maze
Source: StackOverflow

5/12/2020

This tool supports exactly what you need:

kbld: https://get-kbld.io/

It resolves a name-tag pair reference (nginx:1.17) into a digest reference (index.docker.io/library/nginx@sha256:2539d4344...).

It looks like it integrates quite well with templating tools such as Kustomize or even Helm.
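
For example, it slots straight into the kind of pipeline described in the question (this assumes kbld reads YAML from stdin via -f -, as in its documentation examples):

# kbld rewrites every image reference to digest form as the YAML streams through.
kustomize build . | kbld -f - | kubectl apply -f -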

-- agascon
Source: StackOverflow