How to use updated docker image from ACR in AKS

2/14/2019

I have a local Docker image that was pushed to a private Azure Container Registry. Then in Azure Kubernetes Service I have a cluster that uses this image from ACR.

Now I wanted to update the image (I realised I needed to install zip and unzip). I started a local container, made changes, committed them, and pushed the new image to ACR. Unfortunately, that's not enough. My pods are still using the previous version of the image, without zip.

A bit more detail and what I tried:

  • Inside the Helm chart I am using the "latest" tag;

  • Compared the digest SHA of my local "latest" image with what I have in ACR - they are the same;

  • Started the "latest" container locally (docker run -it --rm -p 8080:80 My-REPO.azurecr.io/MY-IMAGE:latest) - it has zip installed

  • Deleted the existing pods in Kubernetes; the newly created ones are still missing zip;

  • Deleted the release and recreated it - still nothing.

  • I am pushing to ACR using docker push MY-REPO.azurecr.io/MY-IMAGE:latest

So my question is - what am I missing? How do I properly update this setup?

-- JleruOHeP
azure-acr
azure-aks
docker
kubernetes-helm

1 Answer

2/14/2019

You should be looking for a setup like this:

  1. Your Docker images have some unique tag, not latest; a date stamp will generally work fine.

  2. Your Helm chart should take the tag as a parameter in the values.yaml file.

  3. You should use a Kubernetes Deployment (not a bare Pod); in the pod spec part of its template, specify the image as something like image: MY-REPO.azurecr.io/MY-IMAGE:{{ .Values.tag }}.

  4. When you have a new build, you can run helm upgrade with --set tag=20190214; this will push an updated Deployment spec to Kubernetes, and that will cause it to create new Pods with the new image and then destroy the old Pods with the old image (a minimal sketch follows this list).
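
A minimal sketch of that layout, assuming a chart directory named my-chart and a release named MY-RELEASE (those names, the my-app labels, and the default tag value are placeholders; only the registry and image names come from your question):

    # values.yaml -- default tag, overridden at deploy time
    tag: "20190214"

    # templates/deployment.yaml -- only the relevant part
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            - name: my-app
              image: MY-REPO.azurecr.io/MY-IMAGE:{{ .Values.tag }}
              ports:
                - containerPort: 80

A build-and-deploy cycle for a new tag then looks like:

    docker build -t MY-REPO.azurecr.io/MY-IMAGE:20190214 .
    docker push MY-REPO.azurecr.io/MY-IMAGE:20190214
    helm upgrade MY-RELEASE ./my-chart --set tag=20190214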

The essential problem you're running into is that Kubernetes needs to see some textual difference in the YAML before it will take any action. If it already has MY-IMAGE:latest, and you kubectl apply (or equivalent) the same pod or deployment spec with exactly the same image string, it decides that nothing has changed and it doesn't need to do anything. Similarly, when you delete and recreate the pod, the node decides it already has a MY-IMAGE:latest image and doesn't need to go off and pull anything; it just reuses the same (outdated) image it already has.
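
One way to see this is to compare the tag the pod spec asks for with the digest of the image the node actually ran (MY-POD is a placeholder for one of your pod names):

    # tag requested by the pod spec
    kubectl get pod MY-POD -o jsonpath='{.spec.containers[0].image}{"\n"}'
    # digest of the image the node actually pulled and ran
    kubectl get pod MY-POD -o jsonpath='{.status.containerStatuses[0].imageID}{"\n"}'

If the second value doesn't match the digest you see in ACR, the node is reusing its cached copy of :latest.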

Some best practices related to the workflow you describe:

  • Don't use a ...:latest image tag (or any other fixed string); instead, use some unique value like a timestamp, source control commit ID, or release version, where every time you do a deployment you'll have a different tag.

  • Don't use bare pods; use a higher-level controller instead, most often a Deployment.

  • Don't use docker commit ever. (If your image crashed in production, how would you explain "oh, I changed some stuff by hand, overwrote the image production is using, and forcibly restarted everything, but I have no record of what I actually did"?) Set up a Dockerfile, check it into source control, and use docker build to make images; see the Dockerfile sketch after this list. (Better still, set up a CI system to build them for you whenever you check in.)
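
As a sketch of that last point, the zip/unzip change you made by hand could instead live in a Dockerfile. This assumes a Debian/Ubuntu-style base image; the FROM line is a placeholder for whatever base your image actually uses:

    # Dockerfile -- FROM is a placeholder; keep whatever base and setup your image already has
    FROM SOME-BASE-IMAGE
    RUN apt-get update \
     && apt-get install -y --no-install-recommends zip unzip \
     && rm -rf /var/lib/apt/lists/*
    # ...the rest of your existing image setup (COPY, EXPOSE 80, CMD, ...)

Building it with docker build -t MY-REPO.azurecr.io/MY-IMAGE:20190214 . reproduces the image from scratch, and the Dockerfile in source control records exactly what changed.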

-- David Maze
Source: StackOverflow