Upgrading JupyterHub helm release w/ new docker image, but old image is being used?

3/8/2019

I have a JupyterHub notebook server, and I am running it on managed Kubernetes on AWS (EKS). My docker repository is AWS ECR.

I am iteratively developing my docker image for testing.

My workflow is:

  1. Update the docker image
  2. Update the docker image tag in the helm release config (config.yaml)
  3. Upgrade the helm release: helm upgrade jhub jupyterhub/jupyterhub --version=0.7.0 --values config.yaml
  4. Test the changes to the docker image (a rough command sketch of this loop follows the list)
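For reference, that loop looks roughly like the following as commands, assuming the notebook image is the one referenced under singleuser.image in config.yaml (account, region, repo, and tag are placeholders):

# Build and push the updated image to ECR (log in to ECR first if needed)
docker build -t <AWS_ACCOUNT>.dkr.ecr.eu-west-1.amazonaws.com/<REPO>:NEW_TAG .
docker push <AWS_ACCOUNT>.dkr.ecr.eu-west-1.amazonaws.com/<REPO>:NEW_TAG

# Bump the tag in config.yaml, then upgrade the release
helm upgrade jhub jupyterhub/jupyterhub --version=0.7.0 --values config.yaml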

However, the old docker image is still being used.

How must I change my development workflow, so that I can simply update docker image, and test?

Edit: Additional troubleshooting steps taken:

Tried deleting the helm release and re-installing:

helm delete --purge jhub && helm upgrade --install jhub jupyterhub/jupyterhub --namespace jhub --version=0.7.0 --values config.yaml

Tried deleting the helm release AND namespace, and re-installing:

helm delete --purge jhub && kubectl delete namespace jhub && helm upgrade --install jhub jupyterhub/jupyterhub --namespace jhub --version=0.7.0 --values config.yaml

Also tried overriding the imagePullPolicy value to Always (per Mostafa's suggestion in his answer):

hub:
  imagePullPolicy: Always

None of these worked; the old, original docker image is still being used.

What is strange is that when I inspect the docker images currently referenced in my Kubernetes cluster, I see the new docker image, but it is not the one actually being used.

kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}"

# output:
...
<AWS_ACCOUNT>.dkr.ecr.eu-west-1.amazonaws.com/<REPO>:NEW_TAG  # <-- not actually being used in jupyterhub
...
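To narrow this down, the image on a specific user pod can also be checked directly; this assumes the spawned notebook pod follows the default jupyter-<username> naming and lives in the jhub namespace:

# Image reference the pod spec asks for
kubectl get pod jupyter-<USERNAME> -n jhub -o jsonpath="{.spec.containers[*].image}"

# Full detail, including pull events
kubectl describe pod jupyter-<USERNAME> -n jhub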

Edit (2): I checked one of my pod descriptions and found a strange event message:

  Normal  Pulled                  32m   kubelet,  <<REDACTED>>  Container image "<AWS_ACCOUNT>.dkr.ecr.eu-west-1.amazonaws.com/<REPO>:NEW_TAG" already present on machine

The image referred to above is my new image, which I just uploaded to the image repo. It should be impossible for that image to already be present on the machine. Somehow the hash is the same for both the original image and the new image, or this is a bug?
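One way to tell whether the tag really points at new content (as opposed to the node caching a stale copy under the same tag) is to compare digests; a sketch, with pod and repo names as placeholders:

# Digest of the image the pod is actually running
kubectl get pod jupyter-<USERNAME> -n jhub -o jsonpath="{.status.containerStatuses[*].imageID}"

# Digest ECR has stored for NEW_TAG
aws ecr describe-images --repository-name <REPO> --image-ids imageTag=NEW_TAG \
  --query 'imageDetails[0].imageDigest'

# If they match, the pod really is running the newly pushed image;
# if they differ, the node resolved a stale local copy of the tag.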

-- James Wierzba
amazon-web-services
docker
jupyter-notebook
kubernetes-helm

1 Answer

3/8/2019

The docker image might not be updated due to having imagePullPolicy set to IfNotPresent, which means the following according to the Kubernetes documentation:

The default pull policy is IfNotPresent which causes the Kubelet to skip pulling an image if it already exists. If you would like to always force a pull, you can do one of the following:

  • set the imagePullPolicy of the container to Always.
  • omit the imagePullPolicy and use :latest as the tag for the image to use.
  • omit the imagePullPolicy and the tag for the image to use.
  • enable the AlwaysPullImages admission controller
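As a generic Kubernetes illustration of the first option (not specific to the JupyterHub chart; all names here are placeholders), the policy sits on the container spec:

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: app
      image: <AWS_ACCOUNT>.dkr.ecr.eu-west-1.amazonaws.com/<REPO>:NEW_TAG
      imagePullPolicy: Always   # force a pull on every pod start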

In your case you can set the value of imagePullPolicy to Always inside config.yaml while deploying the new chart, in order to make it pull the newest docker image of your code:

# Add this in your config.yaml (check if hub: already exists to avoid overriding it)
hub:
  imagePullPolicy: Always
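If it helps, the effective policy on the running pods can be verified after the upgrade with something like:

kubectl get pods -n jhub -o jsonpath="{.items[*].spec.containers[*].imagePullPolicy}"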
-- Mostafa Hussein
Source: StackOverflow