I have Minikube (v1.1.0) running locally with Helm (v2.13.1) initialized, and I connected the local Docker daemon to Minikube by running eval $(minikube docker-env). In the code base of my application I created a chart with helm create chart. I changed the first few lines of ./chart/values.yaml to:
image:
  repository: app-development
  tag: latest
  pullPolicy: Never
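For reference, the deployment template that helm create generates consumes these values roughly as follows (an excerpt of what templates/deployment.yaml looks like in the v2.13 scaffold; the exact file may differ slightly by Helm version):

      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}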
I build the image locally and install/upgrade the chart with Helm:
docker build . -t app-development
helm upgrade --install example ./chart
Now, this works perfectly the first time, but if I make changes to the application I would like to run the above two commands again to upgrade the image. Is there any way to get this working?
Workaround
To get the expected behaviour I can delete the release from Minikube and install it again:
docker build . -t app-development
helm del --purge example
helm install --name example ./chart
When you make a change like this, Kubernetes is looking for some change in the Deployment object. If it sees that you want 1 Pod running app-development:latest, and it already has 1 Pod running an image named app-development:latest, then it's in the right state and it doesn't need to do anything (even if the local image that has that tag has changed).
The canonical advice here is to never use the :latest tag with Kubernetes. Every time you build an image, use a distinct tag (a timestamp or the current source control commit ID are easy unique things). With Helm it's easy enough to inject this based on a value you pass in:
image: app-development:{{ .Values.tag | default "latest" }}
This sort of build sequence would look a little more like
TAG=$(date +%Y%m%d-%H%M%S)
docker build -t "app-development:$TAG" .
helm upgrade --install example ./chart --set "tag=$TAG"
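If you'd rather keep the repository/tag split that's already in your values.yaml (instead of adding a separate top-level tag value), the same loop can set image.tag directly; this sketch reuses the release and chart names from your question and a short commit ID as the tag:

TAG=$(git rev-parse --short HEAD)
docker build -t "app-development:$TAG" .
helm upgrade --install example ./chart --set "image.tag=$TAG"

The scaffold's image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}" line then picks up the new tag without any template changes.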
If you're actively developing your component you may find it easier to separate "hacking on code" from "deploying into Kubernetes" as much as you can. Some amount of overlap tends to be inevitable, but Kubernetes really isn't designed to be a live development environment.