Kubernetes Engine - Pod deployment not updating to the latest image

6/30/2019

I’m following this tutorial for Google Cloud Platform: https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app . Basically I cloned the example hello-app project from GitHub both in Google Cloud (using Google Cloud Shell) and locally on my machine, because I want to practise this tutorial using both approaches: the cloud approach, and my local machine (using the Google Cloud SDK), where I would build the Docker image, push it to the cloud, and then run it on Kubernetes.

1- My first question: when I got to step 8, I changed the source code string “Hello, World” to “Hello, world! Version 2”, and the string “Version: 1.0.0” to “Version: 2.0.0” in the Go file, basically these lines:

fmt.Fprintf(w, "Hello, world! Version 2\n")
fmt.Fprintf(w, "Version: 2.0.0\n")

I realised that I had changed the source code on my local machine, not the copy on the cloud. I then went to the Google Cloud Shell in the Console, re-built a Docker image with the v2 tag, and pushed it to the Google Container Registry, not realising that I was building the image from the unchanged project stored on the cloud rather than the changed one on my local machine. When I then applied a rolling update to the existing deployment with an image update using kubectl, unsurprisingly, it didn’t work.

So, to fix this, I needed to build an image from the (changed) source code on my local machine and push that image to the Google Container Registry (using the Google Cloud SDK shell). That’s the theory, at least as I understand it. I created an image with exactly the same tag (i.e. v2), bearing in mind there was already a v2 image (built from the unchanged code) in the Container Registry from my previous step. I wondered if it would simply overwrite the existing image, which it did: looking at the Container Registry > Images section, I can see a v2 image updated just a few seconds ago. Now everything is set for the final step, which is to apply a rolling update to the existing deployment with an image update:

kubectl set image deployment/hello-web hello-web=gcr.io/${PROJECT_ID}/hello-app:v2

This was successful as I got the response:

deployment.extensions/hello-web image updated

However, when I navigated to Kubernetes Engine > Workloads to view the deployed app, although the status shows OK, the Managed pods section shows pods created yesterday and the day before (in the Created on date column), not today’s deployment (June 29th):

Kubernetes - Managed Pods

1 (a) - which brings me to a side question (still relating to the first question): does the Revision column in the table above mean the number of times I deployed new pods from an image? I did indeed try this step a few times in a vain attempt to fix the issue (I think it might’ve been 4 times).

Going back to the main question: similarly, if I load the site via the external IP of the Load Balancer service, it doesn’t show the changed code. Also, when I check the latest image pushed to the Container Registry, by navigating to Container Registry > Images, I can see a v2 of the image uploaded minutes ago. So the Container Registry does have the latest image I uploaded (meaning it overwrote the previous version 2 of the same image name; otherwise the push should have failed). So I don’t quite understand: shouldn’t the last step (below) have deployed that latest v2 image from the Container Registry?:

kubectl set image deployment/hello-web hello-web=gcr.io/${PROJECT_ID}/hello-app:v2

I’m not sure what I’m missing.
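One way to see what the cluster is actually running is to compare the image referenced in the deployment spec with the image the pods were started from. A sketch, assuming the deployment name and label that the hello-app tutorial creates (hello-web, app=hello-web):

```shell
# Show the image the deployment spec currently references.
kubectl get deployment hello-web \
  -o jsonpath='{.spec.template.spec.containers[0].image}'

# Show the image each running pod was actually started from, including
# its immutable digest. If this digest predates the latest push, the
# pods are still running the old build even though the tag matches.
kubectl get pods -l app=hello-web \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.containerStatuses[0].imageID}{"\n"}{end}'
```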

2- As I’m new to Docker, I was wondering: is there a way to pull an image from the Container Registry and view the source code inside it? Otherwise, how is it possible to verify which image contains which version of the source code? This can cause much confusion, with many versions of images alongside equally many versions of the source code in history. How can you tell which source code commit corresponds to which Docker image?
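As a sketch of what such an inspection could look like (tag and project variable as in the tutorial; note that hello-app ships a compiled Go binary, so the image contains the binary rather than readable source):

```shell
# Pull the image locally, then open a shell inside it to browse its
# filesystem; overriding the entrypoint stops the app from starting.
docker pull gcr.io/${PROJECT_ID}/hello-app:v2
docker run --rm -it --entrypoint sh gcr.io/${PROJECT_ID}/hello-app:v2

# Alternatively, export the image filesystem as a tar without running it.
docker save gcr.io/${PROJECT_ID}/hello-app:v2 -o hello-app-v2.tar
```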

3- Finally, can anyone advise on best practices for managing Kubernetes in different scenarios? Say you deployed a container from an updated image version, but realised afterwards that there are issues or missing features, so you want to roll back to the previous deployment. Are there best practices for managing such scenarios?
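For the roll-back part specifically, Deployments keep a revision history that kubectl can act on. A sketch, assuming the hello-web deployment from the tutorial:

```shell
# List the recorded revisions of the deployment.
kubectl rollout history deployment/hello-web

# Roll back to the immediately previous revision...
kubectl rollout undo deployment/hello-web

# ...or to a specific revision number taken from the history output.
kubectl rollout undo deployment/hello-web --to-revision=2

# Watch the rollback complete.
kubectl rollout status deployment/hello-web
```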

Apologies for the long-winded text, and many thanks in advance.

-- Hazzaldo
docker
google-cloud-platform
google-kubernetes-engine
kubernetes

2 Answers

6/30/2019

When you make a change to a deployment, Kubernetes only actually does something if you change the text of the deployment spec. In your example, when you kubectl set image ...:v2, the deployment was already at v2, so Kubernetes sees that the current state of the pods matches what it expects and does nothing. If you kubectl delete pod, the pods will get recreated, but again Kubernetes will see that the node already has a v2 image cached and start that again.

The simplest, cleanest way out of this is to accept that you’ve published a v2, even if not the v2 you expected, and build/push/deploy your changed image as v3.
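Concretely, that could look like the following, run from the local machine with the corrected source (image and deployment names as in the tutorial):

```shell
# Build from the corrected local source, tag it v3, and push it.
docker build -t gcr.io/${PROJECT_ID}/hello-app:v3 .
docker push gcr.io/${PROJECT_ID}/hello-app:v3

# Point the deployment at the new tag and watch the rollout; because
# the tag actually changed, Kubernetes will replace the pods.
kubectl set image deployment/hello-web hello-web=gcr.io/${PROJECT_ID}/hello-app:v3
kubectl rollout status deployment/hello-web
```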

(Also consider an image versioning scheme based on a source control commit ID or a datestamp, which will be easier to generate uniquely and statelessly.)
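A minimal sketch of such a scheme (the datestamp fallback is an assumption, for builds made outside a git checkout):

```shell
# Use the short git commit ID as the image tag, falling back to a
# datestamp when not inside a git repository. Every build gets a
# unique tag, so "did the rollout pick it up?" is never ambiguous.
TAG=$(git rev-parse --short HEAD 2>/dev/null || date +%Y%m%d-%H%M%S)

docker build -t "gcr.io/${PROJECT_ID}/hello-app:${TAG}" .
docker push "gcr.io/${PROJECT_ID}/hello-app:${TAG}"
kubectl set image deployment/hello-web "hello-web=gcr.io/${PROJECT_ID}/hello-app:${TAG}"
```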

-- David Maze
Source: StackOverflow

6/30/2019

1-) Are you sure that :v2 isn't already running?

1a) Revision is the revision of the Deployment. Every change you make to its spec increases it; changing the image from :v1 to :v2 should change it.
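You can list those revisions, and what each one deployed, with kubectl (deployment name taken from the question):

```shell
# List the recorded revisions of the deployment.
kubectl rollout history deployment/hello-web

# Show the pod template (including the image) used by one revision.
kubectl rollout history deployment/hello-web --revision=2
```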

2) You can launch any specific image with --entrypoint bash and browse around. If your code ships as readable source you can read it as on any other computer; if it’s compiled, it gets harder.

On the other hand, you should trust your build process/build pipeline to properly tag the build and just checking the tag should be enough to be confident that the correct version is running.
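One common way to make a tag verifiable is to stamp the source commit into the image at build time and read it back later. A sketch, using an arbitrary (hypothetical) label name:

```shell
# Record the commit the image was built from as an image label.
docker build \
  --label org.example.source-commit=$(git rev-parse HEAD) \
  -t gcr.io/${PROJECT_ID}/hello-app:v2 .

# Later, read the label back to see which commit is inside the image.
docker inspect \
  --format '{{ index .Config.Labels "org.example.source-commit" }}' \
  gcr.io/${PROJECT_ID}/hello-app:v2
```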

-- Andreas Wederbrand
Source: StackOverflow