Is there a way to make kubectl apply restart deployments whose image tag has not changed?

9/5/2017

I've got a local deployment system that is mirroring our production system. Both are deployed by calling kubectl apply -f deployments-and-services.yaml

I'm tagging all builds with the current git hash, so for clean deploys to GKE every service gets a new Docker image tag and apply rolls them all out. Locally on minikube, though, the tag often doesn't change, which means the new code never runs. I was previously working around this by calling kubectl delete and then kubectl create when deploying to minikube, but as the number of services I'm deploying has grown, that is starting to stretch the dev cycle too far.
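
Concretely, the two flows look something like this (same manifest file as above):

    # Clean deploy to GKE: every image tag is new, so apply rolls everything out.
    kubectl apply -f deployments-and-services.yaml

    # Minikube workaround when the tags haven't changed: tear down and recreate.
    kubectl delete -f deployments-and-services.yaml
    kubectl create -f deployments-and-services.yaml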

Ideally, I'd like a better way to tell kubectl apply to restart a deployment rather than relying solely on the image tag changing.

I'm curious how people have been approaching this problem.

Additionally, I'm building everything with Bazel, which means I have to be pretty explicit about setting up my build commands. I'm thinking maybe I should switch to deleting/recreating just the one service I'm working on and leave the others running.

But in that case, maybe I should just look at Telepresence and run the service I'm developing outside of minikube altogether? What are best practices here?

-- macrael
bazel
kubernetes
minikube

2 Answers

7/16/2019

Kubernetes only triggers a new rollout when something in the spec has changed. If your image pull policy is set to Always, you can delete your pods to pull the new image. If you want Kubernetes to handle the rollout for you, you can update the YAML so the pod template contains a constantly changing metadata field (I use seconds since the epoch), which registers as a change and triggers a rollout.

Ideally, though, you should be tagging your images from your CI/CD pipeline with a unique tag such as the commit reference they were built from. That avoids this issue entirely and lets you take full advantage of the Kubernetes rollback feature.
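
As a minimal sketch of the changing-metadata approach, you can patch the pod template with a timestamp annotation; the deployment name my-app and the annotation key are placeholders:

    # Changing anything under spec.template triggers a new rollout,
    # even if the image tag is unchanged.
    kubectl patch deployment my-app -p \
      "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"redeploy-timestamp\":\"$(date +%s)\"}}}}}"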

-- HackyPenguin
Source: StackOverflow

9/5/2017

I'm not entirely sure I understood your question, but that may very well be my reading comprehension :) In any case, here are a few thoughts that popped up while reading this (again, not sure exactly what you're trying to accomplish).

Option 1: maybe what you're looking for is to scale down and back up, i.e. scale your deployment to 0 and then back up again. Given you're using a ConfigMap and maybe only want to update that, the command would be kubectl scale --replicas=0 -f foo.yaml and then back to whatever count you normally run, as sketched below.
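
A rough sketch of that round trip (foo.yaml and the replica count of 3 are just examples):

    # Scale the deployment(s) defined in foo.yaml down to zero, then back up.
    kubectl scale --replicas=0 -f foo.yaml
    kubectl scale --replicas=3 -f foo.yaml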

Option 2: if you want to, for example, remove and re-apply the deployment without killing any pods, you can use --cascade=false on kubectl delete (google it).
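
Presumably that means something like the following, where my-app is a placeholder deployment name:

    # Delete the Deployment object but leave its pods running (orphaned),
    # then re-apply the manifest to recreate the Deployment.
    kubectl delete deployment my-app --cascade=false
    kubectl apply -f deployments-and-services.yaml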

Option 3: look up kubectl rollout for managing deployments; not sure if it works on Services though.
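
For reference, the rollout subcommands look like this (my-app is a placeholder; the restart subcommand only exists in newer kubectl versions, 1.15+):

    kubectl rollout status deployment/my-app    # watch the current rollout
    kubectl rollout history deployment/my-app   # list previous revisions
    kubectl rollout undo deployment/my-app      # roll back to the previous revision
    # Newer kubectl (v1.15+) also has a direct restart:
    kubectl rollout restart deployment/my-app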

Finally, and this is only me talking, share some more details: which version of k8s are you using? Maybe provide an actual use-case example to better describe the issue.

-- Naim Salameh
Source: StackOverflow