In Kubernetes, how can I wait for updated code to be deployed in a cluster?

11/17/2017

I have a working Kubernetes cluster hosted in the Google Kubernetes Engine. I can apply new code to the cluster by building a Docker image with the new code, pushing the image (with a new tag) to the Google Container Registry, and then kubectl apply-ing an updated manifest (using the new tag) to the cluster.
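
Concretely, that workflow looks roughly like this (the project name, image tag, and manifest path here are placeholders rather than my real values):

# Build an image containing the new code and tag it with a new version
docker build -t gcr.io/my-project/rails-app:v2 .
# Push the newly tagged image to the Google Container Registry
docker push gcr.io/my-project/rails-app:v2
# Apply the manifest that has been updated to reference the :v2 tag
kubectl apply -f deployment.yaml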

I have a manual step (specifically, rails db:migrate to apply new migrations to the database) that I want to happen once the new code has been deployed. I can execute this command just fine as a kubectl exec command. But there's a problem: I need to run it against the new image, not the old image. Running it against the old image will succeed, in the sense that the exit code will be 0, but it will not apply the new migrations, which are only present in the new image.
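
For example, the migration step is essentially the following (the pod and container names here are placeholders):

# Run the migration inside the Rails container of one of the running pods
kubectl exec rails-app-pod-12345 -c rails-app -- rails db:migrate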

My question: How can I wait for new code to be live in a cluster before I run a kubectl exec command against it?

Note that I can't do a kubectl run using the new image, because the container by itself can't access the database; it needs the "sidecar" CloudSQL proxy container running in the same pod.

One more constraint: I need to do this in a scripting environment (e.g. bash), so repeatedly running a command and examining the output manually isn't an option. Doing an automatic, scripted loop until some condition is met does count as a solution, but I'd really prefer something that doesn't require me to poll in my script.
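
For concreteness, the kind of scripted loop I could live with (but would rather avoid) might look roughly like this; the deployment name is a placeholder, and the status checks only approximate "fully rolled out":

DEPLOY=rails-app   # placeholder name for the Deployment
DESIRED=$(kubectl get deployment "$DEPLOY" -o jsonpath='{.spec.replicas}')
# Poll until every desired replica has been updated to the new pod template and is ready
until [ "$(kubectl get deployment "$DEPLOY" -o jsonpath='{.status.updatedReplicas}')" = "$DESIRED" ] && \
      [ "$(kubectl get deployment "$DEPLOY" -o jsonpath='{.status.readyReplicas}')" = "$DESIRED" ]; do
  sleep 5
done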

A sleep of some duration could sort of solve this problem, but there are problems with that approach:

  • The sleep time might not be long enough for everything to finish (and the time that takes is variable and difficult to predict).
  • If I sleep longer than I need to, the Rails app will refuse to work because it will recognize that there are database migrations that haven't been applied—which should be considered downtime for the app.
-- Jeff Terrell Ph.D.
kubernetes

1 Answer

11/17/2017

If I have understood the problem correctly, then I think init containers will solve it fairly well. Init containers let you run certain tasks before the main containers in a pod start.

So in your case, something along the lines of the following will serve the purpose (code adapted from the Kubernetes init containers documentation):

apiVersion: v1
kind: Pod
metadata:
  name: rails-app
  labels:
    app: myapp
  annotations:
    pod.beta.kubernetes.io/init-containers: '[
        {
            "name": "init-myservice",
            "image": "new-image",
            "command": ["sh", "-c", "rails db:migrate"]
        }
    ]'
spec:
  containers:
  - name: myapp-rails-app
    image: new-image

Let's run over how this will work:

  • You build the new image of the app and push it to the registry. You will, of course, have to modify the pod definition to include the init container as shown above.
  • You apply the new version to the deployment via kubectl.
  • The init container uses the same image as your app, but it only runs the rails db:migrate command.
  • After that, the actual app container starts.

Note that if you have N replicas of the pod under a Deployment or ReplicationController object, the migration might run N times (once for each pod). That is not the intended or ideal path, but it won't cause any issues, because the first pod that comes up will have applied the migration already.
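
On clusters running Kubernetes 1.6 or newer, the same idea can also be expressed with the spec.initContainers field instead of the annotation. A rough sketch with a Deployment (the names, replica count, and apiVersion are illustrative and may need adjusting for your cluster):

apiVersion: apps/v1        # use whichever Deployment API version your cluster supports
kind: Deployment
metadata:
  name: rails-app
spec:
  replicas: 2              # with N replicas the init container runs once per pod, as noted above
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      initContainers:
      # Runs to completion before the app container is started
      - name: init-migrate
        image: new-image
        command: ["sh", "-c", "rails db:migrate"]
      containers:
      - name: myapp-rails-app
        image: new-image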

-- Vishal Biyani
Source: StackOverflow