LiquiBase and Kubernetes database rolling updates

6/24/2018

Let's say I have a database with a v1 schema, and an application that is tightly coupled to that v1 schema, i.e. an SQLException is thrown if the records in the database don't match the entity classes.

How should I deploy a change that alters the database schema and rolls out the new application version without a race condition, i.e. without a user querying the app for a field that no longer exists?

-- aclokay
kubernetes
liquibase

1 Answer

6/24/2018

This problem actually isn't specific to kubernetes; it happens in any system with more than one server -- kubernetes just makes it more front-and-center because of how automatic the rollover is. The words "tightly coupled" in your question are a dead giveaway of the real problem here.

That said, the "answer" actually will depend on which of the following mental models is the better fit for your team:

  • do not make two consecutive schemas contradictory
  • use a "maintenance" page that keeps traffic off of the pods until they are fully rolled out
  • just accept the SQLExceptions and add better retry logic to the consumers

We use the first one, because the kubernetes rollout is baked into our engineering culture and we know that pod-old and pod-new will be running simultaneously; schema changes therefore need to be incremental and backward compatible for at minimum one generation of pods.
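
To make that concrete, here is a minimal sketch of what "backward compatible for one generation" can look like at the application layer. It assumes a hypothetical change that renames users.email to users.email_address: the changelog only adds and backfills the new column (the old one is dropped in a later release), and the new code writes to both columns and reads whichever one is populated, so pod-old and pod-new can overlap without errors. All table, column, and class names here are made up for illustration.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    // Hypothetical expand/contract sketch: the schema change renames
    // users.email to users.email_address. The changelog only ADDS
    // email_address and backfills it; the old column is dropped in a later
    // release, once no pod of the previous generation can still be running.
    public class UserDao {

        // pod-new writes to both columns, so pod-old can keep reading "email".
        public void updateEmail(Connection conn, long userId, String email) throws SQLException {
            String sql = "UPDATE users SET email = ?, email_address = ? WHERE id = ?";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, email);
                ps.setString(2, email);
                ps.setLong(3, userId);
                ps.executeUpdate();
            }
        }

        // pod-new prefers the new column but tolerates rows not yet backfilled.
        public String findEmail(Connection conn, long userId) throws SQLException {
            String sql = "SELECT email, email_address FROM users WHERE id = ?";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setLong(1, userId);
                try (ResultSet rs = ps.executeQuery()) {
                    if (!rs.next()) {
                        return null;
                    }
                    String newValue = rs.getString("email_address");
                    return newValue != null ? newValue : rs.getString("email");
                }
            }
        }
    }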

However, sometimes we just accept that the engineering effort to do that costs more than the 500s that a specific breaking change will incur, so we cheat: we scale the replicas low, roll it out, and warn our monitoring team that there will be exceptions but they'll blow over. We can do that partially because the client has retry logic built into it.
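
As a rough illustration of what that client-side retry can look like (names and back-off values are made up, not our actual code), a short, bounded retry on SQLException is usually enough to ride out the window in which a pod is still answering queries against the wrong schema generation:

    import java.sql.SQLException;
    import java.util.concurrent.Callable;

    // Sketch of a bounded retry around a JDBC call. The assumption is that the
    // breaking window is short (one rollout), so a few attempts with a small
    // back-off are enough; anything still failing afterwards surfaces as the
    // original SQLException and shows up in monitoring as expected.
    public final class SqlRetry {

        public static <T> T withRetry(Callable<T> call, int maxAttempts, long backoffMillis)
                throws Exception {
            SQLException last = null;
            for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                try {
                    return call.call();
                } catch (SQLException e) {
                    last = e;                               // remember the failure
                    Thread.sleep(backoffMillis * attempt);  // linear back-off between attempts
                }
            }
            throw last;
        }
    }

    // usage, e.g. around a DAO call:
    //   String email = SqlRetry.withRetry(() -> dao.findEmail(conn, 42L), 3, 200L);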

-- mdaniel
Source: StackOverflow