I have a Kubernetes-based application consisting of multiple services (and pods) managed with a Helm chart. Postgres is used as the database for all services.
When the application is upgraded to a newer version, I run a DB migration script via initContainers.
The problem occurs when a migration script requires exclusive access to the DB (all other connections must be terminated); otherwise the script blocks.
The ideal solution would be to stop all pods, run the migration, and recreate them. But I am not sure how to achieve this properly with Kubernetes.
Thanks
> The ideal solution would be to stop all pods, run the migration, and recreate them. But I am not sure how to achieve this properly with Kubernetes.
This largely depends on your approach, specifically on your CI/CD tools. There are several strategies you can apply, but, as an illustration, presuming you have a GitLab pipeline (Jenkins could do the same, only the terminology differs), the steps would be:

1. Scale the affected Deployments down to zero replicas, so that every DB connection is closed.
2. Run the migration script as its own pipeline job.
3. Scale the Deployments back up once the migration has succeeded.
This same principle can be exercised in other orchestration/deployment tools as well, and you can even write a simple script that runs those kubectl commands directly, in one go, each step executing only if the previous one succeeded.
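As a rough sketch, such a pipeline job could look like the following GitLab CI fragment. Everything in it is an assumption: the deployment names (`svc-a`, `svc-b`), the replica counts, the kubectl image, and the `migrate.sh` script are placeholders for your own setup.

```yaml
# Hypothetical GitLab CI job: stop pods, migrate, recreate.
db-migrate:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    # Scale the services down so every DB connection is closed
    - kubectl scale deployment svc-a svc-b --replicas=0
    - kubectl wait --for=delete pod -l app=svc-a --timeout=120s
    - kubectl wait --for=delete pod -l app=svc-b --timeout=120s
    # Run the migration; the job aborts here if it fails
    - ./migrate.sh
    # Recreate the pods only if the migration succeeded
    - kubectl scale deployment svc-a svc-b --replicas=2
```

Because the runner stops at the first failing command, the Deployments are only scaled back up if the migration succeeds.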
> The ideal solution would be to stop all pods, run the migration, and recreate them. But I am not sure how to achieve this properly with Kubernetes.
I see from one of the comments that you use Helm, so I'd like to propose a solution leveraging Helm's hooks:
> Helm provides a hook mechanism to allow chart developers to intervene at certain points in a release's life cycle. For example, you can use hooks to:
>
> - Load a ConfigMap or Secret during install before any other charts are loaded.
> - Execute a Job to back up a database before installing a new chart, and then execute a second job after the upgrade in order to restore data.
> - Run a Job before deleting a release to gracefully take a service out of rotation before removing it.
https://helm.sh/docs/topics/charts_hooks/
You could package your migration as a Kubernetes Job and leverage the pre-install or pre-upgrade hook to run it. These hooks run after templates are rendered, but before any new resources are created in Kubernetes. Thus, your migrations will run before your Pods are deployed.
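A minimal sketch of such a migration Job, where the image, command, and secret names are placeholders for your own migration tooling:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: "db-migrations"
  annotations:
    # Run before the release's resources are installed/upgraded
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "0"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: "<your migration image>"            # placeholder
          command: ["/bin/sh", "-c", "./migrate.sh"] # placeholder
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: db-credentials               # placeholder
                  key: url
```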
To delete the deployments prior to running your migrations, create a second pre-install/pre-upgrade hook with a lower helm.sh/hook-weight that deletes the target deployments:
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: "pre-upgrade-hook1"
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-weight": "-1"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    metadata:
      name: "pre-upgrade-hook1"
    spec:
      restartPolicy: Never
      serviceAccountName: "<an SA with delete RBAC permissions>"
      containers:
        - name: kubectl
          image: "lachlanevenson/k8s-kubectl:latest"
          # Spell out the kubectl binary: `command` overrides the image's entrypoint
          command: ["kubectl", "delete", "deployment", "deploy1", "deploy2"]
```
The lower hook-weight will ensure this job runs prior to the migration job, which gives the following series of events:

1. helm upgrade is run.
2. The weight -1 hook deletes the target Deployments, terminating all of their DB connections.
3. The weight 0 hook runs the migrations against a database with no other clients connected.
4. The upgrade proceeds and the new Deployments are created.
Just make sure to keep all of the relevant Deployments in the same Chart.
From an automation/orchestration perspective, my sense is that problems like this are intended to be solved with Operators, using the recently released Operator Framework:
https://github.com/operator-framework
The idea is that there would be a Postgres Migrations Operator (which, to my knowledge, doesn't exist yet) that would lie idle, waiting for a custom resource describing the migration to be posted to the cluster/namespace.
The Operator would wake up, understand what's involved in the intended migration, do some analysis on the cluster to construct a migration plan, and then perform the steps as you describe: stop the pods, run the migration, and recreate them.
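To make that concrete, such a hypothetical Operator might watch for a custom resource like the one below. The API group, kind, and every field are invented for illustration, since no such operator exists:

```yaml
# Entirely hypothetical: no such operator or API group exists today.
apiVersion: migrations.example.com/v1alpha1
kind: PostgresMigration
metadata:
  name: orders-schema-v2
spec:
  database: orders                 # target DB (placeholder)
  exclusiveAccess: true            # operator drains all other connections first
  stopDeployments:                 # pods to stop before migrating
    - svc-a
    - svc-b
  migrationImage: "<your migration image>"
```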
That doesn't help you now, though.