Kubernetes getting cluster locks over a replication

12/22/2015

I have an application running in a Kubernetes pod that is replicated using a replication controller. However, I need to run some critical tasks that should be performed by a single application instance (one replica) at a time. Previously I used ZooKeeper to get a cluster lock for that task. Is there a way in Kubernetes to get a cluster lock for a particular replication controller?

-- Dimuthu
kubernetes

1 Answer

12/22/2015

Kubernetes doesn't have a cluster lock object, but you can use an annotation on the replication controller to specify the lock holder and TTL.

For example, each pod could read the annotation key "lock", and if it is empty (or its TTL has expired), try to write "lock": "pod-xyz: 2015-12-22T18:39:12+00:00". If multiple writes are attempted concurrently, Kubernetes will accept one and reject the others with a 409 Conflict, because the resource version in the losing requests will be stale. The lock holder would then keep updating the annotation to refresh the TTL.
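A minimal sketch of that compare-and-swap logic, using an in-memory stand-in for the API server rather than a real Kubernetes client (the class and function names here are hypothetical, not part of any Kubernetes API):

```python
import datetime

LOCK_TTL = datetime.timedelta(seconds=30)  # assumed lease length

class ConflictError(Exception):
    """Stands in for an HTTP 409 Conflict from the API server."""

class FakeAPIServer:
    """In-memory stand-in for the API server: stores one object's
    annotations plus a resourceVersion, and rejects any write made
    against a stale version (the optimistic-concurrency behaviour
    the answer relies on)."""
    def __init__(self):
        self.annotations = {}
        self.resource_version = 1

    def get(self):
        return dict(self.annotations), self.resource_version

    def update(self, annotations, expected_version):
        if expected_version != self.resource_version:
            raise ConflictError()  # resourceVersion mismatch -> 409
        self.annotations = dict(annotations)
        self.resource_version += 1

def try_acquire(server, pod_name, now):
    """Try to take (or refresh) the "lock" annotation. Returns True
    if this pod holds the lock after the call."""
    annotations, version = server.get()
    lock = annotations.get("lock")
    if lock:
        holder, _, stamp = lock.partition(": ")
        expiry = datetime.datetime.fromisoformat(stamp) + LOCK_TTL
        if holder != pod_name and now < expiry:
            return False  # someone else holds an unexpired lock
    annotations["lock"] = "%s: %s" % (pod_name, now.isoformat())
    try:
        server.update(annotations, version)  # compare-and-swap
        return True
    except ConflictError:
        return False  # lost the race; retry on the next tick
```

With two pods racing, only one write succeeds; the loser sees the conflict and backs off until the TTL lapses, at which point it can take over the expired lease.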

If you have a service that corresponds to this replication controller, it might make sense to put the lock annotation on the service instead of the RC. Then the locking semantics would survive software upgrades (e.g. rolling-update). The annotation can go on any object, so there's some flexibility to figure out what works best for you.

podmaster.go has a good example of the logic you might use to implement this. It runs directly against etcd, which you could also do if you don't mind introducing another component.

-- CJ Cullen
Source: StackOverflow