I have a set of pods in my Kubernetes environment that act as "buffered resources" (identified by the absence of a certain label).
In my application (using kubernetes-client), I'd like to check whether a buffered resource is available and, if so, add a label so that it is no longer considered for other requests.
However, given parallelism, a pod that is marked as a buffered resource might be reserved by multiple threads at the same time, leading to all kinds of issues in the application.
Without locking the requests made to Kubernetes, is there a safe way to add a label only if its key does not already exist (and fail otherwise)?
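What I'm effectively looking for is the Kubernetes equivalent of Java's ConcurrentMap.putIfAbsent. As a local sketch of the semantics I want (the class, label key, and owner names are made up for illustration; the real labels live in the API server, not in a local map):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ReserveIfAbsent {
    // Local model of one pod's label map; in reality these labels
    // live in the Kubernetes API server, not in process memory.
    private final ConcurrentMap<String, String> labels = new ConcurrentHashMap<>();

    // Succeeds for exactly one caller: putIfAbsent is atomic, so only the
    // first thread to insert the key wins; every later caller sees the
    // existing value and gets false.
    boolean reserve(String labelKey, String owner) {
        return labels.putIfAbsent(labelKey, owner) == null;
    }

    public static void main(String[] args) {
        ReserveIfAbsent pod = new ReserveIfAbsent();
        System.out.println(pod.reserve("reserved-by", "worker-1")); // true: key was absent
        System.out.println(pod.reserve("reserved-by", "worker-2")); // false: already reserved
    }
}
```

The question is whether the Kubernetes API offers this check-then-set as a single atomic operation, rather than a read followed by a write.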
I'm using io.fabric8.kubernetes.client, and the code to update labels is more or less:
kubernetesClient.services().inNamespace(namespace).withName(resourceName).edit()
    .editMetadata()
        .addToLabels(Collections.unmodifiableMap(labels))
    .endMetadata()
    .done();
What is the best approach to handle concurrency when talking to the kubernetes api?
Edit: I see that Kubernetes has a resourceVersion field,
but from my first tests it does not seem to work as expected:
the following edit does NOT fail; it succeeds and even assigns a new resourceVersion:
kubernetesClient.services().inNamespace(namespace).withName(resourceName).edit()
    .editMetadata()
        .withResourceVersion("13213414141") // definitely does not match the existing one
        .addToLabels(Collections.unmodifiableMap(labels))
    .endMetadata()
    .done();
Edit 2: The kubectl equivalent is something like:
kubectl label pods mypod foo=bar --namespace my-name --resource-version="313"
which correctly fails with the error "the object has been modified; please apply your changes to the latest version and try again".
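For context, the behavior I expect from kubectl here is classic optimistic locking: compare the stored resourceVersion, apply the change only if it still matches, and fail otherwise so the caller can re-read and retry. A plain-Java model of that protocol (the Versioned record, version numbers, and method names are illustrative only, not the fabric8 API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;

public class OptimisticLabeling {
    // Versioned snapshot of an object's labels, analogous to metadata.resourceVersion.
    record Versioned(long resourceVersion, Map<String, String> labels) {}

    private final AtomicReference<Versioned> stored =
            new AtomicReference<>(new Versioned(1, Map.of()));

    // Add a label only if the caller's resourceVersion still matches the stored
    // one and the key is not already present; fail (return false) otherwise.
    boolean updateWithVersion(long expectedVersion, String key, String value) {
        Versioned current = stored.get();
        if (current.resourceVersion() != expectedVersion
                || current.labels().containsKey(key)) {
            return false; // "the object has been modified" / label already present
        }
        Map<String, String> next = new HashMap<>(current.labels());
        next.put(key, value);
        // compareAndSet makes the check-then-write atomic, modeling how the
        // API server rejects updates based on a stale resourceVersion.
        return stored.compareAndSet(current, new Versioned(expectedVersion + 1, next));
    }

    public static void main(String[] args) {
        OptimisticLabeling store = new OptimisticLabeling();
        System.out.println(store.updateWithVersion(1, "foo", "bar")); // true: version matched
        System.out.println(store.updateWithVersion(1, "foo", "baz")); // false: stale version
    }
}
```

My question boils down to: how do I get the fabric8 client to enforce this same resourceVersion precondition, the way the kubectl call above does?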