We use local persistent storage as the storage backend for our SOLR pods. The pods are scheduled redundantly across multiple Kubernetes nodes, so if one node goes down there are always enough instances left on the other nodes.
How can we drain such a node for maintenance without "migrating" the SOLR pods to other nodes? The most important thing for us is that kube-proxy stops sending new requests to the pods on the node in question, so that after some time we can do the maintenance without interrupting service for in-flight requests.
We tried cordon, but cordon only ensures that no new pods are scheduled to the node. Drain does not seem to work for pods with local persistent volumes.
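For reference, as far as we understand, cordon only flips the unschedulable flag on the Node object (sketch below, node name is a placeholder), which is why the existing pods on that node keep receiving traffic through their Services:

```yaml
# Roughly what `kubectl cordon worker-1` does: it only marks the node as
# unschedulable, so running pods and their Service endpoints are untouched.
apiVersion: v1
kind: Node
metadata:
  name: worker-1   # placeholder node name
spec:
  unschedulable: true
```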
You can check out pod anti-affinity.
https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
These constructs allow you to repel or attract pods when certain conditions are met.
In your case, pod anti-affinity with 'requiredDuringSchedulingIgnoredDuringExecution' may be your best bet. I haven't personally used it yet, but I hope it points you in the right direction.
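A rough, untested sketch of what that could look like in your SOLR StatefulSet (the names, labels, and image below are assumptions on my part), forcing the scheduler to place at most one SOLR pod per node:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: solr               # assumed name
spec:
  serviceName: solr
  replicas: 3
  selector:
    matchLabels:
      app: solr            # assumed label
  template:
    metadata:
      labels:
        app: solr
    spec:
      affinity:
        podAntiAffinity:
          # Hard requirement: never co-locate two pods carrying the app=solr
          # label on the same node (topologyKey = hostname).
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - solr
              topologyKey: kubernetes.io/hostname
      containers:
        - name: solr
          image: solr:8    # placeholder image
```

Note this only controls where pods are scheduled; it doesn't by itself take pods out of a Service's rotation, so it may only be part of the picture for your maintenance scenario.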