Migrating K8S Stateful pods to new node-pool: what would happen to its GCEPersistentDisk resources?

9/22/2018

I have a Cassandra stateful workload, and I would like to migrate it to a new node pool in the same GKE cluster. The persistent volume of each Cassandra pod is backed by a GCEPersistentDisk resource.

During workload (i.e. Cassandra pod) migration, what will happen to the underlying persistent volumes? Will each persistent volume automatically move to the new node as well? I'm assuming that each persistent volume (or GCEPersistentDisk resource) is bound to a GKE node.
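For context, this is roughly how I have been checking which disk backs each volume and which node it is currently attached to (the disk name and zone below are placeholders):

    # Map each PV to the GCE persistent disk and claim behind it
    kubectl get pv -o custom-columns=NAME:.metadata.name,DISK:.spec.gcePersistentDisk.pdName,CLAIM:.spec.claimRef.name

    # The "users" field lists the instance (node) the disk is attached to
    gcloud compute disks describe my-cassandra-disk --zone us-central1-a --format="value(users)"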

Besides the regular migration commands (e.g. cordon the old nodes, drain the old nodes that run Cassandra pods), are there any extra commands I should run to make sure that data is not lost during this pod migration? The standard steps I have in mind are sketched below.
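For reference (node-pool label value and node names are placeholders):

    # Cordon every node in the old pool so nothing new is scheduled there
    # (GKE nodes carry the cloud.google.com/gke-nodepool label)
    for node in $(kubectl get nodes -l cloud.google.com/gke-nodepool=old-pool -o name); do
      kubectl cordon "$node"
    done

    # Drain the old nodes one at a time; the evicted Cassandra pods are
    # rescheduled by the StatefulSet controller onto the new pool
    kubectl drain gke-mycluster-old-pool-1234 --ignore-daemonsets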

-- twimo
cassandra
google-kubernetes-engine
kubernetes

1 Answer

9/22/2018

Short answer: The GCEPersistentDisks will move with your Cassandra pods.

When a pod moves from one node to another, its GCEPersistentDisk is detached from the current node; once the pod is scheduled onto another node, Kubernetes re-attaches the disk to that new node.
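You can watch the detach/re-attach happen while you drain, for example like this (the pod name and label below are assumptions about how your StatefulSet is labeled):

    # Watch the Cassandra pods get evicted and land on nodes in the new pool
    kubectl get pods -l app=cassandra -o wide -w

    # The attach/detach controller records events such as SuccessfulAttachVolume
    kubectl describe pod cassandra-0
    kubectl get events --sort-by=.lastTimestamp | grep -i volume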

In the event that your current node is shut down abruptly, the GCEPersistentDisk will be released (detached), and eventually Kubernetes will schedule your workload on a new node and re-attach the disk. This assumes you haven't selected the cloud provider option that deletes the volume when the instance is terminated.
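As a related safeguard, it's worth confirming before the migration that the reclaim policy on the PersistentVolumes is Retain, so that deleting a claim can never take the underlying disk with it. Roughly (the PV name is a placeholder):

    # Check the reclaim policy on the PVs backing Cassandra
    kubectl get pv -o custom-columns=NAME:.metadata.name,RECLAIM:.spec.persistentVolumeReclaimPolicy

    # Switch any "Delete" PVs to "Retain" so the GCE disk outlives the claim
    kubectl patch pv pvc-1234abcd -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'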

In summary, it should all work seamlessly, since Kubernetes talks to the cloud provider through its cloud provider integration. Note that the in-tree cloud provider integration is being deprecated in favor of the Kubernetes Cloud Controller Manager.

-- Rico
Source: StackOverflow