volumeClaimTemplates and vsphereVolume

7/14/2021

I am trying to understand whether a PersistentVolume created through volumeClaimTemplates in a StatefulSet makes the mount available on every worker node, or only on the node where the pod is scheduled.

Here is the scenario: a StatefulSet with volumeClaimTemplates, 2 replicas, and a default storage class that uses the kubernetes.io/vsphere-volume provisioner (a minimal manifest for this setup is sketched below). The pods are currently running on nodes 1 and 2. What happens to the data when the pods are rescheduled onto different nodes, say nodes 3 and 4, either during an upgrade or because the original nodes go down? Would the data written to the mount by the old pods be accessible to the new pods? With a local volume the data stays on the original nodes, but I am not sure how vsphereVolume behaves in this case.
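For reference, a minimal sketch of the setup described above, modeled on the manifest in the Kubernetes StatefulSet tutorial. The names (web, www) and the nginx image are illustrative; storageClassName is omitted so the cluster's default storage class (the vSphere one here) is used:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: web
    spec:
      serviceName: "nginx"
      replicas: 2
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: registry.k8s.io/nginx-slim:0.8
            ports:
            - containerPort: 80
              name: web
            volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
      # One PVC per replica is created from this template
      # (www-web-0, www-web-1), each bound to its own PV.
      volumeClaimTemplates:
      - metadata:
          name: www
        spec:
          accessModes: [ "ReadWriteOnce" ]
          # storageClassName omitted: the default StorageClass applies
          resources:
            requests:
              storage: 1Gi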

I am assuming that the persistent volume provisioner, vsphereVolume in this case, makes the volume (and therefore the data) attachable from any node, but I couldn't confirm that. I will try to test this with MySQL or PostgreSQL.
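For completeness, a default storage class using the in-tree vSphere provisioner might look like the following sketch; the name and the diskformat value are illustrative:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: vsphere-default
      annotations:
        # marks this class as the cluster default
        storageclass.kubernetes.io/is-default-class: "true"
    provisioner: kubernetes.io/vsphere-volume
    parameters:
      diskformat: thin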

Thanks

-- cnu
kubernetes
persistent-volumes

1 Answer

7/15/2021

I found it in the documentation, at the end of the Writing to Stable Storage section; somehow I missed this earlier.

Even though web-0 and web-1 were rescheduled, they continue to serve their hostnames because the PersistentVolumes associated with their PersistentVolumeClaims are remounted to their volumeMounts. No matter what node web-0 and web-1 are scheduled on, their PersistentVolumes will be mounted to the appropriate mount points.

Based on the StatefulSet page, it's clear that the data does not live on any single node: whichever node a pod is rescheduled to, its PersistentVolume is detached from the old node, reattached to the new one, and remounted, so the mount/data stays accessible. This holds even when a deleted pod is re-created on a different node. A quick way to check is sketched below.
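One way to verify this, along the lines of the tutorial, is to write a marker into the volume, force the pod onto another node, and read the marker back. The commands below are a sketch assuming the manifest above; <node-of-web-0> is a placeholder for the node the pod currently runs on, and cordoning it is just one way to make the pod land elsewhere:

    # write a marker file into web-0's volume
    kubectl exec web-0 -- sh -c 'hostname > /usr/share/nginx/html/index.html'
    kubectl get pod web-0 -o wide        # note the current node
    kubectl cordon <node-of-web-0>       # keep the new pod off this node
    kubectl delete pod web-0             # the StatefulSet re-creates web-0
    kubectl get pod web-0 -o wide        # now on a different node
    # the marker is still there: the PV was reattached to the new node
    kubectl exec web-0 -- cat /usr/share/nginx/html/index.html
    kubectl uncordon <node-of-web-0>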

-- cnu
Source: StackOverflow