Can a Persistent Volume be resized?

10/31/2016

I'm running a MySQL deployment on Kubernetes; however, it seems my allocated space was not enough. Initially I added a persistent volume of 50GB, and now I'd like to expand that to 100GB.

I already saw that a persistent volume claim is immutable after creation, but can I somehow just resize the persistent volume and then recreate my claim?

-- perrohunter
google-kubernetes-engine
kubernetes

6 Answers

12/17/2018

Yes, as of 1.11, persistent volumes can be resized on certain cloud providers. To increase volume size:

  1. Edit the PVC (kubectl edit pvc $your_pvc) to specify the new size. The key to edit is spec.resources.requests.storage.

  2. Terminate the pod using the volume.

Once the pod using the volume is terminated, the filesystem is expanded and the size of the PV is increased. See the official docs on expanding persistent volume claims for details.
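
For example, assuming a PVC named mysql-data (a hypothetical name), the claim might look like this after the edit:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data        # hypothetical PVC name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi      # raised from the original 50Gi
```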

-- Dmitry Minkovsky
Source: StackOverflow

1/11/2018

It is possible in Kubernetes 1.9 (alpha in 1.8) for some volume types: gcePersistentDisk, awsElasticBlockStore, Cinder, glusterfs, rbd

It requires enabling the PersistentVolumeClaimResize admission plug-in and storage classes whose allowVolumeExpansion field is set to true.
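
A StorageClass that permits expansion might look like this (a sketch; the name is illustrative and the GCE PD provisioner is assumed):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-expandable   # illustrative name
provisioner: kubernetes.io/gce-pd
allowVolumeExpansion: true    # required for PVCs of this class to be resized
```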

See official docs at https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims

-- csanchez
Source: StackOverflow

11/1/2016

Update: volume expansion is available as a beta feature starting with Kubernetes v1.11 for in-tree volume plugins. It is also available as a beta feature for volumes backed by CSI drivers as of Kubernetes v1.16.

If the volume plugin or CSI driver for your volume supports volume expansion, you can resize a volume via the Kubernetes API:

  1. Ensure volume expansion is enabled for the StorageClass (allowVolumeExpansion: true is set on the StorageClass) associated with your PVC.
  2. Request a change in volume capacity by editing your PVC (spec.resources.requests).
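
The two steps above can be sketched as a pair of manifests (names are hypothetical and the GCE PD provisioner is assumed):

```yaml
# 1. StorageClass with expansion enabled
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: resizable             # hypothetical name
provisioner: kubernetes.io/gce-pd
allowVolumeExpansion: true
---
# 2. PVC after editing the requested capacity
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data                  # hypothetical claim name
spec:
  storageClassName: resizable
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi          # new, larger size
```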

For more information, see the official docs: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims

Original answer:
No, Kubernetes does not support automatic volume resizing yet.

Disk resizing is an entirely manual process at the moment.

Assume that you created a Kubernetes PV object with a given capacity, that the PV is bound to a PVC, and that it is attached/mounted to a node for use by a pod. If you increase the volume size, pods will continue to be able to use the disk without issue, but they will not have access to the additional space.

To enable the additional space on the volume, you must manually resize the partitions. You can do that by following the instructions here. You'd have to delete the pods referencing the volume first, wait for it to detach, then manually attach/mount the volume to some VM instance you have access to, and run through the required steps to resize it.

I opened issue #35941 to track the feature request.

-- Saad Ali
Source: StackOverflow

5/25/2018

There is some support for this in 1.8 and above, for some volume types, including gcePersistentDisk and awsElasticBlockStore, if certain experimental features are enabled on the cluster.

For other volume types, it must be done manually for now. In addition, support for doing this automatically while pods are online (nice!) is coming in a future version (currently slated for 1.11).

For now, these are the steps I followed to do this manually with an AzureDisk volume type (for managed disks) which currently does not support persistent disk resize (but support is coming for this too):

  1. Ensure PVs have reclaim policy "Retain" set.
  2. Delete the stateful set and related pods. Kubernetes should release the PVs, even though the PV and PVC statuses will remain Bound. Take special care for stateful sets that are managed by an operator, such as Prometheus -- the operator may need to be disabled temporarily. It may also be possible to use Scale to do one pod at a time. This may take a few minutes, be patient.
  3. Resize the underlying storage for the PV(s) using the Azure API or portal.
  4. Mount the underlying storage on a VM (such as the Kubernetes master) by adding them as a "Disk" in the VM settings. In the VM, use e2fsck and resize2fs to resize the filesystem on the PV (assuming an ext3/4 FS). Unmount the disks.
  5. Save the JSON/YAML configuration of the associated PVC.
  6. Delete the associated PVC. The PV should change to status Released.
  7. Edit the YAML config of the PV, after which the PV status should be Available:
    1. specify the new volume size in spec.capacity.storage,
    2. remove the spec.claimRef uid and resourceVersion fields, and
    3. remove status.phase.
  8. Edit the saved PVC configuration:
    1. remove the metadata.resourceVersion field,
    2. remove the metadata pv.kubernetes.io/bind-completed and pv.kubernetes.io/bound-by-controller annotations, and
    3. change the spec.resources.requests.storage field to the updated PV size, and
    4. remove all fields inside status.
  9. Create a new resource using the edited PVC configuration. The PVC should start in Pending state, but both the PV and PVC should transition relatively quickly to Bound.
  10. Recreate the StatefulSet and/or change the stateful set configuration to restart pods.
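
A sketch of what the PV edit in step 7 amounts to (Azure managed disk assumed; all names and paths below are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-data-0            # placeholder name
spec:
  capacity:
    storage: 100Gi           # step 7.1: the new volume size
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: data-myset-0       # keep the claim reference itself,
    namespace: default       # but drop its uid and resourceVersion (step 7.2)
  azureDisk:
    diskName: my-disk        # placeholder
    diskURI: /subscriptions/.../my-disk   # placeholder
# status.phase removed entirely (step 7.3)
```
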
-- Raman
Source: StackOverflow

11/28/2017

In terms of PVC/PV 'resizing', that's still not supported in k8s, though I believe it could potentially arrive in 1.9.

It's possible to achieve the same end result by dealing with the PVC/PV and (e.g.) a GCE PD directly, though.

For example, I had a gitlab deployment, with a PVC and a dynamically provisioned PV via a StorageClass resource. Here are the steps I ran through:

  1. Take a snapshot of the PD (provided you care about the data)
  2. Ensure the ReclaimPolicy of the PV is "Retain", patch if necessary as detailed here: https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/
  3. kubectl describe pv <name-of-pv> (useful when creating the PV manifest later)
  4. Delete the deployment/pod (probably not essential, but seems cleaner)
  5. Delete PVC and PV
  6. Ensure PD is recognised as being not in use by anything (e.g. google console, compute/disks page)
  7. Resize PD with cloud provider (with GCE, for example, this can actually be done at an earlier stage, even if the disk is in use)
  8. Create a k8s PersistentVolume manifest (this had previously been done dynamically via the StorageClass resource). In the PersistentVolume yaml spec, I had gcePersistentDisk with pdName: <name-of-pd> defined, along with other details that I'd grabbed at step 3. Make sure you update spec.capacity.storage to the new capacity you want the PV to have. (Although not essential, and it has no effect here, you may want to update the storage value in your PVC manifest as well, for posterity.)
  9. kubectl apply (or equivalent) to recreate your deployment/pod, PVC and PV

Note: some steps may not be essential, such as deleting some of the existing deployment/pod resources, though I personally prefer to remove them, since I know the ReclaimPolicy is Retain and I have a snapshot.
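
The PersistentVolume manifest from step 8 might look roughly like this (a sketch; the name is a placeholder, and the remaining details would come from the kubectl describe output in step 3):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gitlab-pv            # placeholder name
spec:
  capacity:
    storage: 100Gi           # updated to the new PD size
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  gcePersistentDisk:
    pdName: <name-of-pd>     # the resized GCE PD
    fsType: ext4
```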

-- eversMcc
Source: StackOverflow

1/10/2018

Yes, it can be, after version 1.8. Have a look at volume expansion:

Volume expansion was introduced in v1.8 as an Alpha feature

-- Ravindranath Akila
Source: StackOverflow