I've created a GCP disk from a snapshot and now I'm trying to resize it via a PVC in Kubernetes: 100GB -> 400GB. I've applied:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: restored-resize
parameters:
  type: pd-standard
provisioner: kubernetes.io/gce-pd
allowVolumeExpansion: true
reclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: restored-graphite
spec:
  storageClassName: restored-resize
  capacity:
    storage: 400G
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: dev-restored-graphite
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-graphite
spec:
  # It's necessary to specify "" as the storageClassName
  # so that the default storage class won't be used, see
  # https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class-1
  storageClassName: restored-resize
  volumeName: restored-graphite
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 400G
The PVC's status shows 400G:
(...)
status:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 400G
  phase: Bound
However, the pod still mounts the disk at its previous size:
/dev/sdc 98.4G 72.8G 25.6G 74% /opt/graphite/storage
What am I doing wrong?
It seems to me that you have set 400G directly in the PV manifest, but as the documentation says, you should have edited only

resources:
  requests:
    storage: 400G

https://kubernetes.io/blog/2018/07/12/resizing-persistent-volumes-using-kubernetes/

and thus triggered the new condition FileSystemResizePending.

As of Kubernetes v1.11, such PVCs auto-resize after some time in this status; because of that, you shouldn't even have to restart the pod bound to the PVC.
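To see whether the claim has actually entered that condition, you could inspect its status. A sketch, assuming the PVC name from the question and the default namespace:

```shell
# Show the PVC's conditions; look for FileSystemResizePending
kubectl get pvc restored-graphite -o jsonpath='{.status.conditions}'

# Or the full human-readable view, including events from the resize controller
kubectl describe pvc restored-graphite
```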
But back to your problem: I would edit the PV manifest this way:

spec:
  storageClassName: restored-resize
  capacity:
    storage: 100G

so that the system reloads the old configuration and notices that the situation is not what it thinks it is. Or at least, that is what I would try (in another environment, not production, for sure).
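After resetting the PV capacity to the real disk size, the documented flow is to change only the claim's request and let the expansion controller reconcile. A sketch using the names from the question (run against your own cluster; output depends on your environment):

```shell
# 1. Confirm the StorageClass allows expansion
kubectl get storageclass restored-resize -o jsonpath='{.allowVolumeExpansion}'

# 2. Patch only the PVC's request; do not touch the PV's capacity directly
kubectl patch pvc restored-graphite \
  -p '{"spec":{"resources":{"requests":{"storage":"400G"}}}}'

# 3. Watch the claim until the FileSystemResizePending condition clears
#    and status.capacity reports the new size
kubectl get pvc restored-graphite -o yaml
```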