I would like to expand my persistent volume for a deployment on my GKE cluster running v1.12.5. I changed the storage class to enable volume expansion:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
parameters:
  type: pd-standard
provisioner: kubernetes.io/gce-pd
allowVolumeExpansion: true
reclaimPolicy: Delete
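For reference, the same change can be made without re-applying the whole manifest, since allowVolumeExpansion is one of the few StorageClass fields that can be modified in place. A sketch, assuming the class is named standard as above (requires access to the cluster):

```shell
# Enable in-place volume expansion on the existing StorageClass
kubectl patch storageclass standard \
  -p '{"allowVolumeExpansion": true}'
```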
I changed the size of my PVC afterwards:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: webcontent
  namespace: k8s-test
  annotations:
    volume.alpha.kubernetes.io/storage-class: default
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 2Gi # old size was 1Gi
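Equivalently, the resize can be requested with a one-line patch instead of editing the manifest. A sketch using the claim name and namespace from above (requires access to the cluster):

```shell
# Raise the request to 2Gi; the provisioner then resizes the backing disk
kubectl patch pvc webcontent -n k8s-test \
  -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'
```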
I checked the status of my PVC like this AFTER I deleted the pod (the pod was recreated within seconds by the deployment's replica settings):
kubectl get pvc -n k8s-test -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{"volume.alpha.kubernetes.io/storage-class":"default"},"name":"webcontent","namespace":"k8s-test"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"2Gi"}}}}
      pv.kubernetes.io/bind-completed: "yes"
      pv.kubernetes.io/bound-by-controller: "yes"
      volume.alpha.kubernetes.io/storage-class: default
      volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/gce-pd
    creationTimestamp: "2019-03-26T13:32:22Z"
    finalizers:
    - kubernetes.io/pvc-protection
    name: webcontent
    namespace: k8s-test
    resourceVersion: "12957822"
    selfLink: /api/v1/namespaces/k8s-test/persistentvolumeclaims/webcontent
    uid: 95dcdcba-4fcb-11e9-97fd-42010aa400b9
  spec:
    accessModes:
    - ReadWriteOnce
    dataSource: null
    resources:
      requests:
        storage: 2Gi
    storageClassName: standard
    volumeName: pvc-95dcdcba-4fcb-11e9-97fd-42010aa400b9
  status:
    accessModes:
    - ReadWriteOnce
    capacity:
      storage: 1Gi
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2019-03-26T15:35:28Z"
      message: Waiting for user to (re-)start a pod to finish file system resize of
        volume on node.
      status: "True"
      type: FileSystemResizePending
    phase: Bound
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
So it seems it is only waiting for the file system resize. As already mentioned, I deleted the pod several times, and I also set the replicas value to 0 to terminate the pod, but the file system resize never kicked in.
What am I doing wrong?
You mentioned that you checked the status after deleting the pod. Per the Kubernetes documentation, the PVC must first report the FileSystemResizePending condition; only then will restarting the pod, either by deleting it or by scaling the deployment down and back up, trigger the file system resize on the node.
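The sequence above can be sketched with kubectl as follows. Note that the deployment name webserver and the pod label app=webserver are placeholders, not taken from the question (requires access to the cluster):

```shell
# 1. Confirm the PVC reports the FileSystemResizePending condition
kubectl describe pvc webcontent -n k8s-test

# 2. Restart the pod so the kubelet can resize the file system,
#    either by deleting it (the deployment recreates it)...
kubectl delete pod -n k8s-test -l app=webserver

#    ...or by scaling the deployment down and back up
kubectl scale deployment webserver -n k8s-test --replicas=0
kubectl scale deployment webserver -n k8s-test --replicas=1

# 3. Verify that status.capacity now shows 2Gi
kubectl get pvc webcontent -n k8s-test
```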