Snapshotting on Google Cloud/Kubernetes when using StorageClass persistent volumes

8/12/2017

StorageClasses are the new way of specifying dynamically provisioned volumes for persistent volume claims (PVCs) in Kubernetes. They avoid the need to explicitly provision a volume directly with the cloud provider (in my case Google Container Engine (GKE)).

Definition for the StorageClass (GKE already provides a default standard class):

---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  zone: europe-west1-b

Definition for the actual PVC:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-server-pvc
  namespace: staging
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 100Gi
  storageClassName: "standard"

Here is the result of kubectl get storageclass:

NAME                 TYPE
fast                 kubernetes.io/gce-pd
standard (default)   kubernetes.io/gce-pd

Here is the result of kubectl get pvc:

NAME             STATUS    VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
nfs-pvc          Bound     nfs                                        1Mi        RWX                          119d
nfs-server-pvc   Bound     pvc-905a810b-3f13-11e7-82f9-42010a840072   100Gi      RWO           standard       81d

I would like to continue taking snapshots of the volumes, but the dynamic naming of the created volumes (in this case pvc-905a810b-3f13-11e7-82f9-42010a840072) means I cannot keep using the following command, which I had been running via cron (note that the "nfs" name is now incorrect):

gcloud compute --project "XXX-XXX" disks snapshot "nfs" --zone "europe-west1-b" --snapshot-names "nfs-${DATE}"

I guess this boils down to whether Kubernetes allows explicit volume naming for StorageClass-based PVCs. The docs don't seem to allow for this. Any ideas?

-- elmpp
google-cloud-platform
google-kubernetes-engine
kubernetes

1 Answer

8/14/2017

One approach is to create the PV manually and give it a stable name that you can use in your scripts. You can use gcloud commands to create the underlying PD disk.
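For example, a minimal sketch; the disk name my-pd-0, the size, and the zone are assumptions chosen to match the PV definition below:

gcloud compute --project "XXX-XXX" disks create "my-pd-0" --zone "europe-west1-b" --size "10GB" --type "pd-ssd"

When you create the PV, give it a label: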

apiVersion: "v1"
kind: "PersistentVolume"
metadata:
  name: my-pv-0
  labels:
    pdName: my-pd-0
spec:
  capacity:
    storage: "10Gi"
  accessModes:
    - "ReadWriteOnce"
  storageClassName: fast
  gcePersistentDisk:
    fsType: "ext4"
    pdName: "my-pd-0"   # name of the GCE disk created above

Then bind the PVC to it using a label selector:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-pvc-0
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: fast
  selector:
    matchLabels:
      pdName: my-pd-0   # binds this claim to the PV labelled above
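
With a stable disk name in place, the snapshot cron job from the question then works again unchanged; a sketch, assuming the my-pd-0 disk and zone used above:

gcloud compute --project "XXX-XXX" disks snapshot "my-pd-0" --zone "europe-west1-b" --snapshot-names "my-pd-0-${DATE}"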
-- Warren Strange
Source: StackOverflow