Can I use an existing GCE persistent disk in the volumeClaimTemplate of a Kubernetes StatefulSet?

9/4/2017

I am using Google Container Engine to run a StatefulSet for a MongoDB replica set (3 replica pods).

This works fine with dynamic provisioning of persistent storage; that is, new storage is provisioned for each pod when the StatefulSet is created.

But if I restart the StatefulSet, it seems I cannot re-bind the old persistent volumes, because new storage will be provisioned again. This means the data is lost. Ideally, the persistent storage should survive the deletion of the Kubernetes cluster itself, with the data preserved and ready to be re-used in a new cluster.

Is there a way to create GCE Persistent disks and use them in the persistent volume claim of the StatefulSet?

[Updated 20 September 2017]

Found the answer. This is the solution (credit to @RahulKrishnan R A):

  1. Create a StorageClass, specifying the underlying disk type and zone.

  2. Create a PersistentVolume that specifies the StorageClass created above, and reference the persistent disk you wish to mount.

  3. Create a PersistentVolumeClaim. It is important to name the PVC <volume claim template name>-<statefulset name>-<ordinal number>. (The correct name is the trick!) In the claim, specify the volumeName of the PV created above and the storage class (see the sketch after this list).

  4. Create as many PVs and PVCs as you have replicas, each with the correct name.
  5. Create the StatefulSet with the PVC template.
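
For concreteness, here is a minimal sketch of steps 1-3 for one replica. All names here are hypothetical: it assumes a StatefulSet named mongo whose volume claim template is named storage (so the PVC for ordinal 0 must be named storage-mongo-0), and a pre-created GCE disk named mongo-disk-0.

kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zone: asia-east1-a
---
# PV referencing the pre-created GCE disk (step 2)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv-0
spec:
  storageClassName: slow
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: mongo-disk-0
    fsType: ext4
---
# PVC whose name follows <template>-<statefulset>-<ordinal> (step 3)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: storage-mongo-0
spec:
  storageClassName: slow
  volumeName: mongo-pv-0
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

Repeat the PV/PVC pair for each ordinal (storage-mongo-1, storage-mongo-2, ...) before creating the StatefulSet, so the pods bind to the pre-created claims instead of triggering dynamic provisioning.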
-- conundrum
kubernetes
mongodb
persistent-volume-claims
statefulset

2 Answers

5/20/2019

It looks like newer Kubernetes (1.12) supports using existing volumes, which can be handy if you already have disks with data. For instance, my app doesn't have a high DB load and I'm happy running the replica set with 3 instances (PSA). For each of those I created a StatefulSet with one replica, using an existing gcePersistentDisk for the PRIMARY and the SECONDARY. Below is the configuration for the second node:

apiVersion: v1
kind: Service
metadata:
  name: mongo-svc-b
spec:
  ports:
    - port: 27017
      targetPort: 27017
  clusterIP: None   # headless Service: gives the pod a stable DNS name
  selector:
    app: si
    tier: db
    node: b
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo-b
spec:
  replicas: 1
  selector:
    matchLabels:
      app: si
      tier: db
      node: b
  serviceName: mongo-svc-b   # must match the headless Service defined above
  template:
    metadata:
      labels:
        app: si
        tier: db
        node: b
    spec:
      containers:
        - name: mongo
          image: mongo:3.2
          command: ["mongod"]
          args: ["--replSet", "si"]
          ports:
            - containerPort: 27017
            - containerPort: 28017
          volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
      volumes:
        # Mount the pre-existing GCE PD directly; no PVC is involved
        - name: mongo-persistent-storage
          gcePersistentDisk:
            pdName: mongo-disk-b-green
            fsType: ext4
-- dimka
Source: StackOverflow

9/17/2017

Method 1: Dynamic

You can add a volumeClaimTemplates section as follows in the statefulset.yaml file, along with the StatefulSet definition:

volumeClaimTemplates:
  - metadata:
      name: storage
      annotations:
        volume.beta.kubernetes.io/storage-class: slow
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi

Create the storage class in a storage.yaml file:

kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zone: asia-east1-a

Method 2: Static PV

https://github.com/rahulkrishnanfs/percona-xtradb-statefulset-cluster-k8s/blob/master/percona.yml

Note: use persistentVolumeReclaimPolicy: Retain if you would like to retain the volume (and its data) after the claim is released.

PersistentVolumes can be provisioned statically by an administrator, or dynamically based on a StorageClass resource.
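
For illustration, a statically provisioned PV with the Retain policy might look like the following minimal sketch (the PV and disk names here are hypothetical, not taken from the linked percona.yml):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  # Keep the underlying GCE disk (and its data) when the claim is deleted
  persistentVolumeReclaimPolicy: Retain
  gcePersistentDisk:
    pdName: existing-data-disk
    fsType: ext4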

-- RahulKrishnan R A
Source: StackOverflow