How to reattach released PersistentVolume in Kubernetes

5/29/2019

Here is my overall goal:

  • Have a MongoDB running

  • Persist the data through pod failures / updates etc

The approach I’ve taken:

  • K8S Provider: Digital Ocean

  • Nodes: 3

  • Create a PVC

  • Create a headless Service

  • Create a StatefulSet

Here’s a dumbed down version of the config:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: some-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: do-block-storage
---
apiVersion: v1
kind: Service
metadata:
  name: some-headless-service
  labels:
    app: my-app
spec:
  ports:
  - port: 27017
    name: my-app-database
  clusterIP: None
  selector:
    app: my-app
    tier: database
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app-database
  labels:
    app: my-app
    tier: database
spec:
  serviceName: some-headless-service
  replicas: 1
  selector:
    matchLabels:
      app: my-app
      tier: database
  template:
    metadata:
      labels:
        app: my-app
        tier: database
    spec:
      containers:
      - name: my-app-database
        image: mongo:latest
        volumeMounts:
        - name: some-volume
          mountPath: /data
        ports:
        - containerPort: 27017
          name: my-app-database
      volumes:
      - name: some-volume
        persistentVolumeClaim:
          claimName: some-pvc

This is working as expected. I can spin down the replicas to 0:

kubectl scale --replicas=0 statefulset/my-app-database

Spin it back up:

kubectl scale --replicas=1 statefulset/my-app-database

And the data will persist.

But one time, as I was messing around by scaling the statefulset up and down, I was met with this error:

Volume is already exclusively attached to one node and can't be attached to another
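
For context, a couple of read-only commands can show which node the volume is actually attached to (the PV name below is a placeholder; the real one shows up under kubectl get pv):

kubectl get volumeattachments
kubectl describe pv pvc-xxxxxxxx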

Being new to k8s, I deleted the PVC and “recreated” the same one:

kubectl delete pvc some-pvc
kubectl apply -f persistent-volume-claims/

The StatefulSet spun back up with a new PV, and the old PV was deleted, since persistentVolumeReclaimPolicy was set to Delete by default.

I set this new PV’s persistentVolumeReclaimPolicy to Retain to ensure the data would not be automatically removed, and then I realized: I’m not sure how I’d actually reclaim that PV. Earlier, to get past the “volume attachment” error, I deleted the PVC, which with my setup just provisions yet another new PV, and now I’m left with my data stranded in that Released PV.
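
For reference, flipping the reclaim policy is a one-line patch (again, the PV name is a placeholder):

kubectl patch pv pvc-xxxxxxxx -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'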

My main questions are:

  • Does this overall sound like the right approach for my goal?

  • Should I look into adding a claimRef to the dynamically created PV and then recreating a new PVC with that claimRef, as mentioned here: Can a PVC be bound to a specific PV? (see the sketch after this list)

  • Should I be trying to get that fresh statefulset PVC to actually use that old PV?

  • Would it make sense to try to reattach the old PV to the correct node, and how would I do that?
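
To make the claimRef idea concrete, here is a rough sketch of the rebinding approach I have in mind (untested on my side; pvc-xxxxxxxx is a placeholder for the real Released PV’s name):

# Clear the stale claimRef so the PV flips from Released back to Available
kubectl patch pv pvc-xxxxxxxx --type json -p '[{"op": "remove", "path": "/spec/claimRef"}]'

Then recreate the PVC pinned to that specific PV via spec.volumeName:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: some-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: do-block-storage
  volumeName: pvc-xxxxxxxx # placeholder: the old Released PV's name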

-- Jason Awbrey
kubernetes
statefulset

1 Answer

5/30/2019

If you want to use a StatefulSet with scalability, your storage must support it as well. There are two ways to handle this:

  • If the do-block-storage storage class supports ReadWriteMany, put all of the pods' data in a single volume.

  • Have each pod use its own volume: add volumeClaimTemplates to your StatefulSet.spec, and k8s will automatically create PVCs named like some-pvc-{statefulset_name}-{idx}:

spec:
  volumeClaimTemplates:
  - metadata:
      name: some-pvc
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
      storageClassName: do-block-storage
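
With replicas: 1 and the template above, the generated claim would be named some-pvc-my-app-database-0. Note that with volumeClaimTemplates, the volumeMounts entry references the template name (some-pvc) directly, so the standalone PVC and the volumes section in the pod spec are no longer needed.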

Update:

If you run more than one StatefulSet replica this way, you must deploy MongoDB with replication, so that each pod in the StatefulSet holds the same data.

So when the container runs the mongod command, you must add the option --replSet={name}. Once all pods are up, execute rs.initiate() to tell MongoDB how to handle data replication. When you scale the StatefulSet up or down, execute rs.add() or rs.remove() to tell MongoDB that the membership has changed.
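
A minimal sketch of what that looks like, assuming a replica set named rs0 (any name works as long as --replSet and the pods agree) and the pod/service names from the question. First, the relevant fragment of the StatefulSet's pod spec:

containers:
- name: my-app-database
  image: mongo:latest
  command: ["mongod", "--replSet", "rs0", "--bind_ip_all"]

Then, from outside the cluster:

# Initialize the replica set once pod 0 is running
kubectl exec -it my-app-database-0 -- mongo --eval 'rs.initiate()'

# After scaling up, register the new member; pod DNS names follow
# the <pod>.<headless-service> pattern
kubectl exec -it my-app-database-0 -- mongo --eval 'rs.add("my-app-database-1.some-headless-service:27017")'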

-- menya
Source: StackOverflow