GKE Kubernetes Persistent Volume

7/1/2018

I am trying to use a persistent volume for my RethinkDB server, but I get this error:

Unable to mount volumes for pod "rethinkdb-server-deployment-6866f5b459-25fjb_default(efd90244-7d02-11e8-bffa-42010a8400b9)": timeout expired waiting for volumes to attach/mount for pod "default"/"rethinkdb-server-deployment-
Multi-Attach error for volume "pvc-f115c85e-7c42-11e8-bffa-42010a8400b9" Volume is already used by pod(s) rethinkdb-server-deployment-58f68c8464-4hn9x

I think that Kubernetes deploys the new pod before removing the old one, so the volume can't be shared between them because my PVC is ReadWriteOnce. The persistent volume must be created automatically, so I can't create a persistent disk by hand, format it, and so on.

My configuration:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: default
  name: rethinkdb-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi



apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: default
  labels:
    db: rethinkdb
    role: admin
  name: rethinkdb-server-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rethinkdb-server
  template:
    metadata:
      name: rethinkdb-server-pod
      labels:
        app: rethinkdb-server
    spec:
      containers:
      - name: rethinkdb-server
        image: gcr.io/$PROJECT_ID/rethinkdb-server:$LAST_VERSION
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - containerPort: 8080
          name: admin-port
        - containerPort: 28015
          name: driver-port
        - containerPort: 29015
          name: cluster-port
        volumeMounts:
        - mountPath: /data/rethinkdb_data
          name: rethinkdb-storage
      volumes:
      - name: rethinkdb-storage
        persistentVolumeClaim:
          claimName: rethinkdb-pvc

How do you handle this?

-- fstn
google-kubernetes-engine
kubernetes
persistent-volumes

1 Answer

7/13/2018

I see that you've attached the PersistentVolumeClaim to a Deployment, and that pods are being replaced, for example when the Deployment is updated or the node pool is scaled.

A PersistentVolumeClaim will work with a Deployment, but only as long as a single pod uses it at a time. That is why the error message showed up: it says the volume is still in use by the existing pod (rethinkdb-server-deployment-58f68c8464-4hn9x) when a replacement pod is created.

Because the Deployment replaces and scales pods, other replicas try to mount and use the same volume, and a ReadWriteOnce volume can only be attached to one node at a time.
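
If you keep a single-replica Deployment, one way to avoid that overlap is the Recreate update strategy, which deletes the old pod (and lets its disk detach) before the new one is created. A minimal sketch, reusing the question's manifests but trimmed to the storage-related parts:

apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: default
  name: rethinkdb-server-deployment
spec:
  replicas: 1                 # Recreate only makes sense with a single replica
  strategy:
    type: Recreate            # old pod is deleted before the new one starts
  selector:
    matchLabels:
      app: rethinkdb-server
  template:
    metadata:
      labels:
        app: rethinkdb-server
    spec:
      containers:
      - name: rethinkdb-server
        image: gcr.io/$PROJECT_ID/rethinkdb-server:$LAST_VERSION
        volumeMounts:
        - mountPath: /data/rethinkdb_data
          name: rethinkdb-storage
      volumes:
      - name: rethinkdb-storage
        persistentVolumeClaim:
          claimName: rethinkdb-pvc

Note that Recreate causes downtime during every update, so it is only a stopgap.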

Solution: Declare the PersistentVolumeClaim in a StatefulSet object, not a Deployment. Instructions on how to deploy a StatefulSet can be found in the Kubernetes documentation. With a StatefulSet, you will be able to attach a PersistentVolumeClaim to each pod and still scale up afterwards.
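
A minimal sketch of that, adapted from the manifests in the question. The headless Service named rethinkdb-server that serviceName refers to is assumed here and not shown:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: default
  name: rethinkdb-server
spec:
  serviceName: rethinkdb-server   # assumes a headless Service with this name exists
  replicas: 1
  selector:
    matchLabels:
      app: rethinkdb-server
  template:
    metadata:
      labels:
        app: rethinkdb-server
    spec:
      containers:
      - name: rethinkdb-server
        image: gcr.io/$PROJECT_ID/rethinkdb-server:$LAST_VERSION
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - containerPort: 8080
          name: admin-port
        - containerPort: 28015
          name: driver-port
        - containerPort: 29015
          name: cluster-port
        volumeMounts:
        - mountPath: /data/rethinkdb_data
          name: rethinkdb-storage
  volumeClaimTemplates:           # one PVC per replica, e.g. rethinkdb-storage-rethinkdb-server-0
  - metadata:
      name: rethinkdb-storage
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 30Gi

Because each replica gets its own PersistentVolumeClaim from the template, scaling up never makes two pods fight over the same ReadWriteOnce disk.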

-- Mahmoud Sharif
Source: StackOverflow