Persistent disk problem on Kubernetes GCP

2/15/2019

I'm working with Kubernetes on GCP and I'm having problems with volumes and persistent disks.

I'm using Directus 7 (a headless CMS), which stores most of its information in the database, except for uploaded files: those go into the /var/www/html/public/uploads folder (tested locally with docker-compose, where it works fine). That folder is the one I'm trying to persist on the disk.
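For reference, the local setup is roughly this docker-compose sketch (the image tag, port, and host path are illustrative, not my exact file):

version: "3"
services:
  directus:
    image: directus/directus   # illustrative tag
    ports:
      - "8080:80"
    volumes:
      # uploads persist across container restarts because they live on the host
      - ./uploads:/var/www/html/public/uploads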

No error occurs, but when the Kubernetes Pod restarts I lose the uploaded images (they are not being saved on the disk).

This is my configuration:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: directus-pv
  namespace: default
spec:
  storageClassName: ""
  capacity:
    storage: 100G
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: directus-disk
    fsType: ext4

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: directus-pvc
  namespace: default
  labels:
    app: .....
spec:
  storageClassName: ""
  volumeName: directus-pv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100G
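For a gcePersistentDisk volume the disk named in pdName must already exist in the same zone as the cluster nodes; I created it with something like this (the zone here is illustrative):

gcloud compute disks create directus-disk --size=100GB --zone=us-central1-a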

And in the deploy.yaml:

    volumeMounts:
      - name: api-disk
        mountPath: /var/www/html/public/uploads
        readOnly: false

  volumes:
  - name: api-disk
    persistentVolumeClaim:
      claimName: directus-pvc
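For debugging, the binding between the claim and the volume can be checked like this (both should report STATUS Bound):

kubectl get pv directus-pv
kubectl get pvc directus-pvc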

Thanks for the help

-- pdsm
directus
docker
google-cloud-platform
kubernetes
persistent-storage

2 Answers

2/16/2019

Remove the namespace property from the PV and PVC manifests; the PV in particular is a cluster-scoped resource, shared across the cluster rather than namespaced. Remove the storageClassName property as well.
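For example, the trimmed PV manifest would look like this (a sketch based on your original):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: directus-pv
spec:
  capacity:
    storage: 100G
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: directus-disk
    fsType: ext4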

-- P Ekambaram
Source: StackOverflow

2/21/2019

I presume that your manually provisioned PersistentVolume directus-pv is somehow being created with persistentVolumeReclaimPolicy=Recycle*. That is the only reason I can think of that would cause the data to be erased on each Pod restart.
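If that turns out to be the case, the policy can be switched to Retain with a kubectl patch along these lines:

kubectl patch pv directus-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'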

I'm not able to reproduce your case with the provided manifest files, but I tried the following test:

  1. Create gcePersistentDisk
  2. Create PersistentVolume
  3. Create PersistentVolumeClaim
  4. Create ReplicaSet (replicas=1) like this one:
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: busybox-list-uploads
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: busybox-list-uploads
        version: "2"
    spec:
      containers:
        - image: busybox
          args: [/bin/sh, -c, 'sleep 9999' ]
          volumeMounts:
            - mountPath: /var/www/html/public/uploads
              name: api-disk
          name: busybox
      volumes:
      - name: api-disk
        persistentVolumeClaim:
          claimName: directus-pvc
  5. Write a test file into the mounted folder /var/www/html/public/uploads
  6. Restart the Pod (i.e. kill it) by scaling the replicas to 0 and then back to 1 (commands sketched below)
  7. List the content of /var/www/html/public/uploads on the newly created Pod
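The write and restart steps were done with commands along these lines (the Pod and file names here are illustrative, not the exact ones from my run):

# 5. write a test file into the mounted folder
kubectl exec -it busybox-list-uploads-ng4t6 -- touch /var/www/html/public/uploads/test.png
# 6. restart the Pod by scaling the ReplicaSet down and back up
kubectl scale rs busybox-list-uploads --replicas=0
kubectl scale rs busybox-list-uploads --replicas=1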

for i in busybox-list-uploads-dgfbc; do kubectl exec -it $i -- ls /var/www/html/public/uploads; done
lost+found
picture_from_busybox-list-uploads-ng4t6.png

As the output clearly shows, the data survives the Pod restart.

* You can verify the reclaim policy with: kubectl get pv/directus-pv -o yaml

-- Nepomucen
Source: StackOverflow