How to configure Elasticsearch snapshots using persistent volumes as the "shared file system repository" in Kubernetes (on GCP)?

1/15/2019

I have registered the snapshot repository and have been able to create snapshots of the cluster for a pod. I used a mounted persistent volume as the "shared file system repository" to serve as the backup storage.

However, in a production cluster with multiple nodes, the shared file system must be mounted on all the data and master nodes. Hence I would have to mount the same persistent volume on every data node and master node.

But GCE persistent disks don't support the "ReadWriteMany" access mode, so I can't mount the volume on all the nodes, and hence I am unable to register the snapshot repository. Is there a way to use persistent volumes as the backup snapshot storage for a production Elasticsearch cluster in Google Kubernetes Engine?
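For reference, the registration step described above can be sketched as follows. The repository name `my_backup`, the mount path `/mnt/es-backups`, and the address `localhost:9200` are illustrative assumptions; the shared-file-system repository type does require that the mount path be listed under `path.repo` in `elasticsearch.yml` on every master and data node:

```shell
# Assumes the persistent volume is mounted at /mnt/es-backups on every
# master and data node, and that elasticsearch.yml on each node contains:
#   path.repo: ["/mnt/es-backups"]

# Register a shared-file-system ("fs") snapshot repository.
curl -X PUT "localhost:9200/_snapshot/my_backup" \
  -H 'Content-Type: application/json' \
  -d '{
    "type": "fs",
    "settings": {
      "location": "/mnt/es-backups"
    }
  }'

# Take a snapshot of the whole cluster into that repository.
curl -X PUT "localhost:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true"
```

Registration fails with a `repository_verification_exception` if any node cannot write to the location, which is exactly the multi-node problem described above.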

-- Ben Abey
elasticsearch
google-kubernetes-engine
kubernetes

1 Answer

1/17/2019

Reading this, I guess that you are running a self-managed cluster rather than GKE, since on GKE you cannot install agents on the master nodes, and workers get recreated whenever there is a node pool update. Please clarify this in the question, since it can be misleading.

There are several volume types that support the ReadWriteMany access mode, such as cephfs, glusterfs and nfs. You can take a look at the different volume types in the Kubernetes documentation.
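As a minimal sketch of the NFS option: the manifest below defines a PersistentVolume and matching claim that every Elasticsearch pod can mount read-write at the same path. The names, sizes, server IP and export path are illustrative assumptions; on GCP, Cloud Filestore can act as the NFS server.

```yaml
# NFS-backed volume shared by all Elasticsearch master and data pods.
# Unlike GCE persistent disks, NFS supports ReadWriteMany.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: es-backup-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.2        # NFS / Filestore server IP (assumption)
    path: /es_backups       # export path on the server (assumption)
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: es-backup-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""      # bind to the pre-provisioned PV above
  resources:
    requests:
      storage: 100Gi
```

Mounting `es-backup-pvc` at the same path (e.g. `/mnt/es-backups`) in every master and data pod gives all nodes the shared location that the "fs" snapshot repository requires.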

-- ozrlz
Source: StackOverflow