Elasticsearch deployment on Kubernetes using a Persistent Volume

10/11/2019

I am trying to deploy an Elasticsearch cluster (replicas: 3) using a StatefulSet in Kubernetes, and I need to store the Elasticsearch data in a Persistent Volume (PV). Since each Elasticsearch instance has its own data folder, I need a separate data folder for each replica in the PV. I am trying to use volumeClaimTemplates and mountPath: /usr/share/elasticsearch/data, but this results in the error pod has unbound immediate PersistentVolumeClaims on the second pod. How can I achieve this using a StatefulSet?

Thanks in advance.

-- Newbie
elasticsearch
kubernetes

2 Answers

10/11/2019

If you are using dynamic provisioning, the volume gets created automatically at the backend (for example, an Azure Disk backing a PV for ReadWriteOnce access); otherwise you need to create it manually.

Once you create the volume, just create a PVC of a matching size in the appropriate namespace. You then only need to pass the volume name in the PVC definition, and it will get bound automatically.
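For reference, manually pre-provisioning such a volume might look like the sketch below. The names (pv-volumeName, default) are assumptions chosen to match the claim definition that follows, and hostPath is suitable only for single-node testing:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-volumeName
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: default
  # hostPath works only for single-node testing; use a real backend
  # (e.g. azureDisk, gcePersistentDisk, nfs) on a multi-node cluster
  hostPath:
    path: /mnt/data/elasticsearch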

You can try something like this -

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claimName
  namespace: namespace
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: default
  volumeName: pv-volumeName # name of the pre-created PV to bind to

Please share if you still face issues

-- Tushar Mahajan
Source: StackOverflow

10/14/2019

There is no information about how you are trying to install Elasticsearch, but as an example:

As per documentation for StatefulSet - limitations:

The storage for a given Pod must either be provisioned by a PersistentVolume Provisioner based on the requested storage class, or pre-provisioned by an admin.

This looks like your case: a problem with dynamic storage provisioning.

Please verify the storage class, check whether the PV and PVC were created and bound together, and check the storageClassName in volumeClaimTemplates:

        volumeMounts:
        - name: "elasticsearch-master"
          mountPath: /usr/share/elasticsearch/data
  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-master
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: name # check this setting to see if you are using the default storage class; otherwise you should specify this parameter manually
      resources:
        requests:
          storage: 30Gi
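A StatefulSet with this volumeClaimTemplates creates one PVC per replica, named <template-name>-<statefulset-name>-<ordinal> (for example elasticsearch-master-elasticsearch-0), so each Elasticsearch pod gets its own data folder. A minimal sketch showing where the two snippets above fit; the image version and names are assumptions:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
spec:
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:7.4.0 # version is an assumption
        volumeMounts:
        - name: elasticsearch-master
          mountPath: /usr/share/elasticsearch/data
  # one PVC per replica is generated from this template
  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-master
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: name # replace with your (default) storage class
      resources:
        requests:
          storage: 30Gi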

Hope this helps.

-- Hanx
Source: StackOverflow