How to make glusterfs survive cluster upgrade

5/9/2018

I'm trying to use glusterfs installed directly on my GCE cluster nodes. The installation does not persist through cluster upgrades, which could be solved with a bootstrap script. The problem is that when I reinstalled glusterfs manually and mounted the brick, there were no volumes present, and I had to force-recreate them.

What happened? Does glusterfs store volume data somewhere other than on the bricks? How do I prevent this?

-- Jan Imrich
glusterfs
google-cloud-platform
google-compute-engine
google-kubernetes-engine

1 Answer

5/19/2018

Can I confirm you are doing this on a Kubernetes cluster? I presume so, since you mentioned cluster upgrades.

If so, I'm not sure I understand the part of your post where you say gluster was installed directly on your nodes. My understanding of the intended use of glusterfs is that it exists as a distributed file system, with the storage forming a separate cluster from the Kubernetes nodes.

I believe this is the recommended way to use glusterfs with Kubernetes, and it means the data in the volumes is retained across Kubernetes cluster upgrades.

Here are the steps I performed.

I created the glusterfs cluster using the information/script from the first three steps in this tutorial (specifically the 'Clone', 'Bootstrap your Cluster' and 'Create your first volume' steps). In terms of the YAML below, it may be useful to know my glusterfs volume was named 'glustervolume'.
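For reference, the 'Bootstrap your Cluster' and 'Create your first volume' steps boil down to gluster CLI commands along these lines, run on one of the storage nodes (the hostnames, brick path and replica count here are illustrative, not values from the tutorial):

```shell
# Join the other storage nodes into the trusted pool
gluster peer probe gluster-node-2
gluster peer probe gluster-node-3

# Create a 3-way replicated volume from one brick per node
# (/data/brick1 is an example brick path)
gluster volume create glustervolume replica 3 \
  gluster-node-1:/data/brick1 \
  gluster-node-2:/data/brick1 \
  gluster-node-3:/data/brick1

# Start the volume and confirm it is online
gluster volume start glustervolume
gluster volume info glustervolume
```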

Once I'd confirmed the gluster volume had been created, I created a Kubernetes Service and Endpoints object pointing at that volume. The IP addresses in the Endpoints section of the YAML below are the internal IP addresses of the instances in the glusterfs storage cluster.

---
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster
spec:
  ports:
  - port: 1
---
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
  - addresses:
      - ip: 10.132.0.6
    ports:
      - port: 1
  - addresses:
      - ip: 10.132.0.7
    ports:
      - port: 1
  - addresses:
      - ip: 10.132.0.8
    ports:
      - port: 1
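Assuming the YAML above is saved to a file (the filename below is mine), you can apply it and check that the Endpoints object lists the storage nodes:

```shell
# glusterfs-endpoints.yaml is an example filename
kubectl apply -f glusterfs-endpoints.yaml

# The ENDPOINTS column should list 10.132.0.6:1,10.132.0.7:1,10.132.0.8:1
kubectl get endpoints glusterfs-cluster
```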

I then created a pod to make use of the gluster volume:

---
apiVersion: v1
kind: Pod
metadata:
  name: glusterfs
spec:
  containers:
  - name: glusterfs
    image: nginx
    volumeMounts:
    - mountPath: "/mnt/glusterfs"
      name: glustervolume
  volumes:
  - name: glustervolume
    glusterfs:
      endpoints: glusterfs-cluster
      path: glustervolume
      readOnly: false

As the glusterfs volume exists separately from the Kubernetes cluster (i.e. on its own cluster), Kubernetes upgrades will not affect the volume.

-- neilH
Source: StackOverflow