I set up GlusterFS storage on two VirtualBox VMs following a mixture of these two guides:
https://wiki.centos.org/HowTos/GlusterFSonCentOS
http://docs.gluster.org/en/latest/Quick-Start-Guide/Quickstart/
The internal network has DHCP-assigned IPs of 10.10.10.4 and 10.10.10.5. I've verified that the storage is working correctly and as expected.
At this point, I'm attempting to add a new StorageClass to Kubernetes using a YAML file, and the only resources I can find on the subject relate specifically to OpenShift or to Heketi GlusterFS.
I started creating the storage class as follows:
# storage-class.yml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: gluster-container
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "10.10.10.5"
I'm assuming this is incomplete but am unable to find more information about how to configure this. When I attempt to test with it, I get the following error:
ProvisioningFailed - Failed to provision volume with StorageClass "gluster-container": create volume error: error creating volume Post 10.10.10.5/volumes: unsupported protocol scheme ""
Anyone have any ideas how to proceed from here?
I created a script to manage the GlusterFS volume claims:
#!/bin/bash
# Create or delete a GlusterFS-backed PersistentVolume/PersistentVolumeClaim pair.

if [[ "$#" -le 3 ]]; then
  echo "Usage:"
  echo "  $0 <operation> <namespace> <name> <size>"
  echo "  - operation: create | delete"
  exit 1
fi

OPERATION=$1
NAMESPACE=$2
NAME=$3
SIZE=$4

function create {
  # Create and start the replicated Gluster volume backing this claim.
  gluster volume create "$NAMESPACE-$NAME" replica 3 \
    "server1:/mnt/gluster-storage/brick-$NAMESPACE-$NAME" \
    "server2:/mnt/gluster-storage/brick-$NAMESPACE-$NAME" \
    "server3:/mnt/gluster-storage/brick-$NAMESPACE-$NAME"
  gluster volume start "$NAMESPACE-$NAME"

  # Make sure the glusterfs-cluster Endpoints/Service exist in the target namespace.
  kubectl -n "$NAMESPACE" apply -f /etc/kubernetes/glusterfs-endpoints.yml
  kubectl -n "$NAMESPACE" apply -f /etc/kubernetes/glusterfs-service.yml

  # Create the PersistentVolume and the PersistentVolumeClaim bound to it.
  cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: $NAME
  namespace: $NAMESPACE
spec:
  capacity:
    storage: $SIZE
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster
    path: $NAMESPACE-$NAME
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: $NAMESPACE
    name: $NAME
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: $NAME
  namespace: $NAMESPACE
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: $SIZE
EOF
}

function delete {
  # Remove the claim and volume from Kubernetes first, then tear down the Gluster volume.
  kubectl -n "$NAMESPACE" delete pvc "$NAME"
  kubectl delete pv "$NAME"
  yes | gluster volume stop "$NAMESPACE-$NAME"
  echo
  yes | gluster volume delete "$NAMESPACE-$NAME"
  echo
  echo "#################################################################"
  echo "REMOVE BRICKS MANUALLY:"
  echo "  server1:/mnt/gluster-storage/brick-$NAMESPACE-$NAME"
  echo "  server2:/mnt/gluster-storage/brick-$NAMESPACE-$NAME"
  echo "  server3:/mnt/gluster-storage/brick-$NAMESPACE-$NAME"
  echo "#################################################################"
}

case $OPERATION in
  create)
    create
    ;;
  delete)
    delete
    ;;
esac
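The script applies /etc/kubernetes/glusterfs-endpoints.yml and /etc/kubernetes/glusterfs-service.yml, which are not shown above. A minimal sketch of what they could contain, assuming the endpoint name glusterfs-cluster referenced by the PV and the two Gluster nodes from the question (10.10.10.4 and 10.10.10.5):

# glusterfs-endpoints.yml (sketch; node IPs are assumptions)
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
  - addresses:
      - ip: 10.10.10.4
      - ip: 10.10.10.5
    ports:
      - port: 1

# glusterfs-service.yml (sketch); a Service with the same name keeps the Endpoints persistent
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster
spec:
  ports:
    - port: 1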
This creates the Gluster volume and maps it to a PersistentVolumeClaim in Kubernetes, so you can use GlusterFS mounts without the need for automatic provisioning.
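For example, assuming the script is saved as gluster-pv.sh (the namespace, name and size below are just placeholders):

# Create a 5Gi replicated volume and a matching PV/PVC for the "staging" namespace
./gluster-pv.sh create staging media 5Gi

# Tear it down again; the size argument is unused for delete,
# but the argument check still expects four parameters
./gluster-pv.sh delete staging media 5Gi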
Make sure to run kubelet outside Docker; otherwise you'll be using an old version of glusterfs-fuse that misses many of the optimizations of modern versions.
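To see which FUSE client a node would actually use, you can check on the host directly, e.g. (the package query assumes an RPM-based distro such as the CentOS setup above):

glusterfs --version
rpm -q glusterfs-fuse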
The GlusterFS provisioner in Kubernetes wants to dynamically provision GlusterFS volumes, such as the one below:
gluster volume create glustervol1 replica 2 transport tcp \
  gluster1.example.com:/bricks/brick1/brick \
  gluster2.example.com:/bricks/brick1/brick
GlusterFS in itself does not have an API endpoint to trigger the commands to create these volumes; however, the community has developed Heketi to be the API endpoint of GlusterFS. The RESTful management interface endpoint of Heketi is the value of resturl in your Kubernetes StorageClass.
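This is also why the StorageClass in the question fails with unsupported protocol scheme "": resturl must be a full URL pointing at a Heketi server, not a bare Gluster node IP. A minimal sketch, assuming a Heketi instance reachable at heketi.example.com on its default port 8080 with authentication disabled:

apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: gluster-container
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi.example.com:8080"   # Heketi REST endpoint, not a Gluster node
  restauthenabled: "false"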
As @Vishal Biyani commented, http://blog.infracloud.io/gluster-heketi-kubernetes/ is a write-up on how to quickly get started with Heketi on GCP and connect that into kubernetes.
If the dynamic provisioning of GlusterFS volumes is not needed in your environment, you could use the NFS StorageClass and point it at the load balancer in front of your GlusterFS cluster. You would still get the awesomeness of GlusterFS replication and distribution, but it would require you to enable the Gluster NFS service and manually create each Gluster volume you want to expose to Kubernetes.
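Since each Gluster volume is then created by hand, one way to expose it is as a plain NFS PersistentVolume. A sketch, where the load-balancer address and volume name are assumptions:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: glustervol1-nfs
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: gluster-lb.example.com   # load balancer in front of the Gluster NFS service
    path: /glustervol1               # Gluster volume exported over NFS
  persistentVolumeReclaimPolicy: Retain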