I have a Kubernetes cluster on 1.4.6 and am trying to configure dynamic persistent volume provisioning backed by GlusterFS. I have created the GlusterFS cluster and the volume as well.
gluster volume info
Volume Name: volume1
Type: Replicate
Volume ID: xxxxxxxxx
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: <host-1>:/gluster-storage
Brick2: <host-2>:/gluster-storage
Options Reconfigured:
performance.readdir-ahead: on
On the Kubernetes side, a StorageClass is created with the storageclass.beta.kubernetes.io/is-default-class annotation set to "true" and the provisioner set to kubernetes.io/glusterfs. With this configuration, when the PVC is created it stays Pending and never gets bound. Checking the PVs, no PV has been created by the provisioner referenced in the StorageClass.
The following are the YAML files for reference.
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/glusterfs
parameters:
  endpoint: "glusterfs-cluster"
  resturl: "<Host IP for Gluster>"
  restauthenabled: "false"
  restuser: ""
  restuserkey: ""
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-claim
  annotations:
    volume.alpha.kubernetes.io/storage-class: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
Has anybody done dynamic provisioning using GlusterFS?
I think you should set the volume.alpha.kubernetes.io/storage-class annotation in the PersistentVolumeClaim to the value slow, which is the name you set in the StorageClass. Right now it points at default, so the claim never matches your StorageClass and stays Pending.
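As a sketch, based on the PVC you posted, the claim would look like this (assuming your StorageClass keeps the name slow; adjust the value if yours differs):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-claim
  annotations:
    # must match the StorageClass metadata.name ("slow"), not "default"
    volume.alpha.kubernetes.io/storage-class: slow
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```

After applying it, the provisioner named in the StorageClass should pick up the claim and create a matching PV, at which point the PVC moves from Pending to Bound.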