I am building a platform on top of Kubernetes that, among other requirements, should:
I'm addressing the 1st point by using static binaries for the k8s components and the container engine, coupled with minimal host tooling that is also shipped as static binaries.
I'm still looking for a solution for persistent storage.
What I evaluated/used so far:
So the question is: what other options do I have for Kubernetes persistent storage while using the cluster nodes' disks?
You can use OpenEBS Local PV, which can consume an entire disk for an application via the default storage class openebs-device, or consume a mounted disk shared by multiple applications via the default storage class openebs-hostpath. More information is provided in the OpenEBS documentation under the User Guide section. This does not require open-iscsi. If you use a raw device, the OpenEBS Node Disk Manager detects and consumes the disk automatically. To meet the ReadWriteMany use case, you can use this provisioned Local PV as the underlying volume for multiple applications through an NFS provisioner; the implementation is described in the OpenEBS documentation under the Stateful Applications section.
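As an illustration, a minimal PersistentVolumeClaim against the default openebs-hostpath storage class might look like the sketch below (the claim name and requested size are placeholders):

```yaml
# Sketch: PVC bound to the default OpenEBS hostpath storage class.
# Claim name and requested size are placeholders for illustration.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-hostpath-pvc
spec:
  storageClassName: openebs-hostpath
  accessModes:
    - ReadWriteOnce          # Local PV is single-node access
  resources:
    requests:
      storage: 5Gi
```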
According to the official documentation, as of now (v1.16) Kubernetes supports ReadWriteMany on a few different types of volumes.
Namely, these are: cephfs, glusterfs, and nfs.
In general, with all of these the content of a volume is preserved and the volume is merely unmounted when a Pod is removed. This means that a volume can be pre-populated with data, and that data can be "handed off" between Pods. These filesystems can be mounted by multiple writers simultaneously.
Among these filesystems, GlusterFS can be deployed on the Kubernetes cluster nodes themselves (at least 3 nodes are required). Data can be accessed in different ways, one of which is NFS.
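For instance, a statically provisioned NFS-backed PersistentVolume with ReadWriteMany could be declared roughly like this (the server address and export path are placeholders):

```yaml
# Sketch: static NFS PersistentVolume allowing multiple writers.
# Server IP and export path are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany          # multiple Pods on different nodes may mount it
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.0.0.10        # placeholder NFS server address
    path: /exports/shared    # placeholder export path
```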
A persistentVolumeClaim volume is used to mount a PersistentVolume into a Pod. PersistentVolumes are a way for users to "claim" durable storage (such as a GCE PersistentDisk or an iSCSI volume) without knowing the details of the particular cloud environment. ReadWriteMany is supported with the following types of volumes:

- AzureFile
- CephFS
- Glusterfs
- Quobyte
- NFS
- PortworxVolume

but that is not an option when you have no control over the underlying infrastructure.
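As a sketch, assuming a claim named shared-nfs-pvc already exists, a Pod could mount it as follows (the Pod name, image, and claim name are placeholders):

```yaml
# Sketch: Pod consuming an existing PersistentVolumeClaim.
# Names and image are placeholders for illustration.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: shared-data
      persistentVolumeClaim:
        claimName: shared-nfs-pvc   # placeholder claim name
```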
A local volume represents a mounted local storage device such as a disk, partition, or directory. Local volumes can only be used as a statically created PersistentVolume. The drawback is that if a node becomes unhealthy, the local volume also becomes inaccessible, and a Pod using it will not be able to run.
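A rough sketch of such a statically created local PersistentVolume, pinned to a specific node, might look like this (the mount path and node name are placeholders):

```yaml
# Sketch: static local PersistentVolume; mount path and hostname are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-node1
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1            # placeholder local mount path
  nodeAffinity:                      # required for local volumes
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1             # placeholder node name
```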
So at the moment there is no solution that suits all the requirements out of the box.
The options below can be considered:

1. Kubernetes version 1.14.0 onwards supports local persistent volumes. You can make use of local PVs using node labels (see the StorageClass sketch after this list). You might have to run stateful workloads in HA (master-slave) mode so the data remains available in case of node failures.
2. You can install an NFS server on one of the cluster nodes and use it as storage for your workloads. NFS storage supports ReadWriteMany. This might work well if you set up the cluster on bare metal.
3. Rook is also a good option which you have already tried, but it is not production-ready yet.

Among the three, the first option suits your requirements best. I would like to hear any other options from the community.
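For the first option, local PVs are typically paired with a StorageClass that uses no dynamic provisioner and delayed binding, so the scheduler picks a node before the volume is bound; a minimal sketch, assuming the class name local-storage:

```yaml
# Sketch: StorageClass for statically created local volumes.
# No dynamic provisioner; binding waits until a Pod is scheduled.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
```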