Privately hosted Kubernetes storage?

4/15/2019

I'm looking for a Kubernetes storage solution that would let me use my UnRaid server as the backing storage for my cluster. Has anyone done anything like this?

Any help would be much appreciated.

Thanks, Jamie

-- Jamie Bonnett
kubernetes
storage

3 Answers

4/15/2019

You can use Ceph. I use it and it helps me a lot: you can build a cluster from your storage and define the replication level, and Ceph also gives you incremental backups and snapshots.
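
A common way to run Ceph inside a Kubernetes cluster is the Rook operator. Below is a minimal sketch of a replicated pool plus a matching StorageClass; the pool and class names are illustrative, and the rook-ceph namespace and secret names assume a default Rook install:

# Replicated RBD pool managed by Rook
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  replicated:
    size: 3    # keep three copies of every object
---
# StorageClass so PVCs can dynamically provision volumes from the pool
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  pool: replicapool
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete

Any PVC that sets storageClassName: rook-ceph-block then gets a replicated RBD volume.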

-- yasin lachini
Source: StackOverflow

1/2/2020

You can try the Kadalu (https://kadalu.io) project.

Kadalu container storage provides persistent storage for applications running in Kubernetes. The project uses GlusterFS as the storage backend, integrated natively with Kubernetes.

Install the Kadalu operator and then register a storage device. For example, the commands below expose the device /dev/vdc from the node kube-node1.example.com (which is part of the k8s cluster). The operator deploys the CSI drivers, which are required to serve Persistent Volume Claims (PVCs).

Install Kadalu Operator

[kube-master]# kubectl create -f https://kadalu.io/operator-latest.yaml

Register the storage device

[kube-master]# kubectl kadalu storage-add storage-pool-1 \
    --device kube-node1.example.com:/dev/vdc

Verify all required pods are running

[kube-master]# kubectl get pods -nkadalu
NAME                                READY   STATUS    RESTARTS   AGE
csi-nodeplugin-5hfms                3/3     Running   0          14m
csi-nodeplugin-924cc                3/3     Running   0          14m
csi-nodeplugin-cbjl9                3/3     Running   0          14m
csi-provisioner-0                   4/4     Running   0          14m
operator-577f569dc8-l2q6c           1/1     Running   0          15m
server-storage-pool-1-0-kube...     2/2     Running   0          11m

That's it. Start claiming PVs!

A sample PVC:

# File: sample-pvc.yaml
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: sample-pv
spec:
  storageClassName: kadalu.replica1
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500M

Run the command below to create the sample-pv claim:

[kube-master]# kubectl create -f sample-pvc.yaml
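
Once the claim is bound, a Pod mounts it like any other volume. A minimal sketch (the Pod name, image, and mount path are illustrative, not part of the Kadalu docs):

# File: sample-app.yaml
kind: Pod
apiVersion: v1
metadata:
  name: sample-app
spec:
  containers:
    - name: app
      image: alpine
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: data
          mountPath: /data    # the Kadalu-backed volume appears here
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: sample-pv    # the PVC created above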

Note: Kadalu also supports a Replica3 configuration, which requires registering three devices. Replica3 keeps applications available even if one of the three storage nodes goes down. For example,

[kube-master]# kubectl kadalu storage-add storage-pool-2 --type Replica3 \
    --device kube-node1.example.com:/dev/vdc \
    --device kube-node2.example.com:/dev/vdc \
    --device kube-node3.example.com:/dev/vdc
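
A PVC then selects the replicated pool through its storage class. A sketch, assuming the class is named kadalu.replica3 by analogy with the kadalu.replica1 class used above:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: sample-pv-replica3
spec:
  storageClassName: kadalu.replica3  # assumed name, mirroring kadalu.replica1
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500M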

Hope this is useful. Feel free to open an issue or request a feature at https://github.com/kadalu/kadalu/issues

-- aravindavk
Source: StackOverflow

4/15/2019

Probably the easiest way is to use an NFS volume. The Unraid documentation gives you an idea of how to set up an Unraid NFS share.

Then you can follow the Kubernetes example on how to use an NFS Volume in a Pod.

Basically, your Unraid server will have an IP address and then you can mount the volume/path using that IP address on your Pod. For example:

kind: Pod
apiVersion: v1
metadata:
  name: pod-using-nfs
spec:
  # Add the server as an NFS volume for the pod
  volumes:
    - name: nfs-volume
      nfs: 
        # URL for the NFS server
        server: 10.108.211.244 # Change this!
        path: /

  # In this container, we'll mount the NFS volume
  # and write the date to a file inside it.
  containers:
    - name: app
      image: alpine

      # Mount the NFS volume in the container
      volumeMounts:
        - name: nfs-volume
          mountPath: /var/nfs

      # Write to a file inside our NFS
      command: ["/bin/sh"]
      args: ["-c", "while true; do date >> /var/nfs/dates.txt; sleep 5; done"]
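
If the mount works, the dates file is visible both inside the container and on the Unraid share itself. A quick check (pod name and path taken from the manifest above):

kubectl exec pod-using-nfs -- tail /var/nfs/dates.txt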

You can also use a PVC if you'd like. For example:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.108.211.244 # Change this!
    path: "/"

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 10Gi
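
Because storageClassName is empty, the claim binds to the pre-created nfs PV above instead of triggering dynamic provisioning. You can confirm the binding with:

kubectl get pv nfs
kubectl get pvc nfs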

Then use it in your Deployment or Pod definition:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-busybox
spec:
  replicas: 1
  selector:
    matchLabels:
      name: nfs-busybox
  template:
    metadata:
      labels:
        name: nfs-busybox
    spec:
      containers:
      - image: busybox
        imagePullPolicy: Always
        name: busybox
        volumeMounts:
          # name must match the volume name below
          - name: my-pvc-nfs
            mountPath: "/mnt"
      volumes:
      - name: my-pvc-nfs
        persistentVolumeClaim:
          claimName: nfs
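
Once the Deployment's pod is running, the share is mounted at /mnt inside the container. A quick check (requires a kubectl recent enough to exec into a workload resource):

kubectl exec deployment/nfs-busybox -- ls /mnt
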
-- Rico
Source: StackOverflow