Can't keep data on my persistent volume in Kubernetes (Google Cloud)

3/31/2020

I have a Redis pod on my Kubernetes cluster on Google Cloud. I have created the PV and the claim.

kind: PersistentVolume
apiVersion: v1
metadata:
  name: redis-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: my-size 
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: postgres
  name: redis-pv-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: my size 

I also mounted it in my deployment.yaml

        volumeMounts:
        - mountPath: /data
          name: redis-pv-claim
      volumes:
      - name: redis-pv-claim
        persistentVolumeClaim:
          claimName: redis-pv-claim

I can't see any error while running describe pod:

Volumes:
  redis-pv-claim:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  redis-pv-claim
    ReadOnly:   false

But it just can't save any key. After every deployment, the "/data" folder is just empty.

My NFS is active now, but I still can't keep data.

Describe pvc

Namespace:     my namespace 
StorageClass:  nfs-client
Status:        Bound
Volume:        pvc-5d278b27-a51e-4262-8c1b-68b290b21fc3
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-class: nfs-client
               volume.beta.kubernetes.io/storage-provisioner: cluster.local/ext1-nfs-client-provisioner
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      1Gi
Access Modes:  RWX
VolumeMode:    Filesystem
Mounted By:    my grafana pod
Events:        <none>

Describe pod gives me an error though.

Warning  FailedMount  18m   kubelet, gke-devcluster-pool-1-36e6a393-rg7d  MountVolume.SetUp failed for volume "pvc-5d278b27-a51e-4262-8c1b-68b290b21fc3" : mount failed: exit status 1
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/8f7b6630-ed9b-427a-9ada-b75e1805ed60/volumes/kubernetes.io~nfs/pvc-5d278b27-a51e-4262-8c1b-68b290b21fc3 --scope -- /
home/kubernetes/containerized_mounter/mounter mount -t nfs 192.168.1.21:/mnt/nfs/development-test-claim-pvc-5d278b27-a51e-4262-8c1b-68b290b21fc3 /var/lib/kubelet/pods/8f7b6630-ed9b-427a-9ada-b75e1805ed60
/volumes/kubernetes.io~nfs/pvc-5d278b27-a51e-4262-8c1b-68b290b21fc3
Output: Running scope as unit: run-ra5925a8488ef436897bd44d526c57841.scope
Mount failed: mount failed: exit status 32
Mounting command: chroot
-- Pasha
kubernetes
persistent-storage
redis

1 Answer

4/1/2020

What is happening is that when you have multiple nodes, using a hostPath PV/PVC to share files between pods isn't the best approach.

A hostPath volume can only share files between pods running on the same node. So with multiple nodes, the pod may land on a different node after each deployment, which gives the impression that your files aren't being stored properly.
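
One quick way to confirm this is to check which node the pod lands on after each deployment; if it changes, the data written to the hostPath is sitting on another node's disk:

$ kubectl get pods -o wide

Compare the NODE column between deployments.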

The ideal solution for you is to use a distributed file system (DFS). In your question you mention that you are using GCP, but it's not clear if you are using GKE or if you created your cluster on top of Compute Engine instances.

If you are using GKE, have you already checked this document? Please let me know.
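
If it is GKE, the simplest option for a single-pod workload like Redis may be dynamic provisioning with the cluster's default StorageClass (backed by Compute Engine persistent disks). A minimal sketch, assuming the default class is named standard (the name may differ in your cluster) and that only one pod mounts the volume:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: redis-data
spec:
  storageClassName: standard   # GKE's default class; verify the name in your cluster
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

A ReadWriteOnce disk like this survives redeployments, but it cannot be shared by pods on different nodes; for that you still need something like the NFS setup below.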

If you have access to your nodes, the easiest setup is to create an NFS server on one of your nodes and use nfs-client-provisioner to give your pods access to it.

I've been using this approach for quite a while now and it works really well.

1 - Install and configure NFS Server on my Master Node (Debian Linux, this might change depending on your Linux distribution):

Before installing the NFS Kernel server, we need to update our system’s repository index:

$ sudo apt-get update

Now, run the following command in order to install the NFS Kernel Server on your system:

$ sudo apt install nfs-kernel-server

Create the Export Directory

$ sudo mkdir -p /mnt/nfs_server_files

As we want all clients to access the directory, we will remove restrictive permissions of the export folder through the following commands (this may vary on your set-up according to your security policy):

$ sudo chown nobody:nogroup /mnt/nfs_server_files
$ sudo chmod 777 /mnt/nfs_server_files

Assign server access to client(s) through NFS export file

$ sudo nano /etc/exports

Inside this file, add a new line to allow access from other servers to your share.

/mnt/nfs_server_files        10.128.0.0/24(rw,sync,no_subtree_check)

You may want to use different options in your share. 10.128.0.0/24 is my k8s internal network.
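
For instance, if your pods write as root and you run into permission errors, an export with no_root_squash (weaker security, so only if your policy allows it) might look like:

/mnt/nfs_server_files        10.128.0.0/24(rw,sync,no_subtree_check,no_root_squash)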

Export the shared directory and restart the service to make sure all configuration files are correct.

$ sudo exportfs -a
$ sudo systemctl restart nfs-kernel-server

Check all active shares:

$ sudo exportfs
/mnt/nfs_server_files
                10.128.0.0/24

2 - Install NFS Client on all my Worker Nodes:

$ sudo apt-get update
$ sudo apt-get install nfs-common

At this point you can make a test to check if you have access to your share from your worker nodes:

$ sudo mkdir -p /mnt/sharedfolder_client
$ sudo mount kubemaster:/mnt/nfs_server_files /mnt/sharedfolder_client

Notice that at this point you can use the name of your master node. K8s is taking care of the DNS here. Check that the volume mounted as expected, then create some folders and files to make sure everything is working fine.

$ cd /mnt/sharedfolder_client
$ mkdir test
$ touch file

Go back to your master node and check if these files are in the /mnt/nfs_server_files folder.
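
For example, on the master node this should list the test folder and file you just created:

$ ls -lR /mnt/nfs_server_files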

3 - Install NFS Client Provisioner.

Install the provisioner using helm:

$ helm install --name ext --namespace nfs --set nfs.server=kubemaster --set nfs.path=/mnt/nfs_server_files stable/nfs-client-provisioner

Notice that I've specified a namespace for it. Check that the provisioner pod is running:

$ kubectl get pods -n nfs
NAME                                         READY   STATUS      RESTARTS   AGE
ext-nfs-client-provisioner-f8964b44c-2876n   1/1     Running     0          84s

At this point we have a storageclass called nfs-client:

$ kubectl get storageclass -n nfs
NAME         PROVISIONER                                AGE
nfs-client   cluster.local/ext-nfs-client-provisioner   5m30s
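
Note that the install command above uses Helm 2 syntax (--name) and the stable/nfs-client-provisioner chart, which has since been deprecated. On Helm 3 a roughly equivalent install using its successor chart, nfs-subdir-external-provisioner, might look like this (the repo URL, chart name and defaults are assumptions worth checking against the project's docs; its default storage class is also called nfs-client):

$ helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
$ helm install ext nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --namespace nfs --create-namespace \
    --set nfs.server=kubemaster --set nfs.path=/mnt/nfs_server_files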

We need to create a PersistentVolumeClaim:

$ more nfs-client-pvc.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  namespace: nfs 
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "nfs-client"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
$ kubectl apply -f nfs-client-pvc.yaml

Check the status (Bound is expected):

$ kubectl get persistentvolumeclaim/test-claim -n nfs
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-claim   Bound    pvc-e1cd4c78-7c7c-4280-b1e0-41c0473652d5   1Mi        RWX            nfs-client     24s

4 - Create a simple pod to test if we can read/write to our NFS Share:

Create a pod using this yaml:

apiVersion: v1
kind: Pod
metadata:
  name: pod0
  labels:
    env: test
  namespace: nfs  
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
$ kubectl apply -f pod.yaml

Let's list all mounted volumes on our pod:

$ kubectl exec -ti -n nfs pod0 -- df -h /mnt
Filesystem                                                                               Size  Used Avail Use% Mounted on
kubemaster:/mnt/nfs_server_files/nfs-test-claim-pvc-a2e53b0e-f9bb-4723-ad62-860030fb93b1   99G   11G   84G  11% /mnt

As we can see, we have an NFS volume mounted on /mnt. (Important to notice the path kubemaster:/mnt/nfs_server_files/nfs-test-claim-pvc-a2e53b0e-f9bb-4723-ad62-860030fb93b1)

Let's check it:

root@pod0:/# cd /mnt
root@pod0:/mnt# ls -la
total 8
drwxrwxrwx 2 nobody nogroup 4096 Nov  5 08:33 .
drwxr-xr-x 1 root   root    4096 Nov  5 08:38 ..

It's empty. Let's create some files:

$ for i in 1 2; do touch file$i; done;
$ ls -l 
total 8
drwxrwxrwx 2 nobody nogroup 4096 Nov  5 08:58 .
drwxr-xr-x 1 root   root    4096 Nov  5 08:38 ..
-rw-r--r-- 1 nobody nogroup    0 Nov  5 08:58 file1
-rw-r--r-- 1 nobody nogroup    0 Nov  5 08:58 file2

Now let's see where these files are on our NFS Server (Master Node):

$ cd /mnt/nfs_server_files
$ ls -l 
total 4
drwxrwxrwx 2 nobody nogroup 4096 Nov  5 09:11 nfs-test-claim-pvc-4550f9f0-694d-46c9-9e4c-7172a3a64b12
$ cd nfs-test-claim-pvc-4550f9f0-694d-46c9-9e4c-7172a3a64b12/
$ ls -l 
total 0
-rw-r--r-- 1 nobody nogroup 0 Nov  5 09:11 file1
-rw-r--r-- 1 nobody nogroup 0 Nov  5 09:11 file2

And here are the files we just created inside our pod!
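
Once this is working, you can point your Redis deployment at the same storage class instead of the hostPath PV. A minimal sketch (the claim name and size are placeholders, adjust them to your setup):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: redis-data
  annotations:
    volume.beta.kubernetes.io/storage-class: "nfs-client"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

Then mount it at /data in the deployment, exactly like the test pod above, so Redis keeps its data between deployments.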

Please let me know if this solution helped you.

-- mWatney
Source: StackOverflow