kubernetes creating empty directories on nfs


I have a mongodb pod running on a cluster, with the pod's volume set up as an NFS share. MongoDB's /data directory is stored on the NFS host under /var/nfs/general. The problem is that I can find db and configdb inside /var/nfs/general, but they are empty. The directory tree looks like this:

└── general
    ├── configdb
    └── db 

I am using NFS so that applications can write their data/logs to my NFS server, and these applications can have nested directories for their logs.

The file /etc/exports looks like this:

# /etc/exports: the access control list for filesystems which may be exported
#       to NFS clients.  See exports(5).
# Example for NFSv2 and NFSv3:
# /srv/homes       hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
# Example for NFSv4:
# /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
/var/nfs/general   *(rw,no_subtree_check)
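For reference, after editing /etc/exports the share has to be re-exported before clients see the change, and a manual test-mount from a client can confirm the export is writable. A minimal sketch, assuming the server is reachable as nfs-server.example.com (substitute your server's hostname or IP):

```shell
# On the NFS server: re-read /etc/exports and list what is exported
sudo exportfs -ra
sudo exportfs -v

# On a client (e.g. a Kubernetes node): mount the share and check it is writable
sudo mount -t nfs nfs-server.example.com:/var/nfs/general /mnt
touch /mnt/write-test
ls -l /mnt
sudo umount /mnt
```

If the test file appears under /var/nfs/general on the server, the export itself is working and the problem is on the Kubernetes side.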

Any help/suggestion would be appreciated.

-- Akshay Sood

1 Answer


Have you specified the correct NFS mount options (e.g. bg, nolock, and noatime)?

Since Kubernetes 1.13, you can specify "mountOptions" in your NFS PersistentVolume definition. The example below is taken directly from the Kubernetes documentation:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /tmp
    server: 172.17.0.2
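To consume a PV like that from the mongodb pod, you would bind it through a PersistentVolumeClaim whose storageClassName matches and mount the claim at /data/db. A hedged sketch (the claim and pod names here are illustrative, not from your setup):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-nfs-claim        # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: slow       # must match the PV for the claim to bind
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: mongo
spec:
  containers:
    - name: mongo
      image: mongo
      volumeMounts:
        - name: data
          mountPath: /data/db  # MongoDB's data directory
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: mongo-nfs-claim
```

With this wiring, whatever mountOptions are set on the PV apply to the NFS mount the pod writes through.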

Running a database on NFS is generally not recommended, as you could run into:

  1. Degraded performance, since locking and storing data on a network file system is generally slower than on a local FS. Additionally, MongoDB's MMAPv1 storage engine is based on memory-mapped files and writes to disk very frequently.

  2. Possible data loss when multiple mounts write to the same database.

Hope this helps!

-- Frank Yucheng Gu
Source: StackOverflow