I have a MongoDB pod running on a cluster, and I have set up the pod volume as an NFS share. The /data directory of MongoDB is stored on the NFS host, where the exported path is /var/nfs/general. The problem is that I can find db and configdb inside /var/nfs/general, but they are empty. The directory tree looks like this:
nfs/
└── general
├── configdb
└── db
I am using NFS to write the data/logs to my NFS server, so these applications can have nested directories for their logs. For reference, a minimal sketch of how the volume is wired into the pod is shown below (the pod/volume names and the NFS server address are placeholders, not my exact manifest):
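apiVersion: v1
kind: Pod
metadata:
  name: mongo                   # placeholder name
spec:
  containers:
  - name: mongo
    image: mongo                # MongoDB image; db/ and configdb/ live under /data
    volumeMounts:
    - name: mongo-data
      mountPath: /data          # the whole /data directory is backed by the NFS share
  volumes:
  - name: mongo-data
    nfs:
      server: 10.0.0.1          # placeholder NFS server address
      path: /var/nfs/general    # exported path on the NFS host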
The /etc/exports file looks like this:
# /etc/exports: the access control list for filesystems which may be exported
# to NFS clients. See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4 gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes gss/krb5i(rw,sync,no_subtree_check)
#
/var/nfs/general *(rw,no_subtree_check)
Any help/suggestion would be appreciated.
Have you specified the correct NFS mount options (bg, nolock, and noatime)?
In Kubernetes 1.13, you can specify mountOptions in your NFS PersistentVolume definition. The example below is taken directly from the Kubernetes documentation:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /tmp
    server: 172.17.0.2
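If you consume the PV through a PersistentVolumeClaim rather than mounting it directly, the claim and pod might look roughly like this (names, size, and mount path are illustrative, not taken from your setup):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-nfs-claim           # illustrative name
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: slow          # must match the PV's storageClassName to bind
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: mongo
spec:
  containers:
  - name: mongo
    image: mongo
    volumeMounts:
    - name: data
      mountPath: /data            # MongoDB keeps db/ and configdb/ under /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: mongo-nfs-claim  # binds to the NFS-backed PV above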
Running a database on NFS is generally not recommended, as you could run into:
- Degraded performance, since locking and storing data on a network file system is generally slower than on a local filesystem. Additionally, MongoDB's MMAPv1 storage engine is based on memory-mapped files, which write to disk very frequently.
- Possible data loss when multiple mounts write to the same database.
Hope this helps!