I have an in-house Kubernetes cluster running on bare metal, consisting of 5 nodes (1 master and 4 workers). I set up an NFS server natively on the master and deployed the nfs-client provisioner in Kubernetes to get dynamic NFS provisioning. Everything works properly and I am able to use my applications just by defining a PersistentVolumeClaim, BUT I can't find my data on the disk.
Every time I launch an application, the nfs-client provisioner creates a new directory under my NFS export path with the correct name, but all of these directories are empty. So my question is: where is my data?
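For reference, the claims I define look roughly like this (a minimal sketch; the claim name and size are illustrative, and `nfs-client` is the storage class name I use from the chart):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rtspdata-claim        # illustrative name
spec:
  storageClassName: nfs-client # storage class created by the chart
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi             # illustrative size
```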
I am using the Helm chart of the nfs-client provisioner. This is an example of the created-but-empty directories at my NFS server path:
/var/nfs/general$ tree
.
├── 166-postgres-claim-pvc-37146254-db50-4293-a9f7-13097689610a
│ └── data
├── 166-registry-claim-pvc-fe337e34-d9a5-4266-8178-f67973894584
├── 166-registry-slave-claim-registry-slave-0-pvc-b18d430b-e1fc-4eeb-bd12-cab9340bed69
├── 166-rtspdata-claim-pvc-bf9bc1e3-412f-4627-ade4-50817478308e
├── 172-postgres-claim-pvc-087538cf-5b67-4789-8d8b-117d41c3fe02
│ └── data
├── 172-registry-claim-pvc-7b7d9bb6-a636-4f78-b2fe-924473cb47ab
├── 172-registry-slave-claim-registry-slave-0-pvc-34e62524-fca0-48dd-ba29-b4cf178ca028
├── 172-rtspdata-claim-pvc-211a1aac-409f-431c-b78d-5b87b9017625
├── 173-postgres-claim-pvc-b901449a-0ce7-4ecf-8dfc-e6371dd3a9b4
│ └── data
├── 173-registry-claim-pvc-cd842cde-a3f7-4d54-94d6-c018e42ec495
├── 173-rtspdata-claim-pvc-a95c5748-ebed-4045-98b2-a04e534e0cf6
├── archived-161-postgres-claim-pvc-01cc1ff2-8cc8-4161-8d85-00cb6562e10e
│ └── data
├── archived-161-registry-claim-pvc-9b626e01-a565-4214-b94e-b7ba1e206a5e
├── archived-161-rtspdata-claim-pvc-b079c7e2-248e-4245-b243-5ff7dc3afa82
├── archived-162-postgres-claim-pvc-188af7ca-106d-4f2f-8905-9d7b391e9dce
│ └── data
├── archived-162-postgres-claim-pvc-356e4632-19e2-4ac9-8400-e00d39621b7c
│ └── data
├── archived-162-postgres-claim-pvc-45372032-979f-4ced-be35-15ec67a322b7
│ └── data
├── archived-162-postgres-claim-pvc-6d5e1f01-ad5b-45cc-9eef-654275e3ecd2
│ └── data
├── archived-162-postgres-claim-pvc-cbf4d4ca-b9d1-4d1c-88be-621eeb3680fb
│ └── data
├── archived-162-postgres-claim-pvc-eaa32a4c-9768-469a-ad85-1e1b682c376d
│ └── data
├── archived-162-postgres-claim-pvc-f517586b-e132-4a38-8ec9-18f6d5ca000e
│ └── data
├── archived-162-registry-claim-pvc-1796642a-d639-4ede-8204-1779c029aa4e
│ └── rethinkdb_data
Yesterday I tested my cluster with another shared PVC and I was able to see the data on the disk (which proves the provisioner itself works). My suspicion is that, for some reason, the RethinkDB pod (the component I needed the PV for) is not able to mount the PV and is just using local storage, and this is what I want to investigate further.
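So far, the checks I plan to run look like this (a sketch; the pod name, claim name, and namespace are placeholders you'd substitute from your own cluster):

```shell
# Is the claim actually Bound, and to which PV?
kubectl get pvc rtspdata-claim -n <namespace> -o wide

# Does the pod spec reference that claim, or an emptyDir/hostPath instead?
kubectl get pod <rethinkdb-pod> -n <namespace> -o jsonpath='{.spec.volumes}'

# From inside the pod, is the data directory really an NFS mount?
kubectl exec -n <namespace> <rethinkdb-pod> -- mount | grep nfs
```

If the last command shows no NFS mount at the RethinkDB data path, the pod is writing to the container's local filesystem, which would explain the empty directories on the NFS server.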