Setting up a PVC on NFS doesn't mount the requested PVC size, instead it exposes the whole NFS volume size

8/5/2021

We are using an NFS volume (GCP Filestore, 1 TB) to back a ReadWriteMany PVC in GCP. The problem: for example, I allot a PVC of 5Gi and mount it into an nginx pod under /etc/nginx/test-pvc, but instead of just allotting 5Gi it exposes the whole NFS volume size.

I logged into the nginx pod and did a df -kh:

df -kh
Filesystem           Size  Used Avail Use% Mounted on
overlay               95G   16G   79G  17% /
tmpfs                 64M     0   64M   0% /dev
tmpfs                 63G     0   63G   0% /sys/fs/cgroup
shm                   64M     0   64M   0% /dev/shm
/dev/sda1             95G   16G   79G  17% /etc/hosts
10.x.10.x:/vol 1007G  5.0M  956G   1% /etc/nginx/test-pvc
tmpfs                 63G   12K   63G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                 63G     0   63G   0% /proc/acpi
tmpfs                 63G     0   63G   0% /proc/scsi
tmpfs                 63G     0   63G   0% /sys/firmware

The size of /etc/nginx/test-pvc is 1007G, which is my whole NFS volume size (1 TB). It should have been 5G instead, and even the 5.0M of used space isn't actually used in /etc/nginx/test-pvc. Why is the behaviour like this?

PV and PVC YAML used:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-test
spec:
  capacity:
    storage: 5Gi 
  accessModes:
  - ReadWriteOnce 
  nfs: 
    path: /vol
    server: 10.x.10.x
  persistentVolumeReclaimPolicy: Recycle 


apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-claim1
spec:
  accessModes:
    - ReadWriteOnce 
  storageClassName: ""
  resources:
    requests:
      storage: 5Gi
  volumeName: pv-nfs-test

Nginx deployment YAML:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-pv-demo-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-pv-demo
  template:
    metadata:
      name: nfs-pv-pod
      labels:
        app: nfs-pv-demo
    spec:
      containers:
      - image: nginx
        name: nfs-pv-multi
        imagePullPolicy: Always
        volumeMounts:
          - name: nfs-volume-1
            mountPath: "/etc/nginx/test-pvc"
      volumes:
      - name: nfs-volume-1
        persistentVolumeClaim:
          claimName: nfs-claim1

Is there anything I'm missing? Or is this the behaviour of NFS? If so, what is the best way to handle it in production, as we will have multiple other PVCs and this could cause confusion and storage contention issues.

-- Sanjay M. P.
google-cloud-filestore
google-cloud-platform
kubernetes
kubernetes-pvc
nfs

1 Answer

8/6/2021

Is there anything I'm missing? Or is this the behaviour of NFS?

No, nothing at all. This is simply the way it works. And it's nothing specific to NFS either.

The 5Gi of storage capacity defined in your PV can be treated more like a declaration that you have a PersistentVolume object with 5 gigabytes of underlying storage. But it is nothing more than a declaration; you cannot put any constraint on your available disk capacity this way. So if you have a disk that actually has 100 gigabytes of capacity, it is good practice to declare 100Gi in this field of your PV definition for the sake of consistency.

The storage capacity you set in your PVC is a slightly different story. It can be understood as the minimum storage capacity that would satisfy your request. So if you have, let's say, 3 different PVs with the following declared capacities (declared in the PV definition, no matter what their real capacity is): 3Gi, 10Gi and 100Gi, and you claim 5Gi in your PersistentVolumeClaim, only two of them, i.e. the 10Gi and 100Gi ones, can satisfy such a request.

And as I said above, it doesn't matter that the smallest one, which declares only 3Gi, is in fact backed by quite a large disk of 1000Gi. If you defined a PV object which represents such a disk in your kubernetes environment (and makes it available to be consumed by some PVC and in the end by some Pod which uses it) and you declared that this particular PV has only 3Gi of capacity, the PVC in which you request 5Gi has no means to verify the actual capacity of the disk and "sees" such a volume as one without enough capacity to satisfy a request for 5Gi.
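To make the matching rule concrete, here is a minimal sketch (the PV names below are made up for illustration and reuse the NFS server address from the question): two PVs declaring 3Gi and 10Gi, and a PVC requesting 5Gi. Only a PV declaring at least 5Gi can be bound, regardless of how big the storage behind each PV really is.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-small
spec:
  capacity:
    storage: 3Gi          # declares 3Gi, so it cannot satisfy a 5Gi claim
  accessModes:
  - ReadWriteMany
  nfs:
    path: /vol
    server: 10.x.10.x
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-medium
spec:
  capacity:
    storage: 10Gi         # declares 10Gi, so it can satisfy a 5Gi claim
  accessModes:
  - ReadWriteMany
  nfs:
    path: /vol
    server: 10.x.10.x
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim-5gi
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 5Gi        # binds only to a PV that declares >= 5Gi, here pv-medium

The binding decision is made purely on the declared figures; kubernetes never inspects the real size of the share behind each PV.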

To illustrate that it isn't specific to NFS, you can create a new GCE persistent disk of 100 gigabytes (e.g. via the cloud console, as that seems the easiest way) and then use such a disk in a PV and PVC, which in the end will be used by a simple nginx pod. This is described here.

So you may declare 10Gi in your PV (and then at most 10Gi in the PVC) even though your GCE persistent disk in fact has a capacity of 100 gigabytes. And if you connect to such a pod, you won't see the declared capacity of 10Gi but the real capacity of the disk. That is completely normal and works exactly as it was designed.
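A minimal sketch of such a PV/PVC pair (the disk name my-data-disk is hypothetical, and the in-tree gcePersistentDisk source is used here for brevity; newer clusters would typically provision through the GCE PD CSI driver instead):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-gce-demo
spec:
  capacity:
    storage: 10Gi               # declared capacity only; the disk itself is 100 GB
  accessModes:
  - ReadWriteOnce
  gcePersistentDisk:
    pdName: my-data-disk        # hypothetical name of the pre-created 100 GB disk
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gce-claim
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: ""
  resources:
    requests:
      storage: 10Gi             # at most 10Gi can be requested against this PV
  volumeName: pv-gce-demo

Running df inside a pod that mounts gce-claim would still report roughly 100 GB for that mount, exactly like the 1007G you see with Filestore.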

You may have thought that it works similarly to LVM, where you create a volume group consisting of one or more disks and then you can create as many logical volumes as your underlying capacity allows. PVs in kubernetes don't allow you to do anything like this. The capacity that you "set" in a PV definition is only a declaration, not a constraint of any kind. If you need to mount separate chunks of a huge disk into different pods, you would need to divide it into partitions first and create separate PV objects, each one out of a single partition.
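For the NFS/Filestore case from the question, the closest equivalent to partitioning is to create separate subdirectories on the share and expose each one through its own PV (the /vol/app1 and /vol/app2 paths below are hypothetical and would have to be created on the share first). Each pod then only sees its own directory, although df will still report the full share size and nothing enforces the declared 5Gi figures:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-app1
spec:
  capacity:
    storage: 5Gi          # still only a declaration, not a quota
  accessModes:
  - ReadWriteMany
  nfs:
    path: /vol/app1       # hypothetical subdirectory created on the Filestore share
    server: 10.x.10.x
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-app2
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /vol/app2       # another subdirectory, to be claimed by a different PVC/pod
    server: 10.x.10.x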

-- mario
Source: StackOverflow