Mounting k8s persistent volume fails silently

7/5/2018

I am trying to mount a PV into a pod with the following:

    {
      "kind": "PersistentVolume",
      "apiVersion": "v1",
      "metadata": {
        "name": "pv",
        "labels": {
          "type": "ssd1-zone1"
        }
      },
      "spec": {
        "capacity": {
          "storage": "150Gi"
        },
        "hostPath": {
          "path": "/mnt/data"
        },
        "accessModes": [
          "ReadWriteOnce"
        ],
        "persistentVolumeReclaimPolicy": "Retain",
        "storageClassName": "zone1"
      }
    }

    {
      "kind": "PersistentVolumeClaim",
      "apiVersion": "v1",
      "metadata": {
        "name": "pvc",
        "namespace": "clever"
      },
      "spec": {
        "accessModes": [
          "ReadWriteOnce"
        ],
        "resources": {
          "requests": {
            "storage": "150Gi"
          }
        },
        "volumeName": "pv",
        "storageClassName": "zone1"
      }
    }

    kind: Pod
    apiVersion: v1
    metadata:
      name: task-pv-pod
    spec:
      volumes:
        - name: pv
          persistentVolumeClaim:
            claimName: pvc
      containers:
        - name: task-pv-container
          image: nginx
          ports:
            - containerPort: 80
              name: "http-server"
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: pv

The pod is created properly and uses the PVC without problems. When I ssh into the pod to inspect the mount, however, the size is 50G, which is the size of the attached storage and not the 150Gi volume I specified.

    root@task-pv-pod:/# df -aTh | grep "/html"
    /dev/vda1      xfs       50G   13G   38G  26% /usr/share/nginx/html

The PVC appears to be correct too:

    root@5139993be066:/# kubectl describe pvc pvc
    Name:          pvc
    Namespace:     default
    StorageClass:  zone1
    Status:        Bound
    Volume:        pv
    Labels:        <none>
    Annotations:   kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"pvc","namespace":"default"},"spec":{"accessModes":["ReadWriteO...
                   pv.kubernetes.io/bind-completed=yes
    Finalizers:    []
    Capacity:      150Gi
    Access Modes:  RWO
    Events:        <none>

I have deleted and recreated the volume and the claim many times and tried different images for my pod. Nothing works.

-- Mathieu Nls
kubernetes
openstack
persistence

1 Answer

7/6/2018

It looks like your /mnt/data is on the root partition, so it reports the same free space as any other folder in the root filesystem.

The thing about requested and defined capacities for PVs/PVCs is that these are only values used for matching, or as hints for a dynamic provisioner. In the case of a hostPath volume and a manually created PV, you can define 300TB and it will bind, even if the real folder behind the hostPath only has 5G, because the real size of the device is never verified (which is reasonable, since you simply trust the data provided in the PV).
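
As a minimal sketch of that behaviour (the name oversized-pv is hypothetical; the path and storage class are borrowed from the question), a hostPath PV can declare far more capacity than the node actually has, and a matching claim will still bind:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: oversized-pv        # hypothetical name
    spec:
      capacity:
        storage: 300Ti          # far larger than any disk on the node
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      hostPath:
        path: /mnt/data         # same path as in the question
      storageClassName: zone1

Nothing in the control plane compares that 300Ti figure against the real device, so the only capacity you will ever see from inside the pod is whatever df reports for the backing filesystem.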

So, as I said, check whether your /mnt/data is not just part of the rootfs. If you still have the problem, provide the output of the mount command on the node where the pod is running.
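
For example, either of the following, run on the node rather than inside the pod, will show which filesystem actually backs /mnt/data (a sketch; the path is taken from the PV above):

    # If /mnt/data is not its own mount point, df falls back to the
    # filesystem containing it (here, the 50G rootfs on /dev/vda1).
    df -h /mnt/data

    # findmnt resolves the nearest mount point for the path;
    # "/" in the TARGET column means /mnt/data lives on the rootfs.
    findmnt --target /mnt/data

    # Or filter the raw mount table directly:
    mount | grep /mnt/data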

-- Radek 'Goblin' Pieczonka
Source: StackOverflow