Google Kubernetes Engine: Not seeing mounted persistent volume in the instance

3/23/2018

I created a 200G disk with the command gcloud compute disks create --size 200GB my-disk

Then I created a PersistentVolume:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-volume
spec:
  capacity:
    storage: 200Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: my-disk
    fsType: ext4

Then I created a PersistentVolumeClaim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi

Then I created a StatefulSet and mounted the volume at /mnt/disks, which is an existing directory. statefulset.yaml:

apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: ...
spec:
    ...
    spec:
      containers:
      - name: ...
        ...
        volumeMounts:
        - name: my-volume
          mountPath: /mnt/disks
      volumes:
      - name: my-volume
        emptyDir: {}
  volumeClaimTemplates:
  - metadata:
      name: my-claim
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 200Gi

I ran the command kubectl get pv and saw that the disk was successfully mounted to each instance:

NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                    STORAGECLASS   REASON    AGE
my-volume                                  200Gi      RWO            Retain           Available                                                                     19m
pvc-17c60f45-2e4f-11e8-9b77-42010af0000e   200Gi      RWO            Delete           Bound       default/my-claim-xxx_1   standard                 13m
pvc-5972c804-2e4e-11e8-9b77-42010af0000e   200Gi      RWO            Delete           Bound       default/my-claim                         standard                 18m
pvc-61b9daf9-2e4e-11e8-9b77-42010af0000e   200Gi      RWO            Delete           Bound       default/my-claimxxx_0   standard                 18m

But when I SSH into an instance and run df -hT, I do not see the mounted volume. Below is the output:

Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/root      ext2      1.2G  447M  774M  37% /
devtmpfs       devtmpfs  1.9G     0  1.9G   0% /dev
tmpfs          tmpfs     1.9G     0  1.9G   0% /dev/shm
tmpfs          tmpfs     1.9G  744K  1.9G   1% /run
tmpfs          tmpfs     1.9G     0  1.9G   0% /sys/fs/cgroup
tmpfs          tmpfs     1.9G     0  1.9G   0% /tmp
tmpfs          tmpfs     256K     0  256K   0% /mnt/disks
/dev/sda8      ext4       12M   28K   12M   1% /usr/share/oem
/dev/sda1      ext4       95G  3.5G   91G   4% /mnt/stateful_partition
tmpfs          tmpfs     1.0M  128K  896K  13% /var/lib/cloud
overlayfs      overlay   1.0M  148K  876K  15% /etc

Does anyone have any idea?

It's also worth mentioning that I'm trying to mount the disk into a Docker image running in Kubernetes Engine. The pod was created with the commands below:

docker build -t gcr.io/xxx .
gcloud docker -- push gcr.io/xxx
kubectl create -f statefulset.yaml

The instance I SSHed into is the one that runs the Docker image. I do not see the volume in either the instance or the Docker container.

UPDATE: I found the volume. I ran df -ahT on the instance and saw the relevant entries:

/dev/sdb       -               -     -     -    - /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/gke-xxx-cluster-c-pvc-61b9daf9-2e4e-11e8-9b77-42010af0000e
/dev/sdb       -               -     -     -    - /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/gke-xxx-cluster-c-pvc-61b9daf9-2e4e-11e8-9b77-42010af0000e
/dev/sdb       -               -     -     -    - /home/kubernetes/containerized_mounter/rootfs/var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/gke-xxx-cluster-c-pvc-61b9daf9-2e4e-11e8-9b77-42010af0000e
/dev/sdb       -               -     -     -    - /home/kubernetes/containerized_mounter/rootfs/var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/gke-xxx-cluster-c-pvc-61b9daf9-2e4e-11e8-9b77-42010af0000e
/dev/sdb       -               -     -     -    - /var/lib/kubelet/pods/61bb679b-2e4e-11e8-9b77-42010af0000e/volumes/kubernetes.io~gce-pd/pvc-61b9daf9-2e4e-11e8-9b77-42010af0000e
/dev/sdb       -               -     -     -    - /var/lib/kubelet/pods/61bb679b-2e4e-11e8-9b77-42010af0000e/volumes/kubernetes.io~gce-pd/pvc-61b9daf9-2e4e-11e8-9b77-42010af0000e
/dev/sdb       -               -     -     -    - /home/kubernetes/containerized_mounter/rootfs/var/lib/kubelet/pods/61bb679b-2e4e-11e8-9b77-42010af0000e/volumes/kubernetes.io~gce-pd/pvc-61b9daf9-2e4e-11e8-9b77-42010af0000e
/dev/sdb       -               -     -     -    - /home/kubernetes/containerized_mounter/rootfs/var/lib/kubelet/pods/61bb679b-2e4e-11e8-9b77-42010af0000e/volumes/kubernetes.io~gce-pd/pvc-61b9daf9-2e4e-11e8-9b77-42010af0000e

Then I went inside the Docker container and ran df -ahT, and got:

Filesystem     Type     Size  Used Avail Use% Mounted on
/dev/sda1      ext4      95G  3.5G   91G   4% /mnt/disks

Why am I seeing a 95G total size instead of 200G, which is the size of my volume?

More info: kubectl describe pod

Name:           xxx-replicaset-0
Namespace:      default
Node:           gke-xxx-cluster-default-pool-5e49501c-nrzt/10.128.0.17
Start Time:     Fri, 23 Mar 2018 11:40:57 -0400
Labels:         app=xxx-replicaset
                controller-revision-hash=xxx-replicaset-755c4f7cff
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"StatefulSet","namespace":"default","name":"xxx-replicaset","uid":"d6c3511f-2eaf-11e8-b14e-42010af0000...
                kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container xxx-deployment
Status:         Running
IP:             10.52.4.5
Created By:     StatefulSet/xxx-replicaset
Controlled By:  StatefulSet/xxx-replicaset
Containers:
  xxx-deployment:
    Container ID:   docker://137b3966a14538233ed394a3d0d1501027966b972d8ad821951f53d9eb908615
    Image:          gcr.io/sampeproject/xxxstaging:v1
    Image ID:       docker-pullable://gcr.io/sampeproject/xxxstaging@sha256:a96835c2597cfae3670a609a69196c6cd3d9cc9f2f0edf5b67d0a4afdd772e0b
    Port:           8080/TCP
    State:          Running
      Started:      Fri, 23 Mar 2018 11:42:17 -0400
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        100m
    Environment:  <none>
    Mounts:
      /mnt/disks from my-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-hj65g (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  my-claim:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  my-claim-xxx-replicaset-0
    ReadOnly:   false
  my-volume:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  default-token-hj65g:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-hj65g
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.alpha.kubernetes.io/notReady:NoExecute for 300s
                 node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                 Age                From                                                      Message
  ----     ------                 ----               ----                                                      -------
  Warning  FailedScheduling       10m (x4 over 10m)  default-scheduler                                         PersistentVolumeClaim is not bound: "my-claim-xxx-replicaset-0" (repeated 5 times)
  Normal   Scheduled              9m                 default-scheduler                                         Successfully assigned xxx-replicaset-0 to gke-xxx-cluster-default-pool-5e49501c-nrzt
  Normal   SuccessfulMountVolume  9m                 kubelet, gke-xxx-cluster-default-pool-5e49501c-nrzt  MountVolume.SetUp succeeded for volume "my-volume"
  Normal   SuccessfulMountVolume  9m                 kubelet, gke-xxx-cluster-default-pool-5e49501c-nrzt  MountVolume.SetUp succeeded for volume "default-token-hj65g"
  Normal   SuccessfulMountVolume  9m                 kubelet, gke-xxx-cluster-default-pool-5e49501c-nrzt  MountVolume.SetUp succeeded for volume "pvc-902c57c5-2eb0-11e8-b14e-42010af0000e"
  Normal   Pulling                9m                 kubelet, gke-xxx-cluster-default-pool-5e49501c-nrzt  pulling image "gcr.io/sampeproject/xxxstaging:v1"
  Normal   Pulled                 8m                 kubelet, gke-xxx-cluster-default-pool-5e49501c-nrzt  Successfully pulled image "gcr.io/sampeproject/xxxstaging:v1"
  Normal   Created                8m                 kubelet, gke-xxx-cluster-default-pool-5e49501c-nrzt  Created container
  Normal   Started                8m                 kubelet, gke-xxx-cluster-default-pool-5e49501c-nrzt  Started container

It seems like it did not mount the correct volume. I ran lsblk in the Docker container:

NAME      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda       8:0    0  100G  0 disk 
├─sda1    8:1    0 95.9G  0 part /mnt/disks
├─sda2    8:2    0   16M  0 part 
├─sda3    8:3    0    2G  0 part 
├─sda4    8:4    0   16M  0 part 
├─sda5    8:5    0    2G  0 part 
├─sda6    8:6    0  512B  0 part 
├─sda7    8:7    0  512B  0 part 
├─sda8    8:8    0   16M  0 part 
├─sda9    8:9    0  512B  0 part 
├─sda10   8:10   0  512B  0 part 
├─sda11   8:11   0    8M  0 part 
└─sda12   8:12   0   32M  0 part 
sdb       8:16   0  200G  0 disk 

Why is this happening?

-- user3908406
docker
google-cloud-platform
google-kubernetes-engine
kubernetes

2 Answers

4/6/2018

The PVC is not mounted into your container because you did not actually specify the PVC in your container's volumeMounts. Only the emptyDir volume was specified.
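
A minimal sketch of the fix, keeping the rest of your manifest unchanged: the volumeMounts entry should reference the volumeClaimTemplate name rather than the emptyDir volume, roughly like this:

  template:
    spec:
      containers:
      - name: ...
        volumeMounts:
        - name: my-claim            # must match the volumeClaimTemplate name
          mountPath: /mnt/disks
      # the emptyDir volume named my-volume is no longer needed here
  volumeClaimTemplates:
  - metadata:
      name: my-claim
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 200Gi

With that change, the StatefulSet controller creates (or reuses) a PVC named my-claim-<pod-name> for each replica, and the bound disk shows up at /mnt/disks inside the container.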

I actually recently modified the GKE StatefulSet tutorial. Before, some of the steps were incorrect, saying to manually create the PD and PV objects. It has since been corrected to use dynamic provisioning instead.

Please try that out and see if the updated steps work for you.

-- Michelle
Source: StackOverflow

3/28/2018

When you use PVCs, K8s manages persistent disks for you.

Exactly how PVs are created is defined by the provisioner in the storage class. Since you use GKE, your default StorageClass uses the kubernetes.io/gce-pd provisioner (https://kubernetes.io/docs/concepts/storage/storage-classes/#gce).

In other words, a new PV is created for each pod.
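
For reference, the default StorageClass on a GKE cluster looks roughly like the sketch below (the exact definition may differ; check it with kubectl get storageclass standard -o yaml):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard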

If you would like to use an existing disk, you can use Volumes instead of PVCs (https://kubernetes.io/docs/concepts/storage/volumes/#gcepersistentdisk).
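
For example, the pod template in your StatefulSet could reference the pre-created disk directly, roughly like this (a sketch reusing the my-disk name from your gcloud command):

      containers:
      - name: ...
        volumeMounts:
        - name: my-volume
          mountPath: /mnt/disks
      volumes:
      - name: my-volume
        gcePersistentDisk:
          pdName: my-disk
          fsType: ext4

Note that a GCE persistent disk can be mounted read-write by only one node at a time, and it has to be in the same zone as that node.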

-- Maciek Sawicki
Source: StackOverflow