k8s fails scheduling of local SSD volume on GCP

9/6/2018

I'm trying to use a local SSD on Google Cloud as a PersistentVolume. I followed the docs to set up automated local SSD provisioning, and kubectl get pv returns a valid volume:

NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM     STORAGECLASS   REASON    AGE
local-pv-9721c951   368Gi      RWO            Delete           Available             local-scsi               1h
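
Inspecting it with kubectl get pv local-pv-9721c951 -o yaml shows roughly the following (trimmed; the mount path and node name are illustrative and will differ per cluster). Note that the access mode is ReadWriteOnce and the node affinity pins the volume to a single node:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-9721c951
spec:
  capacity:
    storage: 368Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-scsi
  local:
    path: /mnt/disks/ssd0          # illustrative; the real path comes from the provisioner config
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - gke-example-node   # illustrative node name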

The problem is that I cannot get my pod to bind to it. kubectl get pvc keeps showing this:

NAME      STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mapdata   Pending                                       local-scsi     7m

and kubectl get events gives me these:

LAST SEEN   FIRST SEEN   COUNT     NAME                                                KIND                    SUBOBJECT   TYPE      REASON                 SOURCE                        MESSAGE
7m          7m           1         v3tiles.1551c0bbcb23d983                            Service                             Normal    EnsuredLoadBalancer    service-controller            Ensured load balancer
2m          8m           24        maptilesbackend-8645566545-x44nl.1551c0ae27d06fca   Pod                                 Warning   FailedScheduling       default-scheduler             0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind.
2m          8m           26        mapdata.1551c0adf908e362                            PersistentVolumeClaim               Normal    WaitForFirstConsumer   persistentvolume-controller   waiting for first consumer to be created before binding

What would I need to do to bind that SSD to my pod? Here's the code I have been experimenting with:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: maptilesbackend
  namespace: default
spec:
  selector:
    matchLabels:
      app: maptilesbackend
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: maptilesbackend
    spec:
      containers:
      - image: klokantech/openmaptiles-server
        imagePullPolicy: Always
        name: maptilesbackend
        volumeMounts:
          - mountPath: /data
            name: mapdata
            readOnly: true
      volumes:
        - name: mapdata
          persistentVolumeClaim:
            claimName: mapdata
            readOnly: true
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: "local-scsi"
provisioner: "kubernetes.io/no-provisioner"
volumeBindingMode: "WaitForFirstConsumer"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mapdata
spec:
  storageClassName: local-scsi
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 300Gi
-- Yurik
google-kubernetes-engine
kubernetes

2 Answers

9/6/2018

It turns out that accessModes: ReadOnlyMany does not work in this case. I'm not sure how to make it work... I will post if I find more information.

-- Yurik
Source: StackOverflow

9/7/2018

ReadOnlyMany doesn't make sense for local SSDs

As per the docs:

ReadOnlyMany – the volume can be mounted read-only by many nodes

You can't mount a local SSD on many nodes because it's local to one node only.
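
A claim only binds to a PV whose access modes include what the claim requests, and the auto-provisioned local PV advertises only RWO (ReadWriteOnce, per the kubectl get pv output in the question), so a claim asking for ReadOnlyMany can never match. A minimal sketch of a claim that should bind, assuming the same storage class and size:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mapdata
spec:
  storageClassName: local-scsi
  accessModes:
    - ReadWriteOnce   # must be a mode the PV actually offers
  resources:
    requests:
      storage: 300Gi

You can still get read-only behavior inside the pod by setting readOnly: true on the volumeMount, as the Deployment in the question already does.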

-- Rico
Source: StackOverflow