Is it possible to mount a disk to a GKE pod and a Compute Engine instance?

10/8/2020

Is it possible to mount a disk to a GKE pod and a Compute Engine instance at the same time?

I have an Ubuntu disk of 10 GB.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  capacity:
    storage: 10G
  accessModes:
    - ReadWriteOnce
  claimRef:
    name: pv-claim-demo
  gcePersistentDisk:
    pdName: pv-test1
    fsType: ext4

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-claim-demo
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10G

deployment.yaml

spec:
  containers:
    - image: wordpress
      name: wordpress
      ports:
        - containerPort: 80
          name: wordpress
      volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: /app/logs
  volumes:
    - name: wordpress-persistent-storage
      persistentVolumeClaim:
        claimName: pv-claim-demo

The idea is to mount the log files generated by the pod to the disk and access them from Compute Engine. I cannot use NFS or hostPath to solve the problem. The other challenge is that multiple pods will be writing to the same PV.

-- pythonhmmm
google-cloud-platform
google-kubernetes-engine
kubernetes

3 Answers

10/8/2020

You can't have many writers on a persistent disk. If you attach the disk in read-only mode, many consumers can read from it (but not write to it, which doesn't match your use case).

The only solution for this is to use NFS-compliant storage. On Google Cloud, that's the Filestore service. It's designed exactly for your use case, and there is a tutorial for GKE.
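A Filestore share is consumed from GKE as an ordinary NFS PersistentVolume, which supports the ReadWriteMany access mode so several pods (and GCE instances) can mount it at once. A minimal sketch is below; the server IP `10.0.0.2` and share path `/vol1` are placeholders for your own Filestore instance's values:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: filestore-pv
spec:
  capacity:
    storage: 1T
  accessModes:
    - ReadWriteMany        # many pods can read and write
  nfs:
    server: 10.0.0.2       # hypothetical Filestore instance IP
    path: /vol1            # hypothetical Filestore share name
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: filestore-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""     # bind to the pre-provisioned PV above
  resources:
    requests:
      storage: 1T
```

The same share can be mounted on a Compute Engine VM with a standard NFS mount, so the VM sees the files the pods write.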

-- guillaume blaquiere
Source: StackOverflow

10/8/2020

Better to use Google Cloud's operations suite for GKE (formerly known as Stackdriver).

There are two APIs which can be used to access the logs from GCE.

-- Martin Zeitler
Source: StackOverflow

10/8/2020

The other challenge is that multiple pods will be writing to the same PV.

Yes, this does not work well unless you have a storage class similar to NFS. The default StorageClass in Google Kubernetes Engine only supports the access mode ReadWriteOnce when dynamically provisioned, so only one replica can mount it.

The idea is to mount the log files generated by the pod to the disk and access them from Compute Engine.

This is not a recommended solution for logs when using Kubernetes. An app on Kubernetes should follow the twelve-factor principles, which include a specific item about logs: the app should log to stdout. For apps that do not follow the twelve-factor principles, this can be solved with a sidecar that tails the log files and prints them to stdout.
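The sidecar pattern can be sketched as a second container in the same pod, sharing the log directory through an `emptyDir` volume. The image, container names, and log path below are assumptions for illustration:

```yaml
# Pod spec fragment: app container plus a log-tailing sidecar.
spec:
  containers:
    - name: wordpress
      image: wordpress
      volumeMounts:
        - name: logs
          mountPath: /app/logs     # app writes its log files here
    - name: log-tailer             # sidecar: streams the log file to stdout
      image: busybox
      command: ["sh", "-c", "tail -n+1 -F /app/logs/app.log"]
      volumeMounts:
        - name: logs
          mountPath: /app/logs     # same volume, so it sees the app's files
  volumes:
    - name: logs
      emptyDir: {}                 # shared between the app and the sidecar
```

Because the sidecar writes to its own stdout, the platform's log collection picks the entries up like any other container log, and no persistent disk is needed.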

Logs printed to stdout are typically forwarded by the platform to a log collection system, as a service. So this is not something the app developer needs to be responsible for.

For how logs are handled by the platform in Google Kubernetes Engine, see Google Cloud's operations suite for GKE.

-- Jonas
Source: StackOverflow