Cannot use existing PersistentVolumes that are already used by other nodes in Kubernetes on Google Cloud Platform

7/3/2018

I am trying to stay on the free tier of Google Cloud Platform, which only permits 3 nodes and 30 GB of storage. In the cluster that was created, each node is mapped to its own 10 GB disk.

When I tried to mount a PersistentVolume and PersistentVolumeClaim onto one of the existing disks, the error shows:

Attach failed for volume "myapp-pv" : googleapi: Error 400: The disk resource 'projects/myapp-dev/zones/us-central1-a/disks/gke-myapp-dev-clus-default-pool-64e30c4b-dvkc' is already being used by 'projects/myapp-dev/zones/us-central1-a/instances/gke-myapp-dev-clus-default-pool-64e30c4b-dvkc'

The working solution for me is to create another disk, but that puts me outside the free tier. How can we stay in the free tier without creating another persistent disk in GCP?
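A PV/PVC pair pointing at one of the existing node disks looks roughly like this (the claim name and sizes are illustrative; the disk name is the one from the error above):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: myapp-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    # the existing node boot disk named in the error above
    pdName: gke-myapp-dev-clus-default-pool-64e30c4b-dvkc
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-pvc             # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""        # bind to the pre-created PV, skip dynamic provisioning
  volumeName: myapp-pv
  resources:
    requests:
      storage: 10Gi
```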

-- Bilal Bayasut
google-compute-engine
kubernetes

1 Answer

7/3/2018

When I tried to mount a PersistentVolume and PersistentVolumeClaim onto one of the existing disks, the error shows

This error happens because of this constraint on PVs backed by GCE persistent disks:

Important! A volume can only be mounted using one access mode at a time,
even if it supports many. For example, a GCEPersistentDisk can be mounted as ReadWriteOnce
by a single node or ReadOnlyMany by many nodes, but not at the same time.

The table in the linked documentation shows that a GCEPersistentDisk can't be mounted as ReadWriteMany, so if you need to attach it that way you have to use some other volume plugin.
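If read-only access is enough for your workload, a GCE PD can in principle be shared by many nodes as ReadOnlyMany. A minimal sketch (names are illustrative); note this will not help with the node boot disks from your error, since those are already attached read-write to their instances:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-readonly-pv     # illustrative name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadOnlyMany             # many nodes may mount it, read-only
  gcePersistentDisk:
    pdName: some-existing-disk # illustrative disk name
    fsType: ext4
    readOnly: true
```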

How can we stay in the free tier without creating another persistent disk in GCP?

Just some thoughts... With the free tier you are limited in the number of nodes and the disk space available:

  • You can always 'simulate' ReadWriteMany with the NFS volume plugin, for example (installing your own provisioner for NFS), provided your use case does not exclude NFS. The downside is that you need to install an NFS provisioner (squeeze it into your capacity) and it is not well suited for fast I/O (databases and the like); see the NFS sketch after this list.
  • You can use hostPath on each of the nodes and manually juggle pods around, but that is prone to data loss and not really a proper Kubernetes approach to PV handling. It is something to consider if you need fast I/O (you are testing with databases), and proper backups should be in place to avoid data loss if a node dies; a hostPath sketch follows below as well.
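A minimal sketch of the NFS approach, assuming an NFS server (or provisioner) is already running in the cluster; the server address and export path are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-nfs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany           # NFS supports RWX, unlike a GCE PD
  nfs:
    server: 10.0.0.10         # address of the in-cluster NFS service (illustrative)
    path: /exports
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""        # bind to the pre-created PV, skip dynamic provisioning
  resources:
    requests:
      storage: 5Gi
```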
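And a minimal hostPath sketch, with the caveats above; the path and names are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-hostpath-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /mnt/data           # directory on the node's own boot disk
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-hostpath-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""
  resources:
    requests:
      storage: 2Gi
```

Remember to pin the consuming pod to that node with nodeSelector or nodeName, since hostPath data does not move with the pod.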
-- Const
Source: StackOverflow