How to mount Google Bucket as local disk on Linux instance with full access rights

3/6/2017

Use the five lines below to install gcsfuse on a brand-new Ubuntu 14 instance:

export GCSFUSE_REPO=gcsfuse-`lsb_release -c -s`
echo "deb http://packages.cloud.google.com/apt $GCSFUSE_REPO main" | sudo tee /etc/apt/sources.list.d/gcsfuse.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

sudo apt-get update
sudo apt-get install gcsfuse

Now create a folder on the local disk (this folder will be used as the mount point for the Google Bucket) and give it full access:

sudo mkdir /home/shared 
sudo chmod 777 /home/shared 
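To confirm the permissions took effect, `stat` can print the directory's octal mode (sketched here on a throw-away directory under /tmp so no sudo is needed; `stat -c` assumes GNU coreutils, which Ubuntu ships):

```shell
# Create a demo directory and open it up, mirroring the steps above
mkdir -p /tmp/shared_demo
chmod 777 /tmp/shared_demo
# Print the octal mode; 777 means read/write/execute for everyone
stat -c '%a' /tmp/shared_demo   # prints 777
```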

Using the gcsfuse command, mount the Google Bucket onto the mount-point folder we created earlier. But first, list the names of the Google Buckets that are linked to your Google Project:

gsutil ls

The Google Project I work on has a single bucket named "my_bucket". Knowing the bucket name, I can run the gcsfuse command that mounts the my_bucket Bucket onto the local /home/shared mount-point folder:

gcsfuse my_bucket /home/shared 

The command logs that the mount was successful:

Using mount point: /home/shared
Opening GCS connection...
Opening bucket...
Mounting file system...
File system has been successfully mounted.

But now, when I try to create a folder inside the mounted /home/shared mount-point folder, I get an error message:

mkdir /home/shared/test

Error:

mkdir: cannot create directory ‘/home/shared/test’: Input/output error

Trying to fix the problem, I successfully unmount it using:

fusermount -u /home/shared

and mount it back, but this time using a different command line:

mount -t gcsfuse -o rw,user  my_bucket /home/shared

But it results in exactly the same permission issue.

As a last resort, I attempted to fix the permission issue by editing the /etc/fstab configuration file with:

sudo nano /etc/fstab

and then appending a line to the end of the file:

my_bucket /home/shared gcsfuse rw,noauto,user

but it did not solve the issue.

What needs to be changed to give all users full access to the mounted Google Bucket, so that they can create, delete and modify the files and folders stored in it?

-- alphanumeric
gcsfuse
google-compute-engine
google-kubernetes-engine
linux
ubuntu

2 Answers

4/20/2017

I saw your question because I was having exactly the same problem, and I had also done the same steps as you. The solution that gives the root user full control of the mounted cloud folder:

You have to go to the Google Cloud Console, search for "Service accounts" and click on it.

Then you have to export the key file of your service account (.json). (I created a new service account with the Google Cloud Shell console using this command: gcloud auth application-default login, and then followed the steps when prompted by the shell.)

Click on Create Key and choose JSON. Upload the .json key file to your Linux server. Then, on your Linux server, run this command:

gcsfuse -o allow_other --gid 0 --uid 0 --file-mode 777 --dir-mode 777 --key-file /path_to_your_keyFile_that_you_just_uploaded.json nameOfYourBucket /path/to/mount

To find your root user's UID & GID, log in to your server as root and type in the terminal: id -u root for the UID and id -g root for the GID.
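These lookups can be captured in shell variables and substituted into the mount command (a sketch: the bucket name and key-file path are placeholders, and the gcsfuse call itself is left commented out since it needs real GCP credentials; on standard Linux systems root is uid 0 / gid 0):

```shell
# Look up root's numeric user and group IDs
UID_ROOT=$(id -u root)   # → 0
GID_ROOT=$(id -g root)   # → 0
echo "mounting with uid=$UID_ROOT gid=$GID_ROOT"

# Substitute them into the mount command from above (placeholders for
# the key file and bucket name -- adjust to your own setup):
# gcsfuse -o allow_other --uid "$UID_ROOT" --gid "$GID_ROOT" \
#     --file-mode 777 --dir-mode 777 \
#     --key-file /path/to/keyfile.json nameOfYourBucket /path/to/mount
```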

Hope I helped, because I struggled for a long time and no resource on the internet really helped. Cheers.

-- Keytrap
Source: StackOverflow

9/24/2019

The answer Keytrap gave is correct. But since 2017, gcsfuse as well as GCP have evolved, and there are some more (maybe easier) options to let gcsfuse connect with a Google account:

  1. If you are running on a Google Compute Engine instance with the storage-full scope configured, then Cloud Storage FUSE can use the Compute Engine built-in service account.
  2. If you installed the Google Cloud SDK and ran gcloud auth application-default login, then Cloud Storage FUSE can use these credentials.
  3. If you set the environment variable GOOGLE_APPLICATION_CREDENTIALS to the path of a service account's JSON key file, then Cloud Storage FUSE will use this credential.
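Option 3 above can be sketched as follows (the key-file path and bucket name are placeholders; the gcsfuse call is commented out since it requires real credentials):

```shell
# GOOGLE_APPLICATION_CREDENTIALS is the standard Application Default
# Credentials variable that Cloud Storage FUSE checks for a key file
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/keyfile.json
echo "$GOOGLE_APPLICATION_CREDENTIALS"

# gcsfuse then picks the credential up without an explicit --key-file flag:
# gcsfuse my_bucket /home/shared
```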

Source: Cloud Storage FUSE

-- Nebulastic
Source: StackOverflow