I have deployed my services using Docker Swarm orchestration on 3 GCP VMs: 1 as the manager node and 2 as worker nodes. Now there is a requirement to share data across the nodes. After doing some research, I found that a possible solution is to mount an NFS drive on each VM and put the shareable data on it. For this I need an NFS share on GCP itself, so my question is: how can I get an NFS share drive? Can I convert a disk from one of the existing VMs into an NFS share, or will I need new shareable storage? And if I have to take new storage, which should I take (Bucket or Filestore), and how do I make it NFS-shareable? I am trying to explore every possibility, so please recommend all the solutions you can think of; I will try to implement them and settle on the best one.
Is it possible to achieve this without adding a new NFS drive, maybe using Kubernetes's Dynamic Volume Provisioning feature, or by any other means?
At this point Google Cloud Filestore seems like your best bet. The product page has straightforward instructions:
Simple commands to create a Filestore instance with gcloud.
gcloud filestore instances create nfs-server \
    --project=[PROJECT_ID] \
    --zone=us-central1-c \
    --tier=STANDARD \
    --file-share=name="vol1",capacity=1TB \
    --network=name="default"
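If you don't already know the share's IP address (the 10.0.0.2 used in the mount step below), you can look it up once the instance is up. One way to pull it, assuming the instance name and zone from the create command above (the --format expression is just one way to extract the field):

gcloud filestore instances describe nfs-server \
    --zone=us-central1-c \
    --format="value(networks[0].ipAddresses[0])"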
Simple commands to install NFS, mount your file share, and set access permissions.
sudo apt-get -y update
sudo apt-get -y install nfs-common
sudo mkdir /mnt/test
sudo mount 10.0.0.2:/vol1 /mnt/test
sudo chmod go+rw /mnt/test
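Once the share is reachable, you can also wire it into Swarm directly: the local volume driver can mount NFS per task, so each node mounts the export wherever a replica lands, and you don't strictly need the host-level /mnt/test mount for your containers. A minimal sketch, assuming the Filestore IP 10.0.0.2 and share name vol1 from above; the service name, image, and target path are placeholders:

# run from the manager node; each task mounts the NFS export at /data
docker service create \
    --name web \
    --replicas 3 \
    --mount 'type=volume,source=shared-data,target=/data,volume-driver=local,volume-opt=type=nfs,volume-opt=device=:/vol1,volume-opt=o=addr=10.0.0.2' \
    nginx:alpine

The nice part of this approach is that the volume definition travels with the service, so you don't have to pre-mount anything on the workers.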
The part "convert disk from one of the existing VMs to NFS share" is very confusing, though. Given you're trying to have a common share between machines.
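That said, if you really want to avoid a new storage product, the closest equivalent is running an NFS server on one of your VMs and exporting a directory from its disk, then mounting it from the other nodes with the same nfs-common steps above. A rough sketch, assuming Debian/Ubuntu; the export path and subnet range are placeholders (10.128.0.0/20 is the default-network range for us-central1, so substitute your VPC subnet):

# on the VM whose disk you want to share
sudo apt-get -y install nfs-kernel-server
sudo mkdir -p /srv/share
# allow clients from your VPC subnet (placeholder range; use your own)
echo '/srv/share 10.128.0.0/20(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra

The other nodes would then mount <server-internal-ip>:/srv/share instead of the Filestore address. Keep in mind this makes that one VM a single point of failure for your shared data, which is the main thing Filestore saves you from managing.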