Same data volume attached to multiple containers, even on different hosts

5/12/2018

I'm able to bind a Docker volume to a specific container in a swarm thanks to Flocker, but now I want to run multiple replicas of my server (for load balancing), so I'm looking for a way to bind the same data volume to multiple replicas of a Docker service.
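For reference, this is roughly my current setup, as a minimal sketch (the image and volume names are placeholders):

```yaml
# stack.yml – deployed with: docker stack deploy -c stack.yml myapp
version: "3"
services:
  server:
    image: myserver:latest    # placeholder image
    volumes:
      - mydata:/data
volumes:
  mydata:
    driver: flocker           # Flocker provisions the volume and it follows the container
```

In the Flocker documentation I have found the following: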

Can more than one container access the same volume? Flocker works by creating a one-to-one relationship between a volume and a container. This means you can have multiple volumes for one container, and those volumes will always follow that container.

Flocker attaches volumes to the individual agent host (docker host) and this can only be one host at a time because Flocker attaches Block-based storage. Nodes on different hosts cannot access the same volume, because it can only be attached to one node at a time.

If multiple containers on the same host want to use the same volume, they can, but be careful because multiple containers accessing the same storage volume can cause corruption.

Can I attach a single volume to multiple hosts? Not currently; support for multi-attach backends like GCE in read-only mode, NFS-like backends, or distributed filesystems like GlusterFS would need to be integrated. Flocker focuses mainly on block-storage use cases that attach a volume to a single node at a time.

So I think it's not possible to do what I want with Flocker. I could use a different orchestrator (k8s) if that would help, even though I have no experience with it.

I would rather not use NAS/NFS or any distributed filesystem.

Any suggestions?

Thanks in advance.

-- Antonio Caristia
docker
docker-swarm
flocker
kubernetes

1 Answer

5/15/2018

In k8s, you can mount a volume to different Pods at the same time if the technology that backs the volume supports shared access.

As mentioned in Kubernetes Persistent Volumes:

Access Modes A PersistentVolume can be mounted on a host in any way supported by the resource provider. As shown below, providers will have different capabilities and each PV’s access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read/write clients, but a specific NFS PV might be exported on the server as read-only. Each PV gets its own set of access modes describing that specific PV’s capabilities.

The access modes are:

  • ReadWriteOnce – the volume can be mounted as read-write by a single node
  • ReadOnlyMany – the volume can be mounted read-only by many nodes
  • ReadWriteMany – the volume can be mounted as read-write by many nodes
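For example, a shared volume is declared through a PersistentVolume offering one of the *Many modes, plus a PersistentVolumeClaim that requests it. A minimal sketch, using NFS only because it is the example in the quoted docs (all names, the server address, and the export path are placeholders):

```yaml
# PersistentVolume backed by an NFS export, allowing shared read-write access.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-pv             # placeholder name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany           # many nodes may mount this volume read-write
  nfs:
    server: 10.0.0.10         # placeholder NFS server address
    path: /exports/shared     # placeholder export path
---
# Claim requesting shared read-write access; Kubernetes binds it to a
# PersistentVolume whose access modes and capacity satisfy the request.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
```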

Types of volumes that support ReadOnlyMany mode:

  • AzureFile
  • CephFS
  • FC
  • FlexVolume
  • GCEPersistentDisk
  • Glusterfs
  • iSCSI
  • Quobyte
  • NFS
  • RBD
  • ScaleIO

Types of volumes that support ReadWriteMany mode:

  • AzureFile
  • CephFS
  • Glusterfs
  • Quobyte
  • RBD
  • PortworxVolume
  • VsphereVolume (works when Pods are collocated)
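
Every replica then mounts the same claim, so all Pods, even when scheduled on different nodes, share the volume. A minimal sketch, assuming the shared-pvc claim above and a placeholder server image:

```yaml
# Three replicas, possibly on different nodes, all mounting the same
# ReadWriteMany claim at /data.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: server
  template:
    metadata:
      labels:
        app: server
    spec:
      containers:
        - name: server
          image: myserver:latest        # placeholder image
          volumeMounts:
            - name: shared-data
              mountPath: /data
      volumes:
        - name: shared-data
          persistentVolumeClaim:
            claimName: shared-pvc       # the claim sketched above
```

As with the Flocker case, concurrent writers still have to coordinate access, since multiple containers writing to the same storage can corrupt data.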
-- VAS
Source: StackOverflow