Is there a method to share a file across container with write & read access in kubernetes

2/4/2022

I just started exploring Kubernetes concepts. I am using a Helm chart to deploy a pod. I'm stuck on the problem below; could anyone kindly help me unblock this issue?

I have three containers, let's say A, B and C. Container A has a file "/root/dir1/sample.txt". Alternatively, I can prepare this file offline, but it needs to be mounted in all three containers. Containers A and C each run a service that updates that file, and container B runs a service that reads it. So I need this file to be shared across all three containers.

Approach 1: Using an emptyDir volume, I tried mounting this file, but it didn't help my case. When I mount an emptyDir at /root/dir1, I lose all the other files under dir1 that come from the container image, and I don't want those other files shared across containers anyway. I also need the file at the same path "/root/dir1/sample.txt", rather than creating a separate shared directory, say "/root/dir2", and copying sample.txt there as "/root/dir2/sample.txt". Mounting /root/dir2 in all three containers does work (a write in any container is reflected in the others), but it doesn't meet my requirement of keeping the original path "/root/dir1/sample.txt".
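A minimal sketch of Approach 1 (pod name, container names, and image are placeholders), showing how mounting the emptyDir at /root/dir1 shadows the files the image shipped in that directory:

```yaml
# Sketch only: names and image are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: shared-file-demo
spec:
  volumes:
    - name: shared-data
      emptyDir: {}
  containers:
    - name: container-a
      image: YOUR_IMAGE
      volumeMounts:
        - name: shared-data
          mountPath: /root/dir1   # hides the image's original /root/dir1 contents
    - name: container-b
      image: YOUR_IMAGE
      volumeMounts:
        - name: shared-data
          mountPath: /root/dir1
```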

Approach 2: Using a configMap volume, I can mount the file "sample.txt" under dir1 as expected. But it is mounted as a read-only filesystem, so containers A and C are unable to write to the file.
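A sketch of Approach 2, assuming a ConfigMap named sample-config that holds sample.txt; with subPath, only the single file appears under dir1 and the rest of the image's files survive, but the mount is still read-only:

```yaml
# Sketch only: ConfigMap name and image are assumptions.
spec:
  volumes:
    - name: sample-file
      configMap:
        name: sample-config
  containers:
    - name: container-a
      image: YOUR_IMAGE
      volumeMounts:
        - name: sample-file
          mountPath: /root/dir1/sample.txt
          subPath: sample.txt   # mounts only this file, read-only
```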

So the above two approaches did not help my case. It would be great if anyone could explain how to mount a file directly into containers with write access, under the same directory structure, shared across containers. Or perhaps some volume type available in Kubernetes (https://kubernetes.io/docs/concepts/storage/volumes/#cinder) will fit my case.

Thanks in advance!

-- anonymous user
kubernetes

2 Answers

2/4/2022

This is a read-write-many scenario, and there are two ways I know of to do this.

First, you can use an external mount (such as an NFS server) and mount that into your pod. If you don't have an NFS server of your own, you can use a managed NFS service such as Cloud Filestore on Google Cloud or EFS on AWS (note that this will incur a cost).
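A minimal sketch of the first option, assuming an NFS server reachable at nfs.example.com that exports /exports/shared (both are placeholder values):

```yaml
# Sketch only: server address and export path are assumptions.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-nfs-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs.example.com
    path: /exports/shared
```

A PersistentVolumeClaim bound to this volume can then be mounted read-write by all three containers at once.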

Second, you can install an NFS provisioner into the cluster (such as this one), which will create a new StorageClass you can use to provision a read-write-many-capable PersistentVolume and share it across your pods. While this is a cluster-native option, it does have its own issues: if the provisioner pod crashes or is evicted, the pods depending on the NFS mount will hang and then crash as well, so YMMV.
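With a provisioner installed, the pods would claim storage roughly like this (the StorageClass name nfs-client is an assumption; check what your provisioner actually creates):

```yaml
# Sketch only: the StorageClass name depends on the provisioner you install.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client
  resources:
    requests:
      storage: 1Gi
```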

-- Blender Fox
Source: StackOverflow

2/4/2022

I think your best option is to use an emptyDir and an initContainer that mounts it at another path and copies the original files into it. It's not an uncommon pattern; you can see it in action in the RabbitMQ cluster operator.

That would look something like this in the pod:

(...)
spec:
  volumes:
    - name: shared-dir
      emptyDir: {}
  initContainers:
    - name: prepare-dir
      image: YOUR_IMAGE
      command:
        - sh
        - '-c'
        # copy the directory's contents (cp needs -r for a directory)
        - 'cp -r /root/dir1/. /tmp/dir1/'
      volumeMounts:
        - name: shared-dir
          mountPath: /tmp/dir1/
  containers:
    - name: container-a
      image: YOUR_IMAGE
      volumeMounts:
        - name: shared-dir
          mountPath: /root/dir1/
    - name: container-b
      image: YOUR_IMAGE
      volumeMounts:
        - name: shared-dir
          mountPath: /root/dir1/
(...)
-- chicocvenancio
Source: StackOverflow