After creating the AWS EFS file system, I mounted it into one of my deployment's containers as the /data/files directory:
    volumeMounts:
      - name: efs-persistent-storage
        mountPath: /data/files
        readOnly: false
    volumes:
      - name: efs-persistent-storage
        nfs:
          server: fs-1234.efs.us-west-2.amazonaws.com
          path: /files
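For context, here is roughly where those two snippets sit in my full Deployment manifest (the deployment name and image below are placeholders; everything else is as above):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app              # placeholder name
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            - name: my-app
              image: my-app:latest   # placeholder image
              # volumeMounts lives under the container
              volumeMounts:
                - name: efs-persistent-storage
                  mountPath: /data/files
                  readOnly: false
          # volumes lives under the pod spec
          volumes:
            - name: efs-persistent-storage
              nfs:
                server: fs-1234.efs.us-west-2.amazonaws.com
                path: /files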
I can now create, modify, and delete files stored on the EFS drive. But running a .sh script that tries to copy files fails, reporting that the permissions on the /data/files directory do not allow it to create files. I double-checked the directory permissions, and they are wide open. How can I make this work?
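In case it is relevant: the container currently runs as whatever user the image defaults to. One thing I could try is pinning the UID/GID via a pod-level securityContext so the process matches the owner of the exported /files directory; a minimal sketch, with 1000/1000 as guessed values, would be:

    spec:
      template:
        spec:
          # Run the container process as a fixed UID/GID so it matches
          # the ownership of the files on the NFS export (values are guesses)
          securityContext:
            runAsUser: 1000
            runAsGroup: 1000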
Maybe the problem is that I am pointing directly at the EFS server fs-1234.efs.us-west-2.amazonaws.com? Would using a PersistentVolumeClaim instead give me more options?
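For reference, my understanding is that the PersistentVolume/PersistentVolumeClaim variant would look roughly like this (the names efs-pv and efs-claim and the 5Gi figure are placeholders; a capacity is required by the API but not enforced for NFS):

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: efs-pv              # placeholder name
    spec:
      capacity:
        storage: 5Gi            # required field, not enforced for NFS
      accessModes:
        - ReadWriteMany         # EFS supports many concurrent writers
      persistentVolumeReclaimPolicy: Retain
      nfs:
        server: fs-1234.efs.us-west-2.amazonaws.com
        path: /files
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: efs-claim           # placeholder name
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 5Gi

and the deployment's volumes section would then reference the claim instead of the NFS server directly:

    volumes:
      - name: efs-persistent-storage
        persistentVolumeClaim:
          claimName: efs-claim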