According to the official Kubernetes documentation:
The access modes are:

- ReadWriteOnce – the volume can be mounted as read-write by a single node
- ReadOnlyMany – the volume can be mounted read-only by many nodes
- ReadWriteMany – the volume can be mounted as read-write by many nodes
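A quick way to see which access modes your volumes and claims actually declare is `kubectl`'s custom-columns output (a sketch; it assumes `kubectl` is configured against the cluster in question):

```shell
# List each PersistentVolume with its declared access modes.
kubectl get pv -o custom-columns=NAME:.metadata.name,MODES:.spec.accessModes,STORAGECLASS:.spec.storageClassName

# The same view for claims:
kubectl get pvc -o custom-columns=NAME:.metadata.name,MODES:.spec.accessModes,STATUS:.status.phase
```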
I've created one PersistentVolume with the RWO access mode and applied this PVC:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: readwriteonce-test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  storageClassName: ""
and Deployment:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: test-deployment
spec:
  selector:
    matchLabels:
      app: test-deployment
  replicas: 3
  template:
    metadata:
      labels:
        app: test-deployment
    spec:
      containers:
        - name: test-pod
          image: "gcr.io/google_containers/busybox:1.24"
          command:
            - "/bin/sh"
          args:
            - "-c"
            - "rm -R /data/*; while :; do ls /data/; name=$(date '+%s'); echo \"some data in file ${name}\" >> \"/data/${name}.txt\" ; sleep 10; cat \"/data/${name}.txt\"; done"
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: /data
              name: test-volume
      restartPolicy: Always
      volumes:
        - name: test-volume
          persistentVolumeClaim:
            claimName: readwriteonce-test
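To reproduce the observation below, you can check which node each replica landed on and spot-check the shared mount from one of the pods (a sketch; assumes `kubectl` is configured for this cluster and the labels above):

```shell
# Show the node each replica was scheduled on (NODE column):
kubectl get pods -l app=test-deployment -o wide

# Pick the first replica and list the files written by all pods:
pod=$(kubectl get pod -l app=test-deployment -o jsonpath='{.items[0].metadata.name}')
kubectl exec "$pod" -- ls /data
```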
PersistentVolume:
Name: readwriteonce-test
Labels: volume-name=readwriteonce-test
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"labels":{"volume-name":"readwriteonce-test"},"name":"readwriteo...
Finalizers: [kubernetes.io/pv-protection]
StorageClass:
Status: Bound
Claim: ***/readwriteonce-test
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 8Gi
Node Affinity: <none>
Message:
Source:
Type: NFS (an NFS mount that lasts the lifetime of a pod)
Server: ***.efs.eu-west-1.amazonaws.com
Path: /readwriteonce-test
ReadOnly: false
Events: <none>
Could anyone explain to me why there is no error with this configuration? As you can see, the pods have been scheduled on different nodes, yet each pod is able to see the files created by the other pods.
I can only confirm that I share the same observations as you on my test EKS cluster, when using EFS-based Persistent Volumes dynamically provisioned from a PVC with spec: accessModes: ReadOnlyMany.
As seen below, I can write simultaneously to the same file from two different Pods scheduled on different Nodes:
Hello from test-deployment-6f954f9f67-ljghs at 1583239308 on ip-192-168-68-xyz.us-west-2.compute.internal node
Hello from test-deployment-6f954f9f67-bl99s at 1583239308 on ip-192-168-49-abc.us-west-2.compute.internal node
Instead, I would rather expect behavior similar* to that of other PV types supporting all types of accessModes (RWO, RWX, ROX):
Warning  FailedAttachVolume  103s  attachdetach-controller  Multi-Attach error for volume "pvc-badb4724-5d5a-11ea-8395-42010aa80131" Volume is already used by pod(s) test-deployment-xyz-...
*this occurs while the scheduler is trying to schedule a second replica of the Pod using the same PV.
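For comparison, this is the kind of claim that does trigger the Multi-Attach error on EKS: one backed by an attachable (node-local) volume type such as EBS. A hedged sketch, with illustrative names, of what I used to provoke the event above:

```yaml
# Illustrative only: an EBS-backed (attachable) claim. Pods on two
# different nodes mounting this volume read-write hit the Multi-Attach error.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: rwo-ebs-test          # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2       # the default EBS StorageClass on EKS
  resources:
    requests:
      storage: 8Gi
```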
I think it lies in the nature of NFS-based storage, which is the underlying storage type for the EFS provisioner. Access modes for attachable volume types are enforced at attach time by the attachdetach-controller (as in the event above), and NFS volumes have no attach step: once the export is mounted on a node, ordinary file semantics apply.
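Those ordinary file semantics impose no single-writer restriction of their own, which is exactly what we both observed. A minimal local sketch (a hypothetical temp file stands in for a file on the shared NFS export; no NFS involved):

```shell
# Hypothetical local file standing in for a file on the shared export.
f="$(mktemp -d)/shared.txt"

# Two concurrent writers, as two pods on different nodes would be:
echo "writer-1 was here" >> "$f" &
echo "writer-2 was here" >> "$f" &
wait

# Nothing rejects the second writer; both lines end up in the file.
cat "$f"
```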
It seems we are not the only ones having issues understanding the official documentation on this matter; please check these open GitHub issues: #18714, #60903.