I installed the EFS CSI driver and got their Static Provisioning example to work: I was able to start a pod that appended to a file on the EFS volume. I could delete the pod and start another one to inspect that file and confirm the data written by the first pod was still there. But what I actually need to do is mount the volume read-only, and I am having no luck there.
Note that after I successfully ran that example, I launched an EC2 instance and in it, I mounted the EFS filesystem, then added the data that my pods need to access in a read-only fashion. Then I unmounted the EFS filesystem and terminated the instance.
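For reference, the seeding step looked roughly like the following (a sketch; the mount point, data path, and use of the amazon-efs-utils mount helper are assumptions, but the filesystem ID matches the one in the PV below):

```shell
# On the temporary EC2 instance, assuming amazon-efs-utils is installed.
sudo mkdir -p /mnt/efs
sudo mount -t efs fs-26120da7:/ /mnt/efs     # mount the EFS filesystem
sudo cp -r ~/data-for-pods /mnt/efs/mmad     # hypothetical source path for the read-only data
sudo umount /mnt/efs                         # unmount before terminating the instance
```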
Using the configuration below, which is based on the Static Provisioning example referenced above, my pod does not start Running; it remains in ContainerCreating.
Storage class:
$ kubectl get sc efs-sc -o yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"efs-sc"},"provisioner":"efs.csi.aws.com"}
  creationTimestamp: "2020-01-12T05:36:13Z"
  name: efs-sc
  resourceVersion: "809880"
  selfLink: /apis/storage.k8s.io/v1/storageclasses/efs-sc
  uid: 71ecce62-34fd-11ea-8a5f-124f4ee64e8d
provisioner: efs.csi.aws.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
Persistent Volume (this is the only PV in the cluster that uses the EFS Storage Class):
$ kubectl get pv efs-pv-ro -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"name":"efs-pv-ro"},"spec":{"accessModes":["ReadOnlyMany"],"capacity":{"storage":"5Gi"},"csi":{"driver":"efs.csi.aws.com","volumeHandle":"fs-26120da7"},"persistentVolumeReclaimPolicy":"Retain","storageClassName":"efs-sc","volumeMode":"Filesystem"}}
    pv.kubernetes.io/bound-by-controller: "yes"
  creationTimestamp: "2020-01-12T05:36:59Z"
  finalizers:
  - kubernetes.io/pv-protection
  name: efs-pv-ro
  resourceVersion: "810231"
  selfLink: /api/v1/persistentvolumes/efs-pv-ro
  uid: 8d54a80e-34fd-11ea-8a5f-124f4ee64e8d
spec:
  accessModes:
  - ReadOnlyMany
  capacity:
    storage: 5Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: efs-claim-ro
    namespace: default
    resourceVersion: "810229"
    uid: e0498cae-34fd-11ea-8a5f-124f4ee64e8d
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-26120da7
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  volumeMode: Filesystem
status:
  phase: Bound
Persistent Volume Claim (this is the only PVC in the cluster attempting to use the EFS storage class):
$ kubectl get pvc efs-claim-ro -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"efs-claim-ro","namespace":"default"},"spec":{"accessModes":["ReadOnlyMany"],"resources":{"requests":{"storage":"5Gi"}},"storageClassName":"efs-sc"}}
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
  creationTimestamp: "2020-01-12T05:39:18Z"
  finalizers:
  - kubernetes.io/pvc-protection
  name: efs-claim-ro
  namespace: default
  resourceVersion: "810234"
  selfLink: /api/v1/namespaces/default/persistentvolumeclaims/efs-claim-ro
  uid: e0498cae-34fd-11ea-8a5f-124f4ee64e8d
spec:
  accessModes:
  - ReadOnlyMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: efs-sc
  volumeMode: Filesystem
  volumeName: efs-pv-ro
status:
  accessModes:
  - ReadOnlyMany
  capacity:
    storage: 5Gi
  phase: Bound
And here is the Pod. It remains in ContainerCreating and does not switch to Running:
$ kubectl get pod efs-app -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"efs-app","namespace":"default"},"spec":{"containers":[{"args":["infinity"],"command":["sleep"],"image":"centos","name":"app","volumeMounts":[{"mountPath":"/data","name":"persistent-storage","subPath":"mmad"}]}],"volumes":[{"name":"persistent-storage","persistentVolumeClaim":{"claimName":"efs-claim-ro"}}]}}
    kubernetes.io/psp: eks.privileged
  creationTimestamp: "2020-01-12T06:07:08Z"
  name: efs-app
  namespace: default
  resourceVersion: "813420"
  selfLink: /api/v1/namespaces/default/pods/efs-app
  uid: c3b8421b-3501-11ea-b164-0a9483e894ed
spec:
  containers:
  - args:
    - infinity
    command:
    - sleep
    image: centos
    imagePullPolicy: Always
    name: app
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /data
      name: persistent-storage
      subPath: mmad
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-z97dh
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: ip-192-168-254-51.ec2.internal
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: efs-claim-ro
  - name: default-token-z97dh
    secret:
      defaultMode: 420
      secretName: default-token-z97dh
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2020-01-12T06:07:08Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2020-01-12T06:07:08Z"
    message: 'containers with unready status: [app]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2020-01-12T06:07:08Z"
    message: 'containers with unready status: [app]'
    reason: ContainersNotReady
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2020-01-12T06:07:08Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - image: centos
    imageID: ""
    lastState: {}
    name: app
    ready: false
    restartCount: 0
    state:
      waiting:
        reason: ContainerCreating
  hostIP: 192.168.254.51
  phase: Pending
  qosClass: BestEffort
  startTime: "2020-01-12T06:07:08Z"
I am not sure whether subPath will work with this configuration, but the same problem happens whether or not subPath is in the Pod configuration.
The problem does seem to be with the volume. If I comment out the volumes and volumeMounts sections, the pod runs.
It seems that the PVC has bound with the correct PV, but the pod is not starting. I'm not seeing a clue in any of the output above, but maybe I'm missing something?
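The mount failure reason usually surfaces as pod events rather than in the `get -o yaml` output above. A quick way to check (standard kubectl commands; the pod name matches the one above):

```shell
# The Events section at the bottom of describe output shows FailedMount
# messages from the kubelet and the CSI driver.
kubectl describe pod efs-app
kubectl get events --sort-by=.metadata.creationTimestamp
```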
Kubernetes version:
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.8", GitCommit:"211047e9a1922595eaa3a1127ed365e9299a6c23", GitTreeState:"clean", BuildDate:"2019-10-15T12:11:03Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.9-eks-c0eccc", GitCommit:"c0eccca51d7500bb03b2f163dd8d534ffeb2f7a2", GitTreeState:"clean", BuildDate:"2019-12-22T23:14:11Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
aws-efs-csi-driver version: v0.2.0
Note that one of the requirements is Golang version 1.13.4+, but you have go1.12.12, so you have to update it. If you are upgrading from an older version of Go, you must first remove the existing version. Take a look here: upgrading-golang.
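The upgrade can be sketched as follows (a sketch, assuming a Linux amd64 host and the standard /usr/local/go install location; adjust the version and archive name to your platform):

```shell
go version                                     # check the currently installed version
sudo rm -rf /usr/local/go                      # remove the existing installation first
curl -LO https://dl.google.com/go/go1.13.4.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go1.13.4.linux-amd64.tar.gz
```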
This driver is supported on Kubernetes version 1.14 and later Amazon EKS clusters and worker nodes. Alpha features of the Amazon EFS CSI Driver are not supported on Amazon EKS clusters. This is the same issue you are seeing: a read-only volume cannot be mounted in a Kubernetes pod using the EFS CSI driver in AWS EKS. Try changing the access mode to:
accessModes:
  - ReadWriteMany
You can find more information here: efs-csi-driver.
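With ReadWriteMany on the PV and PVC, you can still keep the data read-only for the pod by setting readOnly: true where the volume is consumed. A sketch (the claim name efs-claim is an assumption; readOnly on volumeMounts and persistentVolumeClaim is a standard Kubernetes mechanism, not specific to this driver):

```yaml
# Sketch: the PV/PVC use ReadWriteMany; read-only is enforced at the mount.
apiVersion: v1
kind: Pod
metadata:
  name: efs-app
spec:
  containers:
  - name: app
    image: centos
    command: ["sleep"]
    args: ["infinity"]
    volumeMounts:
    - mountPath: /data
      name: persistent-storage
      readOnly: true              # read-only enforced here, not via the access mode
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: efs-claim        # hypothetical ReadWriteMany claim
      readOnly: true
```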
Make sure that when creating the EFS filesystem, it is accessible from the Kubernetes cluster. This can be achieved by creating the filesystem inside the same VPC as the Kubernetes cluster or by using VPC peering.
Static provisioning - the EFS filesystem needs to be created manually first; it can then be mounted inside a container as a persistent volume (PV) using the driver. Mount options - mount options can be specified in the persistent volume (PV) to define how the volume should be mounted. Aside from normal mount options, you can also specify tls as a mount option to enable encryption in transit to the EFS filesystem.
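For example, the tls mount option goes in the PV spec (a sketch based on the driver's static provisioning example; the PV name is an assumption, the filesystem ID matches the one in the question):

```yaml
# Sketch: enabling encryption in transit via the tls mount option.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  mountOptions:
  - tls                          # encryption in transit
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-26120da7
```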
Because Amazon EFS is an elastic file system, it does not enforce any file system capacity limits. The actual storage capacity value in persistent volumes and persistent volume claims is not used when creating the file system. However, since storage capacity is a required field in Kubernetes, you must specify a valid value, such as 5Gi in this example. This value does not limit the size of your Amazon EFS file system.