I have an EKS cluster with 2 nodes, which I created with the command below:
time eksctl create cluster --name=eks-spinnaker --ssh-access=true --ssh-public-key=testkeyG --nodes=2 --region=ap-southeast-2 --write-kubeconfig=false
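The cluster came up fine; since --write-kubeconfig=false was passed, the kubeconfig has to be written separately before checking the nodes:
eksctl utils write-kubeconfig --cluster=eks-spinnaker --region=ap-southeast-2
kubectl get nodes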
I then ssh'd to the two newly created nodes and mounted EFS as below, so I can use it for a PersistentVolume:
sudo yum update -y
EFS_FILE_SYSTEM_DNS_NAME=fs-efb24ad7.efs.ap-southeast-2.amazonaws.com
sudo mkdir /efs-data
sudo mount -t nfs -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport $EFS_FILE_SYSTEM_DNS_NAME:/ /efs-data
df -Ph
cd /efs-data/
sudo chmod -R 777 .
touch abc
The mount is clearly visible on both nodes:
Node01
[ec2-user@ip-192-168-74-31 efs-data]$ df -Ph |grep -i efs
fs-efb24ad7.efs.ap-southeast-2.amazonaws.com:/ 8.0E 0 8.0E 0% /efs-data
[ec2-user@ip-192-168-74-31 efs-data]$
Node02
[ec2-user@ip-192-168-21-167 efs-data]$ df -Ph |grep -i efs
fs-efb24ad7.efs.ap-southeast-2.amazonaws.com:/ 8.0E 0 8.0E 0% /efs-data
[ec2-user@ip-192-168-21-167 efs-data]$
I am now spinning up Jenkins as a Kubernetes deployment that uses /efs-data as a PersistentVolume, but I'm having no luck.
[centos@ip-10-0-0-61 storage]$ kubectl get pv,pvc -n jenkins
NAME                      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
persistentvolume/efs-pv   5Gi        RWX            Retain           Bound    jenkins/efs-claim   efs-sc                  7h6m

NAME                              STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/efs-claim   Bound    efs-pv   5Gi        RWX            efs-sc         7h6m
[centos@ip-10-0-0-61 storage]$
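For reference, the PV and PVC were created from manifests roughly like this (a sketch, the exact manifests may differ; the key part is the volumeHandle, which is <file-system-id>:<path-inside-EFS>):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-efb24ad7:/efs-data  # file system ID, then the path inside EFS
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
  namespace: jenkins
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi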
I added the Helm repo and installed the chart:
helm repo add jenkinsci https://charts.jenkins.io
helm install jenkins jenkinsci/jenkins --set rbac.create=true,master.servicePort=8081,master.serviceType=LoadBalancer,persistence.existingClaim=efs-claim -n jenkins
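The rollout can be watched while the chart comes up:
kubectl get pods -n jenkins -w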
Unfortunately, kubectl describe pods <pod name> -n jenkins gives no clue as to what is going wrong:
kubectl describe pods jenkins-c7498bcdf-4jk9w -n jenkins
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedMount 26m (x6 over 117m) kubelet, ip-192-168-21-167.ap-southeast-2.compute.internal Unable to attach or mount volumes: unmounted volumes=[jenkins-home], unattached volumes=[plugin-dir jenkins-token-xgzf4 sc-config-volume tmp jenkins-home jenkins-config plugins]: timed out waiting for the condition
Warning FailedMount 17m (x8 over 110m) kubelet, ip-192-168-21-167.ap-southeast-2.compute.internal Unable to attach or mount volumes: unmounted volumes=[jenkins-home], unattached volumes=[plugins plugin-dir jenkins-token-xgzf4 sc-config-volume tmp jenkins-home jenkins-config]: timed out waiting for the condition
Warning FailedMount 6m12s (x11 over 128m) kubelet, ip-192-168-21-167.ap-southeast-2.compute.internal Unable to attach or mount volumes: unmounted volumes=[jenkins-home], unattached volumes=[tmp jenkins-home jenkins-config plugins plugin-dir jenkins-token-xgzf4 sc-config-volume]: timed out waiting for the condition
Warning FailedMount 110s (x72 over 132m) kubelet, ip-192-168-21-167.ap-southeast-2.compute.internal MountVolume.SetUp failed for volume "efs-pv" : kubernetes.io/csi: mounter.SetupAt failed: rpc error: code = Internal desc = Could not mount "fs-efb24ad7:/efs-data" at "/var/lib/kubelet/pods/66e53953-8678-404c-beb6-d21908cc8dee/volumes/kubernetes.io~csi/efs-pv/mount": mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t efs fs-efb24ad7:/efs-data /var/lib/kubelet/pods/66e53953-8678-404c-beb6-d21908cc8dee/volumes/kubernetes.io~csi/efs-pv/mount
Output: mount.nfs4: mounting fs-efb24ad7.efs.ap-southeast-2.amazonaws.com:/efs-data failed, reason given by server: No such file or directory
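The same failure reproduces by hand from either node (using a scratch mount point just for this test):
sudo mkdir -p /mnt/efs-test
sudo mount -t nfs -o nfsvers=4.1 fs-efb24ad7.efs.ap-southeast-2.amazonaws.com:/efs-data /mnt/efs-test
# mount.nfs4: mounting fs-efb24ad7.efs.ap-southeast-2.amazonaws.com:/efs-data failed,
# reason given by server: No such file or directory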
So I ssh'd to one of the EKS worker nodes where I had mounted EFS; the node's system log shows the same errors:
Sep 5 16:19:17 ip-192-168-21-167 kubelet: {"level":"info","ts":"2020-09-05T16:19:17.390Z","caller":"/usr/local/go/src/runtime/proc.go:203","msg":"CNI Plugin version: v1.6.3 ..."}
Sep 5 16:19:22 ip-192-168-21-167 kubelet: {"level":"info","ts":"2020-09-05T16:19:22.415Z","caller":"/usr/local/go/src/runtime/proc.go:203","msg":"CNI Plugin version: v1.6.3 ..."}
Sep 5 16:19:22 ip-192-168-21-167 su: (to root) ec2-user on pts/0
Sep 5 16:19:27 ip-192-168-21-167 kubelet: {"level":"info","ts":"2020-09-05T16:19:27.440Z","caller":"/usr/local/go/src/runtime/proc.go:203","msg":"CNI Plugin version: v1.6.3 ..."}
Sep 5 16:19:27 ip-192-168-21-167 kubelet: I0905 16:19:27.732162 3861 csi_attacher.go:310] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Sep 5 16:19:27 ip-192-168-21-167 kubelet: I0905 16:19:27.732616 3861 operation_generator.go:587] MountVolume.MountDevice succeeded for volume "efs-pv" (UniqueName: "kubernetes.io/csi/efs.csi.aws.com^fs-efb24ad7:/efs-data") pod "jenkins-c7498bcdf-4jk9w" (UID: "66e53953-8678-404c-beb6-d21908cc8dee") device mount path "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/efs-pv/globalmount"
Sep 5 16:19:29 ip-192-168-21-167 kubelet: E0905 16:19:29.715185 3861 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/efs.csi.aws.com^fs-efb24ad7:/efs-data podName: nodeName:}" failed. No retries permitted until 2020-09-05 16:21:31.715151408 +0000 UTC m=+86869.996160364 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"efs-pv\" (UniqueName: \"kubernetes.io/csi/efs.csi.aws.com^fs-efb24ad7:/efs-data\") pod \"jenkins-c7498bcdf-4jk9w\" (UID: \"66e53953-8678-404c-beb6-d21908cc8dee\") : kubernetes.io/csi: mounter.SetupAt failed: rpc error: code = Internal desc = Could not mount \"fs-efb24ad7:/efs-data\" at \"/var/lib/kubelet/pods/66e53953-8678-404c-beb6-d21908cc8dee/volumes/kubernetes.io~csi/efs-pv/mount\": mount failed: exit status 32\nMounting command: mount\nMounting arguments: -t efs fs-efb24ad7:/efs-data /var/lib/kubelet/pods/66e53953-8678-404c-beb6-d21908cc8dee/volumes/kubernetes.io~csi/efs-pv/mount\nOutput: mount.nfs4: mounting fs-efb24ad7.efs.ap-southeast-2.amazonaws.com:/efs-data failed, reason given by server: No such file or directory\n"
Sep 5 16:19:32 ip-192-168-21-167 kubelet: {"level":"info","ts":"2020-09-05T16:19:32.458Z","caller":"/usr/local/go/src/runtime/proc.go:203","msg":"CNI Plugin version: v1.6.3 ..."}
Sep 5 16:19:35 ip-192-168-21-167 dhclient[2857]: XMT: Solicit on eth0, interval 111090ms.
Sep 5 16:19:37 ip-192-168-21-167 kubelet: {"level":"info","ts":"2020-09-05T16:19:37.477Z","caller":"/usr/local/go/src/runtime/proc.go:203","msg":"CNI Plugin version: v1.6.3 ..."}
The permissions on /efs-data are below:
[root@ip-192-168-21-167 ~]# ls -ld /efs-data/
drwxrwxrwx 4 root root 6144 Sep 5 13:19 /efs-data/
[root@ip-192-168-21-167 ~]#
Exhausted... Please let me know what the issue is and how to resolve it.
The issue is that /efs-data doesn't actually exist inside your EFS file system. Jenkins is trying to mount that directory (from the log output):
-t efs fs-efb24ad7:/efs-data /var/lib/kubelet/pods/66e53953-8678-404c-beb6-d21908cc8dee/volumes/kubernetes.io~csi/efs-pv/mount
So the message:
Output: mount.nfs4: mounting fs-efb24ad7.efs.ap-southeast-2.amazonaws.com:/efs-data failed, reason given by server: No such file or directory
is correct.
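You can see this from either node, because the root of the EFS file system is already mounted at /efs-data there:
ls /efs-data/                # contents of the EFS root: no efs-data directory here yet
ls -ld /efs-data/efs-data    # fails: No such file or directory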
When you run:
sudo mount -t nfs -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport $EFS_FILE_SYSTEM_DNS_NAME:/ /efs-data
it mounts the root (/) of the EFS file system onto your local /efs-data directory. The name /efs-data exists only on the node, not inside EFS itself. To create it inside EFS, you can simply run (on a node where EFS is already mounted):
cd /efs-data     # the root of the EFS file system, as mounted above
mkdir efs-data   # creates a directory named efs-data at the EFS root
Then /efs-data will actually exist inside the EFS file system and the CSI mount will succeed.
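A quick check from the node, plus an optional pod delete to skip kubelet's retry back-off (the pod name is taken from your describe output):
ls -ld /efs-data/efs-data                              # now exists at the EFS root
kubectl delete pod jenkins-c7498bcdf-4jk9w -n jenkins  # kubelet retries mounts on its own; this just forces it sooner
Alternatively, you could change the PV's volumeHandle from fs-efb24ad7:/efs-data to fs-efb24ad7:/ so the driver mounts the file-system root directly; either way, the path after the colon has to exist inside EFS.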
✌️