I have an NFS Helm chart. It is one of the sub-charts of an application that has five more sub-charts. Two of those charts need shared storage, for which I am using NFS. On GCP, when I point the PV at the NFS service name, it works:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ include "nfs.name" . }}
spec:
  capacity:
    storage: {{ .Values.persistence.nfsVolumes.size }}
  accessModes:
    - {{ .Values.persistence.nfsVolumes.accessModes }}
  mountOptions:
    - nfsvers=4.1
  nfs:
    server: nfs.default.svc.cluster.local  # nfs is from svc {{ include "nfs.name" . }}
    path: "/opt/shared-shibboleth-idp"
```
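For context, the Service that `nfs.default.svc.cluster.local` resolves to comes from the same chart and looks roughly like this (a sketch; the selector label is an assumption about my chart, and the ports are the standard NFS ports):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ include "nfs.name" . }}
  namespace: default
spec:
  selector:
    app: {{ include "nfs.name" . }}  # assumed label on the NFS server pod
  ports:
    - name: nfs
      port: 2049      # NFSv4
    - name: mountd
      port: 20048     # used by NFSv3 mounts
    - name: rpcbind
      port: 111
```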
But the same chart doesn't work on AWS EKS: the volume can't be mounted because the connection times out. When I change the server to the load balancer's hostname,

```yaml
server: a4eab2d4aef2311e9a2880227e884517-1524131093.us-west-2.elb.amazonaws.com
```

I still get connection timed out.
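For what it's worth, reachability of the NFS port can be checked from inside the cluster with a throwaway pod like this (a sketch; the pod name `nfs-test` is arbitrary, and 2049 is the standard NFSv4 port):

```sh
# Spin up a one-off busybox pod and try a TCP connection to the NFS service
kubectl run nfs-test --rm -it --image=busybox --restart=Never -- \
  sh -c 'nc -w 5 nfs.default.svc.cluster.local 2049 < /dev/null && echo reachable || echo timed out'
```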
The mounts themselves are fine, since the same setup works on GCP. What am I doing wrong?