Is NFS hard- or soft-mounted when making a Pod in Kubernetes with an NFS-volume?
As I understand it, this might have an impact on how timeouts are handled?
Example yaml:
apiVersion: v1
kind: Pod
metadata:
  name: nfs-web
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - name: web
          containerPort: 80
      volumeMounts:
        - name: nfs
          mountPath: "/usr/share/nginx/html"
  volumes:
    - name: nfs
      nfs:
        server: nfs-server.default.kube.local
        path: "/"
I believe that NFS volumes in a Pod use the defaults provided by the NFS implementation of the node's OS (the kubelet performs the mount on the node, not inside the container). I can't be 100% certain (I'm not deeply familiar with the code), but in my experience the volumes are mounted with the hard option, which is the default in most NFS implementations (see man nfs for details on your OS; soft is often considered dangerous, since a timed-out operation returns an error to the application instead of retrying indefinitely).
The NFSVolumeSource struct doesn't appear to have the ability to know about mount settings (except read-only) and I don't see any hard-coded options in the NFS volume code.
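If you do need control over the mount options (for example to choose soft or tune the timeout), a PersistentVolume exposes a mountOptions field that the kubelet passes through to the mount. A sketch under assumed names; the PV name, capacity, and option values below are placeholders you would adjust:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-web-pv            # hypothetical name
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  mountOptions:               # handed to mount(8) by the kubelet
    - hard                    # or "soft", with the caveats above
    - timeo=600               # deciseconds before a retransmission
    - retrans=2
  nfs:
    server: nfs-server.default.kube.local
    path: "/"
```

The Pod would then consume this PersistentVolume through a PersistentVolumeClaim rather than an inline nfs volume, which is how you get past the limitation of NFSVolumeSource.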
You can check on your own Pods with something like this to gather the NFS options in use:
$ kubectl exec nfs-web-<XXXXX> -c web -- mount|grep nfs