So I'm working with a Kubernetes cluster deployed on top of OpenStack VMs. The VMs have access to an NFS share that lives on a separate network. Name resolution for the NFS server is handled by dnsmasq, and the share is then mounted as usual via an /etc/fstab entry. This is how things look on a typical VM:
[root@myhost ~]# cat /etc/dnsmasq.d/10-nfs
server=/mynfsserver/10.35.105.240
[root@myhost ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Mon Jun 12 11:54:36 2017
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
mynfsserver:/nfs_share /nfs_share nfs defaults 0 0
[root@myhost ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether fa:16:3e:9a:84:ea brd ff:ff:ff:ff:ff:ff
inet 192.168.0.15/24 brd 192.168.0.255 scope global dynamic eth0
valid_lft 78341sec preferred_lft 78341sec
inet6 fe80::f816:3eff:fe9a:84ea/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether fa:16:3e:2c:bd:88 brd ff:ff:ff:ff:ff:ff
inet 10.35.105.10/24 brd 10.35.105.255 scope global dynamic eth1
valid_lft 69621sec preferred_lft 69621sec
inet6 fe80::f816:3eff:fe2c:bd88/64 scope link
valid_lft forever preferred_lft forever
6: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:f8:50:0c:69 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:f8ff:fe50:c69/64 scope link
valid_lft forever preferred_lft forever
So what I'd like to do now is launch some pods that have access to the underlying NFS share, and I'm not really sure how to do that. The pods I'm launching can only see the internal pod network (I'm using Calico for that), not the host's networks. What configuration would I need in order to mount the NFS share properly inside a pod? Thanks in advance.
I do not have experience with OpenShift, so I am not sure how environment-specific this is. But to access NFS from pods, you need to ensure that the server is visible to the pods and that the NFS export already exists, since the pod only consumes an existing export. If the traditional approaches fail, you can also try playing with routes.
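A quick way to check that visibility (a sketch; the busybox image and the address 10.35.106.25 are placeholders, since the actual NFS server IP isn't shown in the question) is to start a throwaway pod and probe the server from inside the pod network:
[root@myhost ~]# kubectl run nfs-test -it --rm --restart=Never --image=busybox -- sh
/ # ping -c 3 10.35.106.25
/ # telnet 10.35.106.25 2049
If the ping fails while the node itself can reach the server, the pod network is not being routed to the NFS network, and that is what needs fixing first.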
Pods always see only their own network unless something outside it is exposed to them, for example through a Service.
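A minimal sketch of such a Service uses a selector-less Service plus a hand-written Endpoints object (nfs-server is a hypothetical name and 10.35.106.25 a placeholder for the external server's address):
apiVersion: v1
kind: Service
metadata:
  name: nfs-server
spec:
  ports:
  - port: 2049                # main NFS port; NFSv3 also needs portmapper/mountd
---
apiVersion: v1
kind: Endpoints
metadata:
  name: nfs-server            # must match the Service name exactly
subsets:
- addresses:
  - ip: 10.35.106.25          # placeholder: the external NFS server
  ports:
  - port: 2049
With that in place, pods can reach the server at the Service's cluster IP or, via cluster DNS, as nfs-server.default.svc (assuming the default namespace).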
First you will have to create a Service to expose the NFS server to the pod(s), as in the sketch above. Then run the NFS server image in the desired pod.
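For the case in the question, though, where the export already exists outside the cluster, a pod can also reference it directly with an nfs volume (a sketch; mynfsserver and /nfs_share are taken from the fstab above, while nfs-client and busybox are hypothetical). Note that the mount is performed by the kubelet on the node, not inside the container, so it is the node's dnsmasq entry that resolves the server name here:
apiVersion: v1
kind: Pod
metadata:
  name: nfs-client
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: nfs-vol
      mountPath: /nfs_share   # where the share appears inside the container
  volumes:
  - name: nfs-vol
    nfs:
      server: mynfsserver     # resolved on the node, via the host's dnsmasq
      path: /nfs_share        # the exported path on the server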
You can find a detailed how-to here. I was also able to find this in the official OpenShift documentation; it seems to be a pretty standard setup, and the requirement is similar on other platforms:
Each NFS volume must be mountable by all schedulable nodes in the cluster.
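In practice that requirement is usually satisfied with a PersistentVolume/PersistentVolumeClaim pair instead of inlining the nfs volume into every pod (a sketch; nfs-pv and nfs-pvc are hypothetical names, and the 5Gi figure is nominal, since NFS does not enforce the capacity):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 5Gi              # nominal; not enforced for NFS
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: mynfsserver
    path: /nfs_share
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
Pods then reference the claim with a persistentVolumeClaim volume, and the scheduler is free to place them on any node, which is exactly why every schedulable node must be able to mount the share.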