Connecting a remote filesystem securely to a Kubernetes cluster

10/3/2019

Here is the situation I am facing: I work for a company designing a product in which, due to legal constraints, certain pieces of data need to reside on physical machines in specific geopolitical jurisdictions. For example, some of our data must live on machines within the borders of the "Vulgarian Federation".

We are using Kubernetes to host the system, and will probably settle on either GKE or AWS as the cloud provider.

The solution I have come up with creates a pod hosting a locale-specific MongoDB instance (say, Vulgaria-MongoDB), which then stores its data on physical drives in that locale. My plan is to export the storage from the Vulgarian machine to our Kubernetes cluster over NFS.
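For concreteness, the Kubernetes side of that plan would look roughly like the manifests below: a PersistentVolume backed by the NFS export, and a PersistentVolumeClaim for the Vulgaria-MongoDB pod to mount. The server name, export path, and capacity are placeholders, not our real values:

# Hypothetical NFS-backed PersistentVolume for the locale-bound data.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: vulgaria-mongodb-pv
spec:
  capacity:
    storage: 100Gi                       # placeholder size
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - nfsvers=4.2
  nfs:
    server: nfs.vulgaria.example.com     # placeholder hostname
    path: /exports/mongodb               # placeholder export path
---
# Claim bound explicitly to the volume above; the MongoDB pod would
# reference this claim in its volumes section.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vulgaria-mongodb-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""                   # opt out of dynamic provisioning
  volumeName: vulgaria-mongodb-pv
  resources:
    requests:
      storage: 100Gi

From the pod's point of view this is an ordinary volume; the geographic pinning happens entirely at the PV layer.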

The problem I am facing is that I cannot find a secure means of achieving this NFS export. I know that NFSv4 supports Kerberos, but I do not believe NFS was ever intended to be used over the open internet, even with Kerberos. Another option would be to stand up a VPN server in the cluster and add the remote machine to it. I have also considered SSHFS, but I think it would be too unstable for this particular use case. What would be an efficient and secure way to accomplish this task?
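For reference, if the Kerberos route were viable, the change to the manifest above would be small; the operational burden is that every node that can mount the volume has to be enrolled in the Kerberos realm with a valid keytab, which I do not control on managed node images. The export details below are made up:

# Delta against the PersistentVolume above: Kerberos-secured NFSv4.
spec:
  mountOptions:
    - nfsvers=4.2
    - sec=krb5p    # Kerberos authentication, integrity, and encryption
# Matching export on the Vulgarian host, in /etc/exports:
#   /exports/mongodb  *(rw,sync,sec=krb5p)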

-- Austin Benesh
kubernetes
nfs
ssl

1 Answer

10/4/2019

As mentioned in the comment, running the database far away from its storage is likely to result in all kinds of weirdness. Modern DB engines tolerate some storage latency, but generally not tens of seconds. If you must, though, the VPN approach is the right one: some kind of protected network bridge. I don't know of any remote storage protocol I would trust over the open internet.
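To make that concrete: with a tunnel such as WireGuard or OpenVPN up between the cluster network and the remote host, only the server address in a manifest like the one in the question changes, so NFS traffic never touches the public internet. The address below is a placeholder tunnel IP:

# Sketch: reach the NFS server via its VPN-internal address.
spec:
  nfs:
    server: 10.8.0.1          # placeholder tunnel address of the NFS host
    path: /exports/mongodb
# Firewall the NFS port (2049) on the remote host so it is only
# reachable over the tunnel interface, never on the public address.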

-- coderanger
Source: StackOverflow