Isolating multiple mounted user-space file systems in a running microservice

2/24/2020

My situation: I have a microservice running on Ubuntu 18.04 on GKE that evaluates user code. Every time a user logs into her project, the service receives the user ID and project ID and mounts the corresponding GCS bucket through a user-space file system based on those IDs. The service can be accessed by multiple users at the same time.

My goal: How can I achieve user isolation so that each user "lives" in their own file system and can't "see" other users' mounted file systems?

Ideas:

  • Run a Docker container inside a Docker container
  • Start pods on demand every time a new user logs in
  • Isolate users on the OS level
-- Xoroxoxoxoxoso
docker
google-cloud-platform
google-kubernetes-engine
kubernetes

1 Answer

2/25/2020

What about letting your application spawn a Pod for each user, using a SecurityContext?

You could specify fsGroup (from the docs: "volumes that support ownership management are modified to be owned and writable by the GID specified in fsGroup") to enforce the segregation you want.
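A minimal sketch of what such a per-user Pod could look like (all names, labels, the image, and the GID/UID values are illustrative assumptions; the volume is a stand-in for however you actually surface the user's GCS bucket, e.g. a sidecar running the FUSE mount or a CSI driver):

```yaml
apiVersion: v1
kind: Pod
metadata:
  # Hypothetical naming/label scheme so your application can find this Pod later
  name: sandbox-user-1234
  labels:
    app: user-sandbox
    user-id: "1234"
    project-id: "5678"
spec:
  securityContext:
    # Volumes that support ownership management are made owned and
    # writable by this supplemental GID
    fsGroup: 2000
    runAsUser: 1000
    runAsNonRoot: true
  containers:
  - name: evaluator
    image: your-evaluator-image:latest   # placeholder image
    volumeMounts:
    - name: user-bucket
      mountPath: /mnt/project
  volumes:
  - name: user-bucket
    emptyDir: {}   # stand-in: replace with your bucket-mounting mechanism
```

Because each user gets a separate Pod, the mount is only visible inside that Pod's mount namespace, so users cannot see each other's file systems.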

The challenging part is cleaning up Pods that are no longer used, so the clean-up must be supervised by your application or, on logout (a guess, since I don't have many details about your architecture), performed by taking advantage of the label system provided by Kubernetes.
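For example, if your application attaches labels such as a user ID to each Pod it creates, the clean-up reduces to a label-selector query (a sketch against a hypothetical label scheme, run by your application on logout or by a periodic reaper; requires access to a live cluster):

```shell
# Delete the sandbox Pod(s) belonging to a user who logged out
kubectl delete pods -l app=user-sandbox,user-id=1234

# Or list all sandbox Pods so a reaper can remove ones past a TTL
kubectl get pods -l app=user-sandbox -o name
```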

-- prometherion
Source: StackOverflow