How to get files into a pod?

3/2/2017

I have a fully functioning Kubernetes cluster with one master and one worker, running on CoreOS.

Everything is working and my pods and services are running fine. Now I have no clue how to proceed with my webserver idea.

Before I go further: I don't have any configs yet for the idea I'm about to explain; I have just done a lot of research.

When you set up a pod (nginx) with a service, you get the default nginx page. After that you can set up a volume mount with a host volume (a volume mapping from the host to the container).

But let's say I want to separate every site (multiple sites, each in a different pod): how can I let my users add files to their pod's nginx document root? Running FTP on the CoreOS node bypasses the Kubernetes way of doing things and adds security vulnerabilities.

If someone can help me shed some light on this issue, that would be great.

Thanks for your time.

-- Jeffrey Descan
coreos
kubernetes

2 Answers

3/3/2017

I'm assuming that you want to have multiple nginx servers running. The content of each nginx server is managed by a different admin (you called them users).

TL;DR:

Option 1: Each admin needs to build their own nginx docker image every time the static files change and deploy that new image. This is the approach if you consider the static files to be part of the source code of the nginx application.

Option 2: Use a persistent volume for nginx. The init script of the nginx image should sync all of its files from something like S3 onto that volume and then start nginx.


Before you proceed with building an application on Kubernetes, the most important thing is to separate your services into two conceptual categories, and to give up the desire to touch the underlying nodes directly:

1) Stateless: These are services that are built by the developers and can be released. They can be stopped, started, and moved from one node to another, their filesystem can be reset during a restart, and they will work perfectly fine. The majority of your web services will fit this category.

2) Stateful: These services cannot be stopped and restarted willy-nilly like the ones above. Primarily, their underlying filesystem must be persistent and remain the same across runs of the service. Databases, file servers and similar services are in this category. These need special care and should use k8s persistent volumes and, more recently, stateful sets.
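
To make the stateful category concrete, here is a minimal StatefulSet sketch with a per-replica persistent volume. The names, image and storage size are placeholders, not something from this answer:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: postgres                   # hypothetical database service
    spec:
      serviceName: postgres
      replicas: 1
      selector:
        matchLabels:
          app: postgres
      template:
        metadata:
          labels:
            app: postgres
        spec:
          containers:
          - name: postgres
            image: postgres:13
            env:
            - name: POSTGRES_PASSWORD  # configure with env-vars (use a Secret in practice)
              value: change-me
            volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
      volumeClaimTemplates:            # each replica gets its own persistent volume
      - metadata:
          name: data
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 10Gi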

Typical application:

  • nginx: build the nginx.conf into the docker image, and deploy it as a stateless service (see the sketch after this list)
  • rails/nodejs/python service: build the source code into the docker image, configure with env-vars, deploy as a stateless service
  • database: mount a persistent volume, configure with env-vars, deploy as a stateful service.
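
A minimal sketch of the stateless nginx case, assuming a hypothetical site called site-a and an image you have built and pushed yourself:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: site-a-nginx
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: site-a
      template:
        metadata:
          labels:
            app: site-a
        spec:
          containers:
          - name: nginx
            image: registry.example.com/site-a-nginx:1.0   # nginx.conf and content baked in
            ports:
            - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: site-a
    spec:
      selector:
        app: site-a
      ports:
      - port: 80
        targetPort: 80

Because nothing on the pod's filesystem matters here, Kubernetes is free to reschedule or scale these pods at any time.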

Separate sites:

  • Typically, I think in terms of a k8s deployment and a k8s service. Each site can be its own k8s deployment and k8s service pair. You can then expose them in separate ways (different external DNS names/IPs).
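
One possible way to give each site its own external DNS name (the answer doesn't prescribe a mechanism; host-based Ingress rules are just one option, and the hostnames below are placeholders):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: sites
    spec:
      rules:
      - host: site-a.example.com        # routes to the site-a Service
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: site-a
                port:
                  number: 80
      - host: site-b.example.com        # routes to a second site's Service
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: site-b
                port:
                  number: 80

Alternatively, each Service can be of type LoadBalancer so that every site gets its own external IP.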

Application users storing files:

  • This is firmly in the category of a stateful service. Use a persistent volume mounted at a /media-style directory.
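
A sketch of that, with placeholder names; in practice the pod spec below would live inside a Deployment or StatefulSet template rather than a bare Pod:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: site-a-media
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 20Gi
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: site-a-app
    spec:
      containers:
      - name: app
        image: registry.example.com/site-a-app:1.0   # hypothetical app image
        volumeMounts:
        - name: media
          mountPath: /media            # user uploads land here and survive restarts
      volumes:
      - name: media
        persistentVolumeClaim:
          claimName: site-a-media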

Developers changing files:

  • Say developers or admins want to use FTP to change the files that nginx serves. The correct pattern is to build a docker image with the new files and then use that docker image. If there are too many files, and you don't consider those files to be part of the 'source' of nginx, then use something like S3 and a persistent volume. In your docker image's init script, don't start nginx directly: contact S3, sync all your files onto your persistent volume, then start nginx.
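
The answer describes doing the sync inside the image's own init script. The same ordering can also be expressed with a Kubernetes init container; this is just one way to sketch it, the bucket name and images are placeholders, and AWS credentials (a Secret or node IAM role) are omitted for brevity:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: site-a-nginx
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: site-a
      template:
        metadata:
          labels:
            app: site-a
        spec:
          initContainers:
          - name: sync-content
            image: amazon/aws-cli:latest               # pin a real version in practice
            command: ["aws", "s3", "sync", "s3://site-a-content/", "/usr/share/nginx/html/"]
            volumeMounts:
            - name: content
              mountPath: /usr/share/nginx/html
          containers:
          - name: nginx                                # starts only after the sync finishes
            image: nginx:1.25
            ports:
            - containerPort: 80
            volumeMounts:
            - name: content
              mountPath: /usr/share/nginx/html
          volumes:
          - name: content
            persistentVolumeClaim:
              claimName: site-a-content                # a PVC like the one sketched above
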
-- iamnat
Source: StackOverflow

3/3/2017

While the options and reasoning listed by iamnat are right, there's at least one more option to add to the list. You could consider using ConfigMap objects: maintain your files within the ConfigMap and mount them into your containers.
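
A minimal sketch of that idea, with placeholder names; note that ConfigMaps are size-limited (about 1 MiB), so this suits configuration and small static files rather than a general document root:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: site-a-html
    data:
      index.html: |
        <html><body><h1>Site A</h1></body></html>
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: site-a-nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        volumeMounts:
        - name: content
          mountPath: /usr/share/nginx/html   # each ConfigMap key shows up here as a file
      volumes:
      - name: content
        configMap:
          name: site-a-html

Updating the ConfigMap changes the mounted files (after the kubelet's sync delay) without rebuilding any image.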

A good example can be found in the official documentation: check the "Real World Example: Configuring Redis" section to get some actionable input.

-- pagid
Source: StackOverflow