Migrate to Kubernetes

6/5/2018

We're planning to migrate our software to run on Kubernetes with autoscaling. This is our current infrastructure:

  1. PHP and Apache are running on a Google Compute Engine n1-standard-4 instance (4 vCPUs, 15 GB memory)
  2. MySQL is running in Google Cloud SQL
  3. Data files (CSV, PDF) and the code are stored on a single SSD persistent disk

I found many posts that recommend storing the data files in Google Cloud Storage and using the API to fetch files and upload them to the bucket. We have very limited time, so I decided to use NFS to share the data files across the pods. The problem is that NFS is slow: it's around 100 MB/s when I copy a file with pv, while iperf reports 1.96 Gbits/sec. Do you know how to achieve the same result without implementing Cloud Storage, or how to increase the NFS speed?
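For context, an NFS share like this is typically exposed to the pods through a PersistentVolume. The sketch below is illustrative, not my exact config: the server address, export path, and capacity are placeholders, and the mount options (larger `rsize`/`wsize` buffers, plus `nconnect` on recent Linux kernels to open multiple TCP connections) are one common way to try to raise NFS throughput:

```yaml
# Hypothetical NFS PersistentVolume; server, path, and sizes are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-files-nfs
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany          # NFS lets many pods mount the same share read-write
  mountOptions:
    - rsize=1048576          # larger read buffer size (bytes)
    - wsize=1048576          # larger write buffer size (bytes)
    - nconnect=8             # multiple TCP connections; needs a recent NFS client
  nfs:
    server: 10.0.0.2
    path: /exports/data
```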

-- Jackson Tong
google-compute-engine
kubernetes

1 Answer

6/7/2018

> Data files (csv, pdf) and the code are storing in a single SSD Persistent Disk

There's nothing stopping you from volume mounting an SSD into the Pod so you can continue to use an SSD. I can only speak to AWS terminology, but some EC2 instances come with "local" SSD hardware, and thus you would only need to use a nodeSelector to ensure your Pods were scheduled onto machines that had said local storage available.
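As a sketch of that approach on GKE (since the question is on Google Cloud): nodes created with local SSDs carry the `cloud.google.com/gke-local-ssd=true` label and expose the disks under `/mnt/disks`. The image and mount path below are assumptions:

```yaml
# Sketch: pin a Pod to local-SSD nodes and mount the SSD; paths are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: php-apache
spec:
  nodeSelector:
    cloud.google.com/gke-local-ssd: "true"   # only schedule onto local-SSD nodes
  containers:
    - name: php-apache
      image: php:7.2-apache
      volumeMounts:
        - name: local-ssd
          mountPath: /var/www/data
  volumes:
    - name: local-ssd
      hostPath:
        path: /mnt/disks/ssd0                # where GKE mounts the first local SSD
```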

Where you're going to run into problems is if you are currently using just one php+apache instance and thus just one SSD, but now you want to scale the application up and that requires all php+apache replicas to have access to the same SSD. That's a classic distributed-application architecture problem, and something Kubernetes itself can't fix for you.
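To make the constraint concrete: a GCE persistent disk can only be mounted read-write by one node at a time (`ReadWriteOnce`), so scaling a Deployment whose replicas share one data volume needs a `ReadWriteMany`-capable backend such as NFS. The names and sizes below are illustrative:

```yaml
# Illustrative: several replicas sharing one ReadWriteMany claim.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany        # a GCE PD only supports ReadWriteOnce; NFS supports this
  resources:
    requests:
      storage: 100Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-apache
spec:
  replicas: 3              # every replica mounts the same claim
  selector:
    matchLabels:
      app: php-apache
  template:
    metadata:
      labels:
        app: php-apache
    spec:
      containers:
        - name: php-apache
          image: php:7.2-apache
          volumeMounts:
            - name: shared-data
              mountPath: /var/www/data
      volumes:
        - name: shared-data
          persistentVolumeClaim:
            claimName: shared-data
```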

If you're willing to expend the effort, you can also try any of the other distributed filesystems (Ceph, GlusterFS, etc.) and see if they perform better for your situation. Then again, "We have very limited time" pretty much means that's off the table.

-- mdaniel
Source: StackOverflow