I have a multi-tenant app on Google Container Engine that shares a single source code base but uses different files (directories in a designated GC Storage bucket) and databases (GC SQL) per tenant. Each tenant has a config file with SQL credentials and additional configuration, selected by the domain of the HTTP(S) request.
These config files are small, about 500 bytes each, and I am trying to figure out how to distribute them to multiple pods so that any change to them (a config added, removed, or changed) is propagated ASAP.
So far the best-sounding solution is to use a persistent disk from GCE, which can be mounted read-only by any number of readers, plus one pod with a single container that mounts the volume as writable (there can be only one writer) and runs gsutil rsync, invoked by Node.js or cron at short intervals, to synchronize the files from a bucket where my managing backend writes them.
The reason I haven't done this yet is that it is not a cloud-friendly solution: the single writer pod is a single point of failure, and the pods should be self-sufficient. Also, the disk cannot be smaller than 10 GB while my files would take a few MB at most, which seems like a huge waste of resources.
Google unfortunately does not offer anything like Amazon ElastiCache, so I am unsure how to design this.
Any thoughts?
ElastiCache is basically a managed Redis cluster, so you could spin up your own Redis master/slave or cluster configuration in Kubernetes/GKE to do the same.
You could also keep these configs in a central SQL or other database, with replication for redundancy, or in any other sort of key/value store such as etcd, Consul, or ZooKeeper.
Lots of options there.
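Whichever store you pick, each pod can look configs up by request domain and cache them briefly, so changes still propagate within seconds without hitting the store on every request. A sketch of that per-pod cache, with a stubbed `fetchConfig` standing in for your real client call (e.g. a Redis GET on a hypothetical `tenant:<domain>` key):

```javascript
// Cache tenant configs per pod with a short TTL so changes in the
// central store (Redis, etcd, ...) propagate within seconds.
const TTL_MS = 10 * 1000;
const cache = new Map(); // domain -> { value, expires }

// fetchConfig is a placeholder for a real store client call;
// it receives the domain and resolves to the tenant's config.
async function getTenantConfig(domain, fetchConfig, now = Date.now()) {
  const hit = cache.get(domain);
  if (hit && hit.expires > now) return hit.value;
  const value = await fetchConfig(domain);
  cache.set(domain, { value, expires: now + TTL_MS });
  return value;
}
```

In an HTTP handler you would key the lookup off `req.headers.host`; the TTL is the propagation delay you are willing to accept.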
All these solutions being replicated/distributed, you could live with just local storage (emptyDir), although it might be good to have some sort of backup strategy.