I have a very simple setup which I want to deploy on a Kubernetes cluster. It consists of an nginx service serving a bunch of PHP files via a php-fpm service, and a mysql database service for the backend. I can set things up locally with docker-compose. Here is the docker-compose.yml:
version: "3"
services:
binom:
build: ./binom
ports:
- "8080:80"
volumes:
- code-volume:/binom
php:
build: ./php-7.3-fpm-ioncube
volumes:
- code-volume:/binom
mysql:
image: mysql:5.7
restart: always
environment:
- MYSQL_ROOT_PASSWORD=pwd
- mysql-volume:/var/lib/mysql
volumes:
mysql-volume:
code-volume:
I can immediately point out a few problems with this config which prevent me from using Kubernetes:
The code is shared between two containers, because both need access to it. A Docker volume is used for that purpose, but I cannot push a volume to Dockerhub, and neither can I deploy one to Kubernetes (a possible workaround is sketched after this list).
For a fast deploy, the mysql instance needs a prepopulated database. Again, I store the data in a volume, which I would also do in Kubernetes using a PersistentVolume. But how do I get that data into Kubernetes?
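One pattern I've read about for the shared-code problem (only a sketch; the image names like my-registry/binom-code and the copy step are my assumptions, not something I have working) is to bake the PHP files into a pushable image and copy them into a shared emptyDir volume with an initContainer, so the nginx and php-fpm containers of the same Pod both see them:

apiVersion: v1
kind: Pod
metadata:
  name: binom
spec:
  volumes:
    - name: code                # shared volume, lives as long as the Pod
      emptyDir: {}
  initContainers:
    - name: copy-code           # copies the PHP files into the shared volume at startup
      image: my-registry/binom-code:latest    # hypothetical image that only carries the code under /binom
      command: ["sh", "-c", "cp -a /binom/. /shared/"]
      volumeMounts:
        - name: code
          mountPath: /shared
  containers:
    - name: nginx
      image: my-registry/binom-nginx:latest   # hypothetical image built from ./binom
      ports:
        - containerPort: 80
      volumeMounts:
        - name: code
          mountPath: /binom
    - name: php
      image: my-registry/php-7.3-fpm-ioncube:latest   # hypothetical image built from ./php-7.3-fpm-ioncube
      volumeMounts:
        - name: code
          mountPath: /binom

Since containers in a Pod share a network namespace, nginx could then reach php-fpm on localhost:9000.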
Here are my naive solutions to those problems; I realize they're flawed, so I'm asking for better ones:
I could merge the containers, but this is a tutorial project, so I want to do it the "right" way, with isolated and properly configured services, instead of running a single huge mono-container.
I could commit the data to a custom mysql image and use that on Kubernetes, but this seems less clean than deploying a generic mysql image and providing the custom data separately (a sketch of that idea follows).
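For the database, the cleaner variant I have in mind (again only a sketch; the mysql-pvc PersistentVolumeClaim and the mysql-seed ConfigMap holding a SQL dump are assumed to exist) relies on the mysql image executing any scripts found in /docker-entrypoint-initdb.d on first start:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  strategy:
    type: Recreate              # avoid two pods competing for the same RWO volume during updates
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: mysql-pvc        # assumed PersistentVolumeClaim for the database files
        - name: seed
          configMap:
            name: mysql-seed            # assumed ConfigMap containing dump.sql
      containers:
        - name: mysql
          image: mysql:5.7
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: pwd
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
            - name: seed
              mountPath: /docker-entrypoint-initdb.d   # the image runs *.sql from here on first start

One caveat: ConfigMaps are capped at roughly 1 MiB, so a large dump would need another delivery mechanism, e.g. an init container that downloads it.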