I’m trying to set up preview environments for my pull requests. Each environment needs its own prepopulated database; my seed database is about 15 GB.
I have a process to bootstrap a MySQL image and copy the /var/lib/mysql contents to a PVC volume (I also have this data in a tarball).
I need to find a way to create new PVCs that are populated with this data. I see a few options:
I'm struggling to get any of these to work on GKE. Has anyone managed to achieve this? I can't mount the SQL dump file, since recreating the database from it simply takes too long - I need to mount the database files directly.
I spent some time trying to get the CSI drivers working, but I couldn't find a reasonable how-to guide.
Using advice from @yvesonline, I was able to achieve option 1 above.
gcloud compute disks snapshot [PD-name] --zone=[zone] --snapshot-names=mysql-seed-snapshot-21022020 --description="Snapshot of the /var/lib/mysql folder"
gcloud compute disks create pvc-example-1 --source-snapshot=mysql-seed-snapshot-21022020 --zone=europe-west2-a
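Since each pull request needs its own seeded disk, the two gcloud commands above can be parameterised per PR. This is only a sketch: the `pvc-pr-*` naming scheme and the `seed_disk_cmd` helper are my own assumptions, and it prints the command (dry run) rather than executing it.

```shell
#!/bin/sh
# Sketch: create one seeded disk per pull request from the snapshot above.
# SNAPSHOT and ZONE come from the commands in the answer; the pvc-pr-*
# naming is an assumption -- adjust to your own convention.
SNAPSHOT="mysql-seed-snapshot-21022020"
ZONE="europe-west2-a"

seed_disk_cmd() {
  # Print (dry run) the gcloud command that would create a disk for PR $1.
  pr="$1"
  echo "gcloud compute disks create pvc-pr-${pr} --source-snapshot=${SNAPSHOT} --zone=${ZONE}"
}

# Dry run for a hypothetical PR 42; pipe to `sh` or drop the echo to execute.
seed_disk_cmd 42
```

Creating a disk from a snapshot is fast even for a 15 GB seed, because GCE restores it lazily in the background.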
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  persistentVolumeReclaimPolicy: Delete
  storageClassName: ""
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: pvc-example-1
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-claim-demo
spec:
  # It's necessary to specify "" as the storageClassName
  # so that the default storage class won't be used, see
  # https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class-1
  storageClassName: ""
  volumeName: pv-demo
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
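Because the PV is statically bound via `volumeName`, each PR needs its own copy of this manifest pointing at its own disk. One way is to stamp the manifest out with `sed`. A minimal sketch, assuming a `PR_ID` placeholder and the hypothetical `pvc-pr-*` disk naming (only the PV half is shown; the PVC follows the same pattern):

```shell
#!/bin/sh
# Sketch: render a per-PR PersistentVolume from a template.
# PR_ID is a placeholder token replaced by the PR number.
render_manifest() {
  pr="$1"
  sed "s/PR_ID/${pr}/g" <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo-PR_ID
spec:
  persistentVolumeReclaimPolicy: Delete
  storageClassName: ""
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: pvc-pr-PR_ID
    fsType: ext4
EOF
}

render_manifest 42   # pipe to `kubectl apply -f -` in real usage
```

In a real setup you might prefer Helm or Kustomize for the templating, but `sed` keeps the example dependency-free.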
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.7
          name: mysql
          env:
            # Use a Secret in real usage
            - name: MYSQL_ROOT_PASSWORD
              value: root
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: task-pv-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
            claimName: pv-claim-demo
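When a PR is closed, the environment should be torn down; since the PV above uses `persistentVolumeReclaimPolicy: Delete`, removing the claim should also release the underlying disk. A dry-run sketch (the per-PR resource names are assumptions to match the hypothetical naming above, not what the manifests in this answer actually create):

```shell
#!/bin/sh
# Sketch: print (dry run) the tear-down commands for a closed PR.
# The mysql-pr-*, pv-claim-pr-* and pv-pr-* names are assumptions.
teardown_cmds() {
  pr="$1"
  echo "kubectl delete deployment mysql-pr-${pr}"
  echo "kubectl delete pvc pv-claim-pr-${pr}"
  echo "kubectl delete pv pv-pr-${pr}"
}

teardown_cmds 42
```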
Once volume cloning in K8s is better established on GKE this will be easier, but this solution will do in the meantime!