Preserve directory on build docker/gcloud

11/27/2018

Blank Stare:
How can I prevent the directory structure created by an app from being deleted (overwritten) when a build is triggered by a change in an attached repo?

Scenario:
Web app running on google cloud using Docker/Kubernetes. Build is triggered by push to repo at Bitbucket.

Problem:
After the build the app is brand spanking new; dirs/files that had been created by the app are wiped away.

Unicorn Objective:
Preserve the dirs/files that the app has created, carrying them over to the new build OR skip the full build and do something similar to a git pull when the trigger fires.

Current build steps as reported by Google:

gcr.io/cloud-builders/docker
pull [details...]

gcr.io/cloud-builders/docker
build -t [details...]

gcr.io/cloud-builders/kubectl
set image [details...]
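
(For context, these three steps typically correspond to a cloudbuild.yaml roughly like the sketch below. The project, image, cluster, and deployment names are placeholders, since the real details are elided above, and the --cache-from usage is only a guess at why the initial pull step exists.)

    steps:
      # Pull the previous image so its layers can be reused as a build cache
      # (a common reason for a pull step before build; placeholder image name).
      - name: 'gcr.io/cloud-builders/docker'
        args: ['pull', 'gcr.io/my-project/my-app:latest']
      # Build and tag the new image.
      - name: 'gcr.io/cloud-builders/docker'
        args: ['build', '-t', 'gcr.io/my-project/my-app:$COMMIT_SHA',
               '--cache-from', 'gcr.io/my-project/my-app:latest', '.']
      # Point the existing Deployment at the newly built image.
      - name: 'gcr.io/cloud-builders/kubectl'
        args: ['set', 'image', 'deployment/my-app',
               'my-app=gcr.io/my-project/my-app:$COMMIT_SHA']
        env:
          - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'
          - 'CLOUDSDK_CONTAINER_CLUSTER=my-cluster'
    images:
      - 'gcr.io/my-project/my-app:$COMMIT_SHA'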
-- Jake Graff
docker
gcloud
kubernetes

1 Answer

11/28/2018

Data persistence (your application-generated data) should always be handled outside of the build process.

You have a few options for "mounting" a file system into the container:

  1. PersistentVolume/PersistentVolumeClaim: provision a persistent volume within your cluster and mount it in your pod(s) (see: https://kubernetes.io/docs/concepts/storage/persistent-volumes). A minimal manifest sketch follows this list.
  2. NFS: deploy an NFS server within your cluster and use a PVC to mount the volume in your pod(s). For a live example and video, see: https://matthewdavis.io/highly-available-wordpress-on-kubernetes
  3. GCS Fuse: mount a Google Cloud Storage bucket as a local file system (see: https://github.com/mateothegreat/k8-byexamples-gcsfuse)
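
For option 1, here is a minimal sketch of a PersistentVolumeClaim plus a Deployment that mounts it. The names, the 10Gi size, and the /data mount path are placeholders you would adapt to wherever your app writes its files; on GKE the default storage class will provision the disk behind the claim for you.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: app-data               # placeholder claim name
    spec:
      accessModes:
        - ReadWriteOnce            # a single disk, attachable to one node
      resources:
        requests:
          storage: 10Gi            # placeholder size
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app                 # placeholder; use your existing deployment
    spec:
      replicas: 1                  # ReadWriteOnce volumes attach to one node only
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            - name: my-app
              image: gcr.io/my-project/my-app:latest   # placeholder image
              volumeMounts:
                - name: data
                  mountPath: /data                     # wherever the app writes its dirs/files
          volumes:
            - name: data
              persistentVolumeClaim:
                claimName: app-data

Because the claim (and the disk behind it) lives outside the container image, a new build followed by kubectl set image rolls out new pods while the files on the mounted volume stay put.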

Hope this helps! If you still need a hand, please reach out and supply some more details about your application's data structure, types, and sizes.

-- yomateo
Source: StackOverflow