Restoring WordPress and MySQL data to a Kubernetes volume

5/16/2018

I am currently running MySQL, WordPress, and a custom Node.js + Express application in Kubernetes pods in the same cluster. Everything works quite well, but my problem is that all the data is reset if I have to recreate the deployments, services, and persistent volumes.

I have configured WordPress quite extensively and would like to save all the data and restore it after redeploying everything. How can this be done, or am I approaching this wrong? I am using the mysql:5.6 and wordpress:4.8-apache images.

I also want to share my configuration with my other team members so they don't have to configure WordPress again.

This is my mysql-deploy.yaml

apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: hidden
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql

      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim

This is the wordpress-deploy.yaml

apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
  type: NodePort
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
      - image: wordpress:4.8-apache
        name: wordpress
        env:
        - name: WORDPRESS_DB_HOST
          value: wordpress-mysql
        - name: WORDPRESS_DB_PASSWORD
          value: hidden
        ports:
        - containerPort: 80
          name: wordpress
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: /var/www/html
      volumes:
      - name: wordpress-persistent-storage
        persistentVolumeClaim:
          claimName: wp-pv-claim
-- Samuli Lehtonen
docker
kubernetes
mysql
wordpress

1 Answer

5/16/2018

How can this be done, or am I approaching this wrong?

It might be better to shift your configuration mindset from working directly on running container instances to configuring container images and manifests. You have several approaches; here are some pointers:

  • Create your own Dockerfiles based on the images you referenced and bundle the configuration files inside them. This is a viable approach if the configuration is more or less static and can be handled with env vars or infrequent image builds, but it requires a Docker registry to work with k8s. In this approach you would add all changed files to the Docker build context and then COPY them to the appropriate places.

  • Create ConfigMaps and mount them on the container filesystem as config files wherever a change is required. This way you can still use the base images you referenced directly, and changes are limited to Kubernetes manifests instead of rebuilding Docker images. The approach here would be to identify all the changed files on the container, create Kubernetes ConfigMaps from them, and finally mount them appropriately. I don't know exactly which things you are changing, but here is an example of how you can place an nginx config in a ConfigMap:

    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: cm-nginx-example
    data:
      nginx.conf: |
        server {
          listen 80;

          ...
          # actual config here
          ...
        }

    and then mount it in the container at the appropriate place like so:

    ...
    containers:
    - name: nginx-example
      image: nginx
      ports:
      - containerPort: 80
      volumeMounts:
      - mountPath: /etc/nginx/conf.d
        name: nginx-conf
    volumes:
    - name: nginx-conf
      configMap:
        name: cm-nginx-example
        items:
        - key: nginx.conf
          path: nginx.conf
    ...
  • Mount persistent volumes (or subPaths of them) at the places where you need configs, and keep the configuration on the persistent volumes.
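As a sketch of that last approach applied to your wordpress deployment, a subPath mount could keep only wp-content on the volume, while the rest of /var/www/html still comes from the image (the subPath name is illustrative):

    containers:
    - image: wordpress:4.8-apache
      name: wordpress
      volumeMounts:
      - name: wordpress-persistent-storage
        mountPath: /var/www/html/wp-content
        subPath: wp-content
    volumes:
    - name: wordpress-persistent-storage
      persistentVolumeClaim:
        claimName: wp-pv-claim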

Personally, I'd probably opt for ConfigMaps, since you can easily share and edit them alongside your k8s deployments, and the configuration details are not lost as some mystical 'extensive work' but can be reviewed, tweaked, and stored in a version control system for tracking...
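For example, a customized wp-config.php (the file name and contents here are illustrative, not taken from your setup) could be shipped as a ConfigMap:

    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: wp-config
    data:
      wp-config.php: |
        <?php
        // site-specific WordPress settings go here

and mounted over the file in the wordpress container with a subPath (note that subPath mounts do not receive updates when the ConfigMap changes, so pods must be restarted after edits):

    containers:
    - image: wordpress:4.8-apache
      name: wordpress
      volumeMounts:
      - name: wp-config
        mountPath: /var/www/html/wp-config.php
        subPath: wp-config.php
    volumes:
    - name: wp-config
      configMap:
        name: wp-config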

-- Const
Source: StackOverflow