Kubernetes -- Helm -- Mysql Chart loses stored data after stopping pod

10/15/2019

Using https://github.com/helm/charts/tree/master/stable/mysql (all the code is there), it is great being able to run MySQL as part of my local Kubernetes cluster (using Docker Desktop's Kubernetes).

The problem, though, is that once I stop the pod and then run it again, all the data that was stored is gone.

My question is: how do I keep the data that was added to the MySQL pod? I have read about persistent volumes, and the MySQL Helm example from GitHub shows that it uses a PersistentVolumeClaim. I have also enabled persistence in the values.yaml file, but the data saved in the database still does not survive a pod restart.
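For reference, the persistence section I changed in the chart's values.yaml looks roughly like this (the field names follow the stable/mysql chart; the size is just what I left at the default):

```yaml
## stable/mysql values.yaml (excerpt)
persistence:
  enabled: true
  ## If undefined, the provisioner's default StorageClass is used
  # storageClass: "-"
  accessMode: ReadWriteOnce
  size: 8Gi
```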

My docker kubernetes version is currently 1.14.6.

-- Binyata
kubernetes
kubernetes-helm
mysql

1 Answer

10/16/2019

Please verify your mysql Pod. You should notice the volumes and volumeMounts options:

    volumeMounts:
    - mountPath: /var/lib/mysql
      name: data
    ...
    volumes:
    - name: data
      persistentVolumeClaim:
        claimName: msq-mysql

In addition, please verify your PersistentVolume, PersistentVolumeClaim, and StorageClass:

kubectl get pv,pvc,pods,sc:

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
persistentvolume/pvc-2c6aa172-effd-11e9-beeb-42010a840083   8Gi        RWO            Delete           Bound    default/msq-mysql   standard                24m

NAME                              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/msq-mysql   Bound    pvc-2c6aa172-effd-11e9-beeb-42010a840083   8Gi        RWO            standard       24m

NAME                            READY   STATUS    RESTARTS   AGE     IP         NODE                                  NOMINATED NODE   READINESS GATES
pod/msq-mysql-b5c48c888-pz6p2   1/1     Running   0          4m28s   10.0.0.8   gke-te-1-default-pool-36546f4e-5rgw   <none>           <none>

Please run kubectl describe persistentvolumeclaim/msq-mysql (replace the PVC name with the one from your release).

You can see that the PVC was provisioned successfully using gce-pd and is mounted by the msq-mysql Pod:

 Normal     ProvisioningSucceeded  26m   persistentvolume-controller  Successfully provisioned volume pvc-2c6aa172-effd-11e9-beeb-42010a840083 using kubernetes.io/gce-pd
Mounted By:  msq-mysql-b5c48c888-pz6p2

I created a table with one row, deleted the pod, and verified after the pod was re-created (as expected, everything is still there):

mysql> SELECT * FROM t;
+------+
| c    |
+------+
| ala  |
+------+
1 row in set (0.00 sec)
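The check above can be reproduced with commands along these lines (the pod name is from this example and the test database/table names are placeholders; adjust them to your deployment):

```
# Create a test table and row inside the running MySQL pod
kubectl exec -it msq-mysql-b5c48c888-pz6p2 -- \
  mysql -u root -p -e "CREATE TABLE test.t (c CHAR(20)); INSERT INTO test.t VALUES ('ala');"

# Delete the pod; the Deployment re-creates it and re-mounts the same PVC
kubectl delete pod msq-mysql-b5c48c888-pz6p2

# Once the new pod is Running, verify the data survived
kubectl exec -it <new-pod-name> -- mysql -u root -p -e "SELECT * FROM test.t;"
```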

As for why "all the data that was stored is now gone":

As per the Helm chart docs:

The MySQL image stores the MySQL data and configurations at the /var/lib/mysql path of the container.

By default a PersistentVolumeClaim is created and mounted into that directory. In order to disable this functionality you can change the values.yaml to disable persistence and use an emptyDir instead.

Most often the problem is with PV/PVC binding. It can also be a problem with a user-defined or non-default StorageClass.

  • So please verify the PV and PVC as stated above.
  • Take a look at StorageClass:

    A claim can request a particular class by specifying the name of a StorageClass using the attribute storageClassName. Only PVs of the requested class, ones with the same storageClassName as the PVC, can be bound to the PVC.

    PVCs don’t necessarily have to request a class. A PVC with its storageClassName set equal to "" is always interpreted to be requesting a PV with no class, so it can only be bound to PVs with no class (no annotation or one set equal to ""). A PVC with no storageClassName is not quite the same and is treated differently by the cluster, depending on whether the DefaultStorageClass admission plugin is turned on.
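To sketch the distinction, a PVC that explicitly requests no class versus one that names a class might look like this (the names and size here are illustrative, not from the chart):

```yaml
# PVC that can only bind to PVs with no class
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-no-class
spec:
  storageClassName: ""        # explicitly "no class"
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 8Gi
---
# PVC that requests a particular StorageClass
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-standard
spec:
  storageClassName: standard  # only PVs of class "standard" can bind
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 8Gi
```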

-- Hanx
Source: StackOverflow