How to use PersistentVolume for MySQL data in Kubernetes

9/25/2018

I am developing a database environment on Minikube. I'd like to persist MySQL data using the PersistentVolume feature of Kubernetes. However, when the volume is mounted at /var/lib/mysql (the MySQL data directory), an error occurs on startup and the MySQL server will not start.

kubernetes-config.yaml

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: nfs001-pv
    labels:
      app: nfs001-pv
  spec:
    capacity:
      storage: 1Gi
    accessModes:
      - ReadWriteMany
    persistentVolumeReclaimPolicy: Retain
    mountOptions:
      - hard
    nfs:
      path: /share/mydata
      server: 192.168.99.1
  ---
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: nfs-claim
  spec:
    accessModes:
      - ReadWriteMany
    resources:
      requests:
        storage: 1Gi
    storageClassName: ""
    selector:
      matchLabels:
        app: nfs001-pv
  ---
  apiVersion: apps/v1beta2
  kind: Deployment
  metadata:
    name: sk-app
    labels:
      app: sk-app
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: sk-app
    template:
      metadata:
        labels:
          app: sk-app
      spec:
        containers:
        - name: sk-app
          image: mysql:5.7
          imagePullPolicy: IfNotPresent
          env:
          - name: MYSQL_ROOT_PASSWORD
            value: password
          ports:
          - containerPort: 3306
          volumeMounts:
          - mountPath: /var/lib/mysql
            name: mydata
        volumes:
          - name: mydata
            persistentVolumeClaim:
              claimName: nfs-claim
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: sk-app
    labels:
      app: sk-app
  spec:
    type: NodePort
    ports:
    - port: 3306
      nodePort: 30001
    selector:
      app: sk-app

How can I get the MySQL server to start?

-- Postscript --

When I ran "kubectl logs", I got the following error message.

chown: changing ownership of '/var/lib/mysql/': Operation not permitted

When I ran "kubectl describe" on each resource, I got the following results.

kubectl describe pv:

Name:            nfs001-pv
Labels:          app=nfs001-pv
Annotations:     pv.kubernetes.io/bound-by-controller=yes
StorageClass:    
Status:          Bound
Claim:           default/nfs-claim
Reclaim Policy:  Retain
Access Modes:    RWX
Capacity:        1Gi
Message:         
Source:
  Type:      NFS (an NFS mount that lasts the lifetime of a pod)
  Server:    192.168.99.1
  Path:      /share/mydata
  ReadOnly:  false
Events:        <none>

kubectl describe pvc:

Name:          nfs-claim
Namespace:     default
StorageClass:  
Status:        Bound
Volume:        nfs001-pv
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed=yes
               pv.kubernetes.io/bound-by-controller=yes
Capacity:      1Gi
Access Modes:  RWX
Events:        <none>

kubectl describe deployment:

Name:                   sk-app
Namespace:              default
CreationTimestamp:      Tue, 25 Sep 2018 14:22:34 +0900
Labels:                 app=sk-app
Annotations:            deployment.kubernetes.io/revision=1
Selector:               app=sk-app
Replicas:               1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=sk-app
  Containers:
   sk-app:
    Image:  mysql:5.7
    Port:   3306/TCP
    Environment:
      MYSQL_ROOT_PASSWORD:  password
    Mounts:
      /var/lib/mysql from mydata (rw)
  Volumes:
   mydata:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  nfs-claim
    ReadOnly:   false
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      False   MinimumReplicasUnavailable
  Progressing    True    ReplicaSetUpdated
OldReplicaSets:  <none>
NewReplicaSet:   sk-app-d58dddfb (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  23s   deployment-controller  Scaled up replica set sk-app-d58dddfb to 1
-- Subtropics
kubernetes
mysql

1 Answer

9/25/2018

The volumes look good, so it looks like you just have a permission issue on the root of your NFS volume, which gets mounted as /var/lib/mysql in your container.

You can:

1) Mount that NFS volume on any machine using standard nfs mount commands, and in its root directory run:

chmod 777 .  # This gives rwx to anybody, so be mindful.
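Note that if the NFS server squashes root (the default for Linux NFS exports), the chown attempted by the mysql entrypoint will be refused no matter what mode the directory has, so a chmod alone may not be enough. A sketch of an export line that disables root squashing, assuming you administer the NFS server at 192.168.99.1 and can edit its /etc/exports:

# /etc/exports on the NFS server (assumption: you control 192.168.99.1)
/share/mydata  192.168.99.0/24(rw,sync,no_subtree_check,no_root_squash)

Then reload the export table with exportfs -ra. Be aware that no_root_squash lets root in any client container own files on the share, so only use it on a trusted network.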

2) Add an initContainer to your Deployment, similar to this:

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: sk-app
  labels:
    app: sk-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sk-app
  template:
    metadata:
      labels:
        app: sk-app
    spec:
      initContainers:
      - name: init-mysql
        image: busybox
        command: ['sh', '-c', 'chmod 777 /var/lib/mysql']
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: mydata
      containers:
      - name: sk-app
        image: mysql:5.7
        imagePullPolicy: IfNotPresent
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: mydata
      volumes:
        - name: mydata
          persistentVolumeClaim:
            claimName: nfs-claim
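3) As a sketch of a third approach (an assumption on my part, not guaranteed for your setup): the mysql:5.7 entrypoint only chowns /var/lib/mysql when it starts as root, so running the container as the mysql user skips the chown entirely. This requires the NFS directory to already be owned by, or writable for, uid 999, the mysql uid inside the mysql:5.7 image:

      containers:
      - name: sk-app
        image: mysql:5.7
        securityContext:
          runAsUser: 999  # mysql uid in the mysql:5.7 image; the entrypoint skips chown when not running as root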
-- Rico
Source: StackOverflow