persistentVolumeReclaimPolicy on directly mounted NFS volumes - kubernetes

3/18/2019
  1. I have a directly mounted NFS volume for MySQL data and need to implement a storage policy that retains the data across pod deletions and avoids any corruption. Please recommend a useful approach.
  2. I did not find a way to enable persistentVolumeReclaimPolicy: Retain on directly mounted volumes. I know it can be done when creating a PV/PVC, but is it possible from StatefulSet volumes? Some guidance is also needed on understanding the YAML options for a particular object: where can I find all the options (parameters) available for an object? Currently I am googling each option and trying it out, which is hard.
  3. I could not mount a ConfigMap file (my.cnf) as a file in the pod; the mount removes the underlying files in the mount path. I am curious how this is generally handled. Do we need a separate mount path for each config file? (A subPath sketch follows the manifest below.)

code block

apiVersion: v1
kind: Service
metadata:
  name: mymariadb
  labels:
    app: mymariadb

spec:
  ports:
  - port: 3306
    name: mysql
    targetPort: mysql
    nodePort: 30003
  type: NodePort
  selector:
    app: mymariadb

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mymariadb
  labels:
    app: mymariadb
spec:
  serviceName: "mymariadb"
  selector:
    matchLabels:
      app: mymariadb
  template:
    metadata:
      labels:
        app: mymariadb
    spec:
      containers:
      - name: mariadb
        image: mariadb:10.3.7
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: xxxx
        ports:
        - name: mysql
          containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /data
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql  # mounting at /conf.d was removing the existing files there
        resources:
          requests:
            cpu: 500m
            memory: 2Gi
      volumes:
      - name: data
        nfs:
          server: 10.12.32.41
          path: /data/mymariadb
        spec:
          persistentVolumeReclaimPolicy: Retain  # not taking; this field only exists on a PersistentVolume
      - name: conf
        configMap:
          name: mycustconf
          items:
          - key: my.cnf
            path: my.cnf
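
For question 3, the usual pattern is to mount the ConfigMap key as a single file with subPath, which overlays only that one file and leaves the rest of the directory intact. A minimal sketch against the manifest above (only the relevant fragments are shown; everything else stays as in the original):

code block

        volumeMounts:
        - name: conf
          # mount only the single file; the other files in /etc/mysql stay visible
          mountPath: /etc/mysql/my.cnf
          subPath: my.cnf
      volumes:
      - name: conf
        configMap:
          name: mycustconf
          items:
          - key: my.cnf
            path: my.cnf

One caveat: files mounted via subPath do not receive ConfigMap updates, so the pod has to be restarted to pick up configuration changes.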
-- user3369417
docker
kubernetes

1 Answer

3/18/2019

Firstly, I would not suggest an NFS mount on a Kubernetes platform, for two reasons. From a security perspective, another container can access the NFS mount on the worker nodes. Secondly, from a performance perspective, the connection between the worker nodes and the storage will be slower compared to other solutions, and as you know, performance is critical for database connections. I think you should evaluate that.

I suggest you use one of the cloud-native storage solutions; you can view them at the link below. Ceph and Gluster are popular products.

https://landscape.cncf.io/category=cloud-native-storage&format=card-mode&grouping=category

If you really want to continue with the NFS solution, you can check two points:

1) Did you check the access list on the storage appliance? The worker nodes should be listed as allowed hosts for the NFS export.

2) After you have confirmed that the NFS storage mounts on the worker nodes, you can try to apply the deployment to your Kubernetes cluster. To get persistentVolumeReclaimPolicy: Retain, move the NFS definition into a PersistentVolume and bind it with a claim, as sketched below.
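
The reclaim policy is a field of the PersistentVolume object, not of an inline pod volume, which is why the nested spec in the StatefulSet is rejected. A minimal sketch reusing the server and path from the question (the PV/PVC names and the 10Gi size are illustrative assumptions):

code block

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mymariadb-pv
spec:
  capacity:
    storage: 10Gi                          # assumed size; match your NFS export
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain    # valid here, on the PV
  storageClassName: ""                     # empty class keeps dynamic provisioners away
  nfs:
    server: 10.12.32.41
    path: /data/mymariadb
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mymariadb-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: ""
  volumeName: mymariadb-pv                 # bind statically to the PV above
  resources:
    requests:
      storage: 10Gi

In the StatefulSet, the inline nfs: volume would then be replaced with a persistentVolumeClaim volume:

code block

      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: mymariadb-pvc

With Retain, deleting the claim (or the StatefulSet) leaves the PV and the data on the NFS export untouched; the PV has to be cleaned up and made available again manually.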

-- Erdi
Source: StackOverflow