Restarting a Kubernetes PetSet cleans the persistent volume

3/1/2017

I am running a PetSet of 3 ZooKeeper pods whose volumes use GlusterFS persistent volumes. Everything works fine the first time the PetSet is started.

One of my requirements is that if the PetSet is killed and then restarted, the pods should still use the same persistent volumes.

The problem I am facing is that after I restart the PetSet, the original data in the persistent volumes is cleaned. How can I solve this without manually copying the files out of the volumes first? I have tried persistentVolumeReclaimPolicy Retain and Delete, and the volumes are cleaned either way. Thanks.
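
For reference, this is how the state can be checked after a restart (a sketch, assuming kubectl access to the cluster; zookeeper-0 is the first pod created by the PetSet and /opt/zookeeper/data is the mount path from the PetSet spec below). It shows whether the PV/PVC objects themselves disappear, or whether only the data inside them is wiped:

# Do the volume objects survive the PetSet restart, and are they still Bound?
kubectl get pv
kubectl get pvc
# Is the data itself still on the volume?
kubectl exec zookeeper-0 -- ls /opt/zookeeper/data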

Below are the configuration files.

pv

apiVersion: v1
kind: PersistentVolume
metadata:
  name: glusterfsvol-zookeeper-0
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: gluster-cluster
    path: zookeeper-vol-0
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    name: glusterfsvol-zookeeper-0
    namespace: default
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: glusterfsvol-zookeeper-1
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: gluster-cluster
    path: zookeeper-vol-1
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    name: glusterfsvol-zookeeper-1
    namespace: default
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: glusterfsvol-zookeeper-2
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: gluster-cluster
    path: zookeeper-vol-2
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    name: glusterfsvol-zookeeper-2
    namespace: default
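
Since the data lives on GlusterFS, whether the Gluster volume itself still holds the files after a restart can also be checked from outside Kubernetes. A sketch, assuming the glusterfs-fuse client is installed and gluster-node-1 stands in for one of the hosts behind the gluster-cluster endpoints:

# Mount the first Gluster volume directly and look at its contents
# (gluster-node-1 is a placeholder hostname, zookeeper-vol-0 is the path from the PV above)
mount -t glusterfs gluster-node-1:/zookeeper-vol-0 /mnt
ls -la /mnt
umount /mnt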

pvc

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfsvol-zookeeper-0
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfsvol-zookeeper-1
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfsvol-zookeeper-2
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
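
To confirm that each claim was pre-bound to its matching volume via the claimRef in the PVs above, rather than to some other available PV, the bound volume name can be read off each claim. A small check, assuming kubectl access:

# Print which PV each claim ended up bound to; it should match the claimRef pairing above
kubectl get pvc glusterfsvol-zookeeper-0 -o jsonpath='{.spec.volumeName}'
kubectl get pvc glusterfsvol-zookeeper-1 -o jsonpath='{.spec.volumeName}'
kubectl get pvc glusterfsvol-zookeeper-2 -o jsonpath='{.spec.volumeName}'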

petset

apiVersion: apps/v1alpha1
kind: PetSet
metadata:
  name: zookeeper
spec:
  serviceName: "zookeeper"
  replicas: 1
  template:
    metadata:
      labels:
        app: zookeeper
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: zookeeper
        securityContext:
          privileged: true
          capabilities:
            add:
              - IPC_LOCK
        image: kuanghaochina/zookeeper-3.5.2-alpine-jdk:latest
        imagePullPolicy: Always
        ports:
          - containerPort: 2888
            name: peer
          - containerPort: 3888
            name: leader-election
          - containerPort: 2181
            name: client
        env:
        - name: ZOOKEEPER_LOG_LEVEL
          value: INFO
        volumeMounts:
        - name: glusterfsvol
          mountPath: /opt/zookeeper/data
          subPath: data
        - name: glusterfsvol
          mountPath: /opt/zookeeper/dataLog
          subPath: dataLog
  volumeClaimTemplates:
  - metadata:
      name: glusterfsvol
    spec:
      accessModes: 
        - ReadWriteMany
      resources:
        requests:
          storage: 1Gi
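
This is the restart cycle that triggers the problem, written out as commands (a sketch, assuming the PetSet manifest above is saved as zookeeper-petset.yaml; petset is the resource name of the apps/v1alpha1 API, and depending on the Kubernetes version the pods may have to be deleted separately):

# Delete only the PetSet; the PVCs and PVs are separate objects and stay Bound
kubectl delete petset zookeeper
kubectl get pvc
# Recreate the PetSet and check whether the ZooKeeper data survived
kubectl create -f zookeeper-petset.yaml
kubectl exec zookeeper-0 -- ls /opt/zookeeper/data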

The reason, it turns out, is that I am using zkServer-initialize.sh to force ZooKeeper to use a specific id, and that script cleans the dataDir.

-- HAO
apache-zookeeper
glusterfs
kubernetes

1 Answer

3/2/2017

The reason I found is that I am using zkServer-initialize.sh to force ZooKeeper to use a specific id, but the script cleans the dataDir every time it runs, which is why the data is gone after each restart.
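
A possible workaround, sketched rather than taken from the image used above: skip zkServer-initialize.sh and write the myid file directly in the container entrypoint, so existing snapshots and transaction logs in the dataDir are left untouched. ZK_DATA_DIR and ZK_SERVER_ID are made-up names here, and the ordinal-to-id mapping is an assumption:

# Entrypoint sketch: derive the server id from the pod ordinal instead of re-initializing
ZK_DATA_DIR=/opt/zookeeper/data
ZK_SERVER_ID=${HOSTNAME##*-}            # zookeeper-0 -> 0, zookeeper-1 -> 1, ...
mkdir -p "$ZK_DATA_DIR"
if [ ! -f "$ZK_DATA_DIR/myid" ]; then
  # ZooKeeper ids are conventionally 1-based, so shift the 0-based ordinal
  echo "$((ZK_SERVER_ID + 1))" > "$ZK_DATA_DIR/myid"
fi
exec zkServer.sh start-foreground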

-- HAO
Source: StackOverflow