Kubernetes persistent disk (MongoDB) wiped when node pool upgrades

1/16/2019

I have a question about Kubernetes. When a node auto-upgrades, the related databases are wiped. What is the reason for that? Thank you.

-- YMA
google-kubernetes-engine
kubernetes
yaml

2 Answers

1/16/2019

It depends on your reclaim policy. The reclaim policy of a PersistentVolume tells the cluster what to do with the volume after it has been released from its claim. A node upgrade may cause the volume to be released.

You should set the Retain reclaim policy in your case if you want to keep the data.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: block-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
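To see which policy your volumes currently have, you can list them; the RECLAIM POLICY column in kubectl's default PV output shows it (a sketch, to be run against your own cluster):

```shell
# List all PersistentVolumes; the RECLAIM POLICY column shows Retain/Delete/Recycle
kubectl get pv
```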
-- Alex Pliutau
Source: StackOverflow

1/16/2019

The explanation of the issue's likely cause in the other answer seems to be right. By default, the Reclaim Policy of dynamically provisioned volumes is set to Delete.

I was unable to change the setting at the volumeClaimTemplates.spec level (I get this error: unknown field "persistentVolumeReclaimPolicy" in io.k8s.api.core.v1.PersistentVolumeClaimSpec).

What I found I was allowed to do is change the Reclaim Policy on an existing PV, by locating it and running:

kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
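After patching, you can confirm the change took effect (the PV name here is a placeholder for your own volume's name):

```shell
# Print only the reclaim policy of the patched PV; should output "Retain"
kubectl get pv <pv-name> -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'
```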

You could also create a new StorageClass with the desired reclaim policy, so that PVs dynamically provisioned for new PVCs keep their data.
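A minimal sketch of such a StorageClass, assuming the GCE persistent-disk provisioner used on GKE (the class name and disk type are illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: retain-pd          # illustrative name
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard        # GCE PD type; pd-ssd is also valid
reclaimPolicy: Retain      # PVs provisioned via this class keep their data
```

PVCs that reference this class via storageClassName then get PVs whose underlying disks survive deletion of the claim.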

-- Ruben N.
Source: StackOverflow