I am trying to use the VolumeSnapshot backup mechanism, which was promoted to beta in Kubernetes 1.17.
Here is my scenario:
Create the nginx deployment and the PVC used by it:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        volumeMounts:
        - name: my-pvc
          mountPath: /root/test
      volumes:
      - name: my-pvc
        persistentVolumeClaim:
          claimName: nginx-pvc
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  finalizers: null
  labels:
    name: nginx-pvc
  name: nginx-pvc
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  storageClassName: premium-rwo
Exec into the running nginx container, cd into the PVC's mount path, and create some files:
▶ k exec -it nginx-deployment-84765795c-7hz5n bash
root@nginx-deployment-84765795c-7hz5n:/# cd /root/test
root@nginx-deployment-84765795c-7hz5n:~/test# touch {1..10}.txt
root@nginx-deployment-84765795c-7hz5n:~/test# ls
1.txt 10.txt 2.txt 3.txt 4.txt 5.txt 6.txt 7.txt 8.txt 9.txt lost+found
root@nginx-deployment-84765795c-7hz5n:~/test#
Create the following VolumeSnapshot, using nginx-pvc as its source:
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  namespace: default
  name: nginx-volume-snapshot
spec:
  volumeSnapshotClassName: pd-retain-vsc
  source:
    persistentVolumeClaimName: nginx-pvc
The VolumeSnapshotClass used is the following:
apiVersion: snapshot.storage.k8s.io/v1beta1
deletionPolicy: Retain
driver: pd.csi.storage.gke.io
kind: VolumeSnapshotClass
metadata:
  creationTimestamp: "2020-09-25T09:10:16Z"
  generation: 1
  name: pd-retain-vsc
and wait until it becomes readyToUse: true
apiVersion: v1
items:
- apiVersion: snapshot.storage.k8s.io/v1beta1
  kind: VolumeSnapshot
  metadata:
    creationTimestamp: "2020-11-04T09:38:00Z"
    finalizers:
    - snapshot.storage.kubernetes.io/volumesnapshot-as-source-protection
    generation: 1
    name: nginx-volume-snapshot
    namespace: default
    resourceVersion: "34170857"
    selfLink: /apis/snapshot.storage.k8s.io/v1beta1/namespaces/default/volumesnapshots/nginx-volume-snapshot
    uid: ce1991f8-a44c-456f-8b2a-2e12f8df28fc
  spec:
    source:
      persistentVolumeClaimName: nginx-pvc
    volumeSnapshotClassName: pd-retain-vsc
  status:
    boundVolumeSnapshotContentName: snapcontent-ce1991f8-a44c-456f-8b2a-2e12f8df28fc
    creationTime: "2020-11-04T09:38:02Z"
    readyToUse: true
    restoreSize: 8Gi
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Delete the nginx deployment and the initial PVC:
▶ k delete pvc,deploy --all
persistentvolumeclaim "nginx-pvc" deleted
deployment.apps "nginx-deployment" deleted
Create a new PVC, using the previously created VolumeSnapshot as its dataSource:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  finalizers: null
  labels:
    name: nginx-pvc-restored
  name: nginx-pvc-restored
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  dataSource:
    name: nginx-volume-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
▶ k create -f nginx-pvc-restored.yaml
persistentvolumeclaim/nginx-pvc-restored created
▶ k get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nginx-pvc-restored Bound pvc-56d0a898-9f65-464f-8abf-90fa0a58a048 8Gi RWO standard 39s
Point the nginx deployment at the new (restored) PVC:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        volumeMounts:
        - name: my-pvc
          mountPath: /root/test
      volumes:
      - name: my-pvc
        persistentVolumeClaim:
          claimName: nginx-pvc-restored
and create the Deployment again:
▶ k create -f nginx-deployment-restored.yaml
deployment.apps/nginx-deployment created
cd into the PVC's mount directory. It should contain the previously created files, but it is empty:
▶ k exec -it nginx-deployment-67c7584d4b-l7qrq bash
root@nginx-deployment-67c7584d4b-l7qrq:/# cd /root/test
root@nginx-deployment-67c7584d4b-l7qrq:~/test# ls
lost+found
root@nginx-deployment-67c7584d4b-l7qrq:~/test#
▶ k version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.12", GitCommit:"5ec472285121eb6c451e515bc0a7201413872fa3", GitTreeState:"clean", BuildDate:"2020-09-16T13:39:51Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"17+", GitVersion:"v1.17.12-gke.1504", GitCommit:"17061f5bd4ee34f72c9281d49f94b4f3ac31ac25", GitTreeState:"clean", BuildDate:"2020-10-19T17:00:22Z", GoVersion:"go1.13.15b4", Compiler:"gc", Platform:"linux/amd64"}
This is a community wiki answer, posted to make the current state of the problem clearer. Feel free to expand on it.
As mentioned by @pkaramol, this is an ongoing issue tracked in the following thread:
Creating an intree PVC with datasource should fail #96225
What happened: In clusters that have intree drivers as the default storageclass, if you try to create a PVC with snapshot data source and forget to put the csi storageclass in it, then an empty PVC will be provisioned using the default storageclass.
What you expected to happen: PVC creation should not proceed and instead have an event with an incompatible error, similar to how we check proper csi driver in the csi provisioner.
At the time of writing, this issue has not yet been resolved.
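Until it is fixed, a practical workaround is to set a CSI-backed storage class explicitly on the restored PVC, so the snapshot dataSource is handled by the CSI provisioner instead of silently falling back to the in-tree default class. A minimal sketch, assuming the same premium-rwo CSI storage class used by the original PVC in the question:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pvc-restored
  namespace: default
spec:
  # Explicitly set the CSI storage class; without it the default
  # in-tree class ("standard" on GKE) provisions an empty volume
  storageClassName: premium-rwo
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  dataSource:
    name: nginx-volume-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
After recreating the PVC this way, k get pvc should show it bound with premium-rwo rather than standard (as it was in the question's output), and the restored files should appear under the mount path.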