Kubernetes Cassandra Datacenter deletes PVC while deleting Datacenter

2/11/2022

I have the Cassandra operator installed and set up a Cassandra datacenter/cluster with 3 nodes. I created a sample keyspace and table and inserted data, and I can see that 3 PVCs were created in my storage section. When I delete the datacenter it deletes the associated PVCs as well, so when I set up a datacenter/cluster with the same configuration it is completely new, with none of the earlier keyspaces or tables. How can I make the data persistent for future use? I am using the sample YAML from https://github.com/datastax/cass-operator/tree/master/operator/example-cassdc-yaml/cassandra-3.11.x

I don't find any persistentVolumeClaim configuration in it; it only has storageConfig: with cassandraDataVolumeClaimSpec:. Has anyone come across this scenario?
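
For anyone reproducing this, the binding state before deletion can be captured with plain oc commands (the namespace below is a placeholder):

# PVCs the operator created for the datacenter pods
oc get pvc -n <namespace>

# Reclaim policy and claim binding of the PVs backing them; a policy of
# Delete means the volume (and its data) is destroyed along with the PVC
oc get pv -o custom-columns=NAME:.metadata.name,RECLAIM:.spec.persistentVolumeReclaimPolicy,CLAIM:.spec.claimRef.name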

Edit: Storage class details:

allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    description: Provides RWO and RWX Filesystem volumes with Retain Policy
    storageclass.kubernetes.io/is-default-class: "false"
  name: ocs-storagecluster-cephfs-retain
parameters:
  clusterID: openshift-storage
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
  fsName: ocs-storagecluster-cephfilesystem
provisioner: openshift-storage.cephfs.csi.ceph.com
reclaimPolicy: Retain
volumeBindingMode: Immediate
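
To double-check that this class really carries the Retain policy, a quick query (same class name as above):

# Should print: Retain
oc get sc ocs-storagecluster-cephfs-retain -o jsonpath='{.reclaimPolicy}{"\n"}'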

Here is the Cassandra cluster YAML:

apiVersion: cassandra.datastax.com/v1beta1
kind: CassandraDatacenter
metadata:
  name: dc
  generation: 2
spec:
  size: 3
  config:
    cassandra-yaml:
      authenticator: AllowAllAuthenticator
      authorizer: AllowAllAuthorizer
      role_manager: CassandraRoleManager
    jvm-options:
      additional-jvm-opts:
        - '-Ddse.system_distributed_replication_dc_names=dc1'
        - '-Ddse.system_distributed_replication_per_dc=1'
      initial_heap_size: 800M
      max_heap_size: 800M
  resources: {}
  clusterName: cassandra
  systemLoggerResources: {}
  configBuilderResources: {}
  serverVersion: 3.11.7
  serverType: cassandra
  storageConfig:
    cassandraDataVolumeClaimSpec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: ocs-storagecluster-cephfs-retain
  managementApiAuth:
    insecure: {}

Edit: PV details:

oc get pv pvc-15def0ca-6cbc-4569-a560-7b9e89a7b7a7 -o yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: openshift-storage.cephfs.csi.ceph.com
  creationTimestamp: "2022-02-23T20:52:54Z"
  finalizers:
  - kubernetes.io/pv-protection
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:pv.kubernetes.io/provisioned-by: {}
      f:spec:
        f:accessModes: {}
        f:capacity:
          .: {}
          f:storage: {}
        f:claimRef:
          .: {}
          f:apiVersion: {}
          f:kind: {}
          f:name: {}
          f:namespace: {}
          f:resourceVersion: {}
          f:uid: {}
        f:csi:
          .: {}
          f:controllerExpandSecretRef:
            .: {}
            f:name: {}
            f:namespace: {}
          f:driver: {}
          f:nodeStageSecretRef:
            .: {}
            f:name: {}
            f:namespace: {}
          f:volumeAttributes:
            .: {}
            f:clusterID: {}
            f:fsName: {}
            f:storage.kubernetes.io/csiProvisionerIdentity: {}
            f:subvolumeName: {}
          f:volumeHandle: {}
        f:persistentVolumeReclaimPolicy: {}
        f:storageClassName: {}
        f:volumeMode: {}
    manager: csi-provisioner
    operation: Update
    time: "2022-02-23T20:52:54Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:phase: {}
    manager: kube-controller-manager
    operation: Update
    time: "2022-02-23T20:52:54Z"
  name: pvc-15def0ca-6cbc-4569-a560-7b9e89a7b7a7
  resourceVersion: "51684941"
  selfLink: /api/v1/persistentvolumes/pvc-15def0ca-6cbc-4569-a560-7b9e89a7b7a7
  uid: 8ded2de5-6d4e-45a1-9b89-a385d74d6d4a
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: server-data-cstone-cassandra-cstone-dc-default-sts-1
    namespace: dv01-cornerstone
    resourceVersion: "51684914"
    uid: 15def0ca-6cbc-4569-a560-7b9e89a7b7a7
  csi:
    controllerExpandSecretRef:
      name: rook-csi-cephfs-provisioner
      namespace: openshift-storage
    driver: openshift-storage.cephfs.csi.ceph.com
    nodeStageSecretRef:
      name: rook-csi-cephfs-node
      namespace: openshift-storage
    volumeAttributes:
      clusterID: openshift-storage
      fsName: ocs-storagecluster-cephfilesystem
      storage.kubernetes.io/csiProvisionerIdentity: 1645064620191-8081-openshift-storage.cephfs.csi.ceph.com
      subvolumeName: csi-vol-92d5e07d-94ea-11ec-92e8-0a580a20028c
    volumeHandle: 0001-0011-openshift-storage-0000000000000001-92d5e07d-94ea-11ec-92e8-0a580a20028c
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ocs-storagecluster-cephfs-retain
  volumeMode: Filesystem
status:
  phase: Bound
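
Given the Retain policy on this PV, deleting the datacenter removes the PVC but should leave the PV behind rather than destroying it; that can be confirmed after the deletion with something like:

# The PV should survive PVC deletion, moving from Bound to Released
oc get pv pvc-15def0ca-6cbc-4569-a560-7b9e89a7b7a7 -o custom-columns=NAME:.metadata.name,PHASE:.status.phase,RECLAIM:.spec.persistentVolumeReclaimPolicy
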
-- Sanjay Bagal
cassandra
kubernetes
kubernetes-operator
openshift

1 Answer

2/12/2022

According to the spec:

The storage configuration. This sets up a 100GB volume at /var/lib/cassandra on each server pod. The user is left to create the server-storage storage class by following these directions... https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/ssd-pd

Before you deploy the Cassandra spec, first ensure your cluster already has the CSI driver installed and working properly, then create the StorageClass that the spec requires:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: server-storage
provisioner: pd.csi.storage.gke.io
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
reclaimPolicy: Retain
parameters:
  type: pd-ssd
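
A quick usage sketch, assuming the manifest above is saved as server-storage.yaml (a hypothetical filename):

# Create the StorageClass, then point the datacenter's volume claim spec at it
kubectl apply -f server-storage.yaml

# The CassandraDatacenter should reference the class by name:
#   storageConfig:
#     cassandraDataVolumeClaimSpec:
#       storageClassName: server-storage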

If you re-deploy your Cassandra datacenter now, the data disks should be retained upon deletion.
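
One caveat worth adding: Retain keeps the PV and its data, but the deleted PVC leaves the PV in a Released phase with a stale claimRef, so a re-created datacenter provisions fresh volumes instead of rebinding the old ones. A minimal rebinding sketch, assuming the new pods request PVCs with the same names as before (StatefulSet claim names such as server-data-cstone-cassandra-cstone-dc-default-sts-1 are stable across re-creation):

# Drop the stale uid/resourceVersion from claimRef; the PV becomes Available
# again while staying reserved for the PVC name/namespace still in claimRef
kubectl patch pv pvc-15def0ca-6cbc-4569-a560-7b9e89a7b7a7 --type json \
  -p '[{"op":"remove","path":"/spec/claimRef/uid"},{"op":"remove","path":"/spec/claimRef/resourceVersion"}]'

# Re-create the CassandraDatacenter; the newly requested PVC with the matching
# name should bind to this PV rather than triggering a fresh provision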

-- gohm'c
Source: StackOverflow