Kubernetes: Using an ordinal number in a claimName?

6/19/2021

I have a StatefulSet that is running great, and it has a ReadWriteMany PVC. I need to share this PVC with another StatefulSet.

Does anybody know how I can add the ordinal number into the claimName?

Basically, I have a backend service that is a StatefulSet with 2 replicas, so it has a volumeClaimTemplate defined - hence it has 2 volumes, service-data-service-0 and service-data-service-1, for example.

The other StatefulSet has its own data volume, but it also needs to mount the data volume from the first StatefulSet.

There is a one-to-one mapping - meaning that the volume with ordinal 0 in the lower service needs to be attached to pod 0, and likewise the volume with ordinal 1 to pod 1.

I am a little confused about how I can do this. It's easy with a Deployment, because technically you have 2 x Deployments, so each Deployment can be pointed strictly at the correct service-data-service-XX (where XX is the ordinal number of the lower service, i.e. 0, 1 etc.).

In my head, in pseudo-code, I have this. Can anyone help?

      volumes:
        - name: lnd2-data-volume
          persistentVolumeClaim:
            # This volumes section is in the higher service but shares a data
            # volume with the lower service
            claimName: service-data-service-{{ SOME TEMPLATE HERE to give me either 0 or 1 for the current Pod's ordinal number }}

Any ideas ?

-- Ian Gregson
kubernetes
kubernetes-pvc
kubernetes-statefulset
persistent-volume-claims

1 Answer

6/28/2021

For the TL;DR version, skip to the Solution section below.

What you are trying to achieve is not doable with StatefulSets (STS) today.

Due to the design of the StatefulSet controller, claims need to have unique identifiers in order to be mapped to their corresponding Pods, and they cannot be reused between different StatefulSet applications.

So no matter what claim name you specify within volumes as part of the Pod template inside the StatefulSet definition (e.g. claimName: service-data-service-0), it will always be overwritten by the StatefulSet controller, for each Pod it controls, using the following naming scheme:

  PVC name = claim.Name + "-" + set.Name + "-" + ordinal

where:

claim.Name - the name of the claim on the STS's volumeClaimTemplates list that matches a volumeMount in the PodTemplateSpec

set.Name - StatefulSet name

ordinal - the Pod's ordinal index, ranging from 0 to (replicas - 1)
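
To make the scheme concrete (reusing the names from the question): with a volumeClaimTemplate named service-data, an STS named service and replicas: 2, the controller generates the claims

  service-data-service-0   # bound to Pod service-0
  service-data-service-1   # bound to Pod service-1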

My observations:

An existing PVC (in ReadWriteMany mode) can be used by an STS only when you introduce the StatefulSet into your cluster for the first time (i.e. the PVC is not yet owned by another workload).

For example, an STS like this one:

  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: peb
  spec:
    ...
    volumeClaimTemplates:
    - metadata:
        name: fileserver-claim
      spec:
        accessModes: [ "ReadWriteMany" ]
        storageClassName: ""
        resources:
          requests:
            storage: 1Gi

would consume the existing PVC:

fileserver-claim-peb-0

with accompanying event seen in API server logs:

The PVC 'fileserver-claim-peb-0' already exists

and because there cannot be a different STS of the same name (Pod 'peb-0' is unique in the cluster, and likewise its claimName), your options end here.

Solution:

Manually pre-provision a couple of PVs that use the same underlying storage asset (e.g. NFS-based volumes supporting the RWX access mode), and inside your STS's volumeClaimTemplates reference the existing unbound PV by name (volumeName), e.g.:

...
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes:
        - "ReadWriteOnce"
      volumeName: fileserver-claim-peb
      resources:
        requests:
          storage: 1Gi

I think this is a recipe for sharing the same data storage between different StatefulSets.
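
For completeness, one of the pre-provisioned PVs backing such a claim could look like the sketch below (the NFS server address and export path are hypothetical assumptions):

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: fileserver-claim-peb   # the name referenced by 'volumeName' above
  spec:
    capacity:
      storage: 1Gi
    accessModes:
      - ReadWriteMany
    persistentVolumeReclaimPolicy: Retain
    nfs:
      server: 10.0.0.10          # hypothetical NFS server address
      path: /exports/shared      # hypothetical export; every PV can point at the same share

Note that a PV can be bound by only one PVC at a time, so each replica's generated claim needs its own PV object - they can all point at the same underlying NFS export, which is what actually shares the data.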

-- Nepomucen
Source: StackOverflow