Kubernetes: PersistentVolume And PersistentVolumeClaim - Sharing Claims

5/25/2018

This question is about the behavior of PersistentVolume and PersistentVolumeClaim configurations within Kubernetes. We have read through the documentation and are left with a few lingering questions.

We are using Azure Kubernetes Service to host our cluster and we want to provide a shared persistent storage backend for many of our Pods. We are planning on using PersistentVolumes to accomplish this.

In this scenario, we want to create a PersistentVolume backed by an AzureFile storage resource. We will deploy Jenkins to our cluster and store the jenkins_home directory in the PersistentVolume so that our instances can survive pod and node failures. We will be running multiple Jenkins masters, all configured with a similar deployment YAML.

We have created all the needed storage accounts and applicable shares ahead of time, as well as the needed secrets.
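
For completeness, the azureFile volume type expects the secret to carry the storage account name and key under specific keys. A minimal sketch of how such a secret is created, with placeholder values for the account name and key:

kubectl create secret generic azure-file-secret --from-literal=azurestorageaccountname=<account-name> --from-literal=azurestorageaccountkey=<account-key>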

First, we issued the following PersistentVolume configuration:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-azure-file-share
  labels:
    usage: jenkins-azure-file-share
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  azureFile:
    secretName: azure-file-secret
    shareName: jenkins
    readOnly: false
  mountOptions:
    - dir_mode=0777
    - file_mode=0777
    - uid=1000
    - gid=1000

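As a quick sanity check (jenkins-pv.yaml is a placeholder for wherever the manifest above is saved), the volume can be created and inspected with:

kubectl apply -f jenkins-pv.yaml
kubectl get pv jenkins-azure-file-share

A PersistentVolume reports a STATUS of Available until a claim binds to it.
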
Following that, we issued the following PersistentVolumeClaim configuration:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-file-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: ""
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  volumeName: "jenkins-azure-file-share"

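A side note on the empty storage-class annotation: it is the legacy beta form, and on newer clusters the equivalent is an empty storageClassName field in the spec, which likewise keeps dynamic provisioning from creating a new volume for the claim:

spec:
  storageClassName: ""

Together with volumeName, this pre-binds the claim to our specific PersistentVolume.
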
Next, we use this claim within our deployments in the following manner:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins-instance-name
spec:
  replicas: 1
  template:
    metadata:
      labels:
        role: jenkins
        app: jenkins-instance-name
    spec:
      containers:
      - name: jenkins-instance-name
        image: ContainerRegistry.azurecr.io/linux/jenkins_master:latest
        ports:
        - name: jenkins-port
          containerPort: 8080
        volumeMounts:
        - name: jenkins-home
          mountPath: /var/jenkins_home
          subPath: "jenkins-instance-name"
      volumes:
      - name: jenkins-home
        persistentVolumeClaim:
          claimName: "jenkins-file-claim"
      imagePullSecrets:
      - name: ImagePullSecret

This is all working as expected. We have deployed multiple Jenkins masters to our Kubernetes cluster, and each one is correctly allocating a new folder on the share specific to that master instance.
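
To illustrate what that looks like on the share (the instance names here are hypothetical), each deployment mounts the same claim under a different subPath, so each master gets its own directory:

jenkins/                    <- the Azure File share
    jenkins-instance-a/     <- subPath used by the first master
    jenkins-instance-b/     <- subPath used by the second master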

Now for my questions:

The PersistentVolume is configured with 100Gi of storage. Does this mean that Kubernetes will only allow a maximum of 100Gi of total storage in this volume?


When the PersistentVolumeClaim is bound to the PersistentVolume, the PersistentVolumeClaim reports 100Gi of total storage available, even though the claim only requested 10Gi:

C:\ashley\scm\kubernetes>kubectl get pv
NAME                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                        STORAGECLASS   REASON    AGE
jenkins-azure-file-share   100Gi      RWX            Retain           Bound     default/jenkins-file-claim                            2d

C:\ashley\scm\kubernetes>kubectl get pvc
NAME                 STATUS    VOLUME                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
jenkins-file-claim   Bound     jenkins-azure-file-share   100Gi      RWX                           2d

Is this just bad output from the get pvc command, or am I misinterpreting it?


When sharing a PersistentVolumeClaim in this way:

  1. Does each deployment ONLY have access to the configured maximum of 10Gi of storage out of the PersistentVolume's 100Gi capacity?
  2. Or does each deployment have access to its own 10Gi slice of the total 100Gi configured for the PersistentVolume?

With this configuration, what happens when a single PersistentVolumeClaim's capacity is fully utilized? Do all the Deployments using this single PersistentVolumeClaim stop working?

-- Sage
kubernetes
persistent-volume-claims
persistent-volumes

1 Answer

5/25/2018

For the PVC it is definitely the case that it has only 10Gi available with this configuration. For the PV I assume the limit works the same way; in this case I don't know for sure, but it should, for the sake of consistency. And it stops working if either of these limits is reached, so if you have 11 Jenkins masters running it can fail even though you have not reached the limit on any single PVC.
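
As a practical check (the pod name below is a placeholder), you can see which limit the mounted share actually enforces from inside a running container:

kubectl exec jenkins-instance-pod -- df -h /var/jenkins_home

This reports the size and usage of the mount as the pod sees it, which is the number that matters once the masters start filling the share.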

-- Jonathan Lechner
Source: StackOverflow