So I have 2 PVCs in 2 namespaces binding to 1 PV:
The following are the PVCs:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-git
  namespace: mlo-dev
  labels:
    type: local
spec:
  storageClassName: mlo-git
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-git
  namespace: mlo-stage
  labels:
    type: local
spec:
  storageClassName: mlo-git
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
and the PV:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-git
  labels:
    type: local
spec:
  storageClassName: mlo-git
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /git
In the namespace "mlo-dev", the binding is successful:
$ kubectl describe pvc pvc-git -n mlo-dev
Name:          pvc-git
Namespace:     mlo-dev
StorageClass:  mlo-git
Status:        Bound
Volume:        pv-git
Labels:        type=local
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      1Gi
Access Modes:  RWX
VolumeMode:    Filesystem
Mounted By:    ...
               various different pods here...
               ...
Events:        <none>
Whereas in the namespace "mlo-stage", the binding fails with the error message: storageclass.storage.k8s.io "mlo-git" not found
$ kubectl describe pvc pvc-git -n mlo-stage
Name:          pvc-git
Namespace:     mlo-stage
StorageClass:  mlo-git
Status:        Pending
Volume:
Labels:        type=local
Annotations:
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Mounted By:    ...
               various different pods here...
               ...
Events:
  Type     Reason              Age                   From                         Message
  ----     ------              ----                  ----                         -------
  Warning  ProvisioningFailed  3m4s (x302 over 78m)  persistentvolume-controller  storageclass.storage.k8s.io "mlo-git" not found
As far as I know, a PV is not scoped to a namespace, so shouldn't it be possible for PVCs in different namespaces to bind to the same PV?
+++++ Added: +++++
When I run "kubectl describe pv pv-git", I get the following:
$ kubectl describe pv pv-git
Name:            pv-git
Labels:          type=local
Annotations:     pv.kubernetes.io/bound-by-controller: yes
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    mlo-git
Status:          Bound
Claim:           mlo-dev/pvc-git
Reclaim Policy:  Retain
Access Modes:    RWX
VolumeMode:      Filesystem
Capacity:        1Gi
Node Affinity:   <none>
Message:
Source:
    Type:          HostPath (bare host directory volume)
    Path:          /git
    HostPathType:
Events:          <none>
I've tried to reproduce your scenario (an exact reproduction would require your StorageClass YAML, and I changed the AccessMode for my tests), and in my opinion this behavior is correct (it works as designed).
When you want to check whether a specific object is namespaced, you can use this command:
$ kubectl api-resources | grep pv
persistentvolumeclaims    pvc    true     PersistentVolumeClaim
persistentvolumes         pv     false    PersistentVolume
As the output shows, a PVC is namespaced (true) and a PV is not (false).
A PersistentVolumeClaim and a PersistentVolume bind in a 1:1 relationship. Once your first PVC bound to the PV, that PV was taken and cannot be bound by another claim at that moment. You should create a second PV. Whether a released PV can be reused later depends on its reclaimPolicy and on what happens to the pod/deployment that used it.
I guess you are using Static provisioning.
A cluster administrator creates a number of PVs. They carry the details of the real storage, which is available for use by cluster users. They exist in the Kubernetes API and are available for consumption.
In this case you have to create one PV per PVC.
If you were using a cloud environment, you would use Dynamic provisioning.
When none of the static PVs the administrator created match a user's PersistentVolumeClaim, the cluster may try to dynamically provision a volume specially for the PVC. This provisioning is based on StorageClasses: the PVC must request a storage class and the administrator must have created and configured that class for dynamic provisioning to occur.
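As a sketch, a StorageClass configured for dynamic provisioning might look like the following. This is an assumption for illustration, not your actual class: the provisioner value depends entirely on your environment (here I use kubernetes.io/gce-pd, the legacy in-tree GCE persistent disk provisioner).

```yaml
# Hypothetical StorageClass enabling dynamic provisioning (illustration only).
# The provisioner must match your cluster's storage backend / CSI driver.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mlo-git
provisioner: kubernetes.io/gce-pd   # environment-specific; assumed for this sketch
parameters:
  type: pd-standard
reclaimPolicy: Delete
volumeBindingMode: Immediate
```

With such a class present, any PVC that requests storageClassName: mlo-git and finds no matching pre-created PV gets a volume provisioned for it automatically.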
For example, I tried to reproduce this on GKE: one PVC bound to the pre-created PV. As GKE uses Dynamic provisioning, when I defined only the second PVC, it used the StorageClass and automatically created a PV for it.
$ kubectl get pv,pvc -A
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
persistentvolume/pv-git                                     1Gi        RWO            Retain           Bound    mlo-dev/pvc-git     mlo-git                 15s
persistentvolume/pvc-e7a1e950-396b-40f6-b8d1-8dffc9a304d0   1Gi        RWO            Delete           Bound    mlo-stage/pvc-git   mlo-git                 6s

NAMESPACE   NAME                            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mlo-dev     persistentvolumeclaim/pvc-git   Bound    pv-git                                     1Gi        RWO            mlo-git        10s
mlo-stage   persistentvolumeclaim/pvc-git   Bound    pvc-e7a1e950-396b-40f6-b8d1-8dffc9a304d0   1Gi        RWO            mlo-git        9s
Solution
To fix this issue, you should create another PersistentVolume to bind the second PVC.
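A minimal sketch of such a second PV, assuming the same hostPath setup as your original manifest (the name pv-git-stage is made up for illustration):

```yaml
# Hypothetical second PV so the PVC in mlo-stage has its own volume to bind.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-git-stage   # made-up name for illustration
  labels:
    type: local
spec:
  storageClassName: mlo-git
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /git         # both PVs may point at the same host directory
```

After applying this, the pending PVC in mlo-stage should bind to it, since it matches the storageClassName, access mode, and requested size.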
For more details about binding you can check this topic. If you would like more information about PVCs, check this SO thread.
If a second PV doesn't help, please provide more details about your environment (Minikube/Kubeadm, K8s version, OS, etc.) and your StorageClass YAML.