I have a GCE Container Cluster composed of 3 nodes. On every node I run a Pod like this one:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: none
        track: stable
    spec:
      containers:
      - name: hello
        image: gcr.io/persistent-volumes-test/alpine:v1.2
        resources:
          limits:
            cpu: 0.2
            memory: "10Mi"
        volumeMounts:
        - mountPath: "/persistentDisk"
          name: persistent-disk
        ports:
        - containerPort: 65535
          name: anti-affinity
          hostPort: 65535
      volumes:
      - name: persistent-disk
        persistentVolumeClaim:
          claimName: myclaim
The trick of defining the "anti-affinity" port ensures that every Pod runs on a different node. I've created 3 PersistentVolumes defined like this:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: persistent-volume-1
  annotations:
    volume.beta.kubernetes.io/storage-class: "slow"
  labels:
    release: "dev"
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  gcePersistentDisk:
    pdName: persistent-disk-1
    fsType: ext4
and they are deployed correctly:
NAME                  CAPACITY   ACCESSMODES   STATUS      CLAIM             REASON   AGE
persistent-volume-1   10Gi       RWO           Released    default/myclaim            13h
persistent-volume-2   10Gi       RWO           Released    default/myclaim            5h
persistent-volume-3   10Gi       RWO           Available                              5h
the claim definition is the following:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
  annotations:
    volume.beta.kubernetes.io/storage-class: "slow"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      release: "dev"
What I noticed is that the claim binds to only one of the volumes I created, so only one of my Pods can be deployed successfully. What I expected was that the claim, when used by a Pod, would find one available volume to bind to, matching the selector rules. In other words, my interpretation of PersistentVolumeClaims was that a Pod uses a claim to search for an available volume in a set of PersistentVolumes matching the PVC spec. So here is my question:
Can the same PersistentVolumeClaim be used by different instances of the same Pod to connect to different PersistentVolumes? Or is the claim bound to one and only one volume once it is created, unable to bind to any other volume?
If the second answer is the right one, how can I make a Pod bind dynamically to a PersistentVolume (chosen from a set) when deployed, without creating a claim per Pod and thus avoiding having to create a specific Pod for every volume I need to connect to?
A PersistentVolumeClaim reserves a specific instance of storage that satisfies its request. Using that same PersistentVolumeClaim in multiple Pods will attempt to use the same bound PersistentVolume in each of the Pods, which is not possible in the case of a gcePersistentDisk.
Try creating a separate PersistentVolumeClaim for each Pod.
The Lifecycle section of the Persistent Volumes doc provides a nice overview.
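As a rough sketch of the one-claim-per-Pod approach (the claim name and the `volumeName` pinning are illustrative assumptions, reusing the volume names from the question), each claim can be pointed at one specific PersistentVolume, and each Pod then references its own claim:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim-2                     # hypothetical name; one claim per Pod
  annotations:
    volume.beta.kubernetes.io/storage-class: "slow"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  volumeName: persistent-volume-2     # pins this claim to one specific PV
```

With one claim per volume, each Deployment (with `replicas: 1`) mounts a different claim, instead of three replicas sharing a single claim.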