As part of the PetSet definition, volumeClaimTemplates are defined so that Kubernetes dynamically generates Persistent Volume Claims. For example:
volumeClaimTemplates:
- metadata:
    name: datadir
    annotations:
      volume.alpha.kubernetes.io/storage-class: anything
  spec:
    accessModes: [ "ReadWriteOnce" ]
    resources:
      requests:
        storage: 24Gi
However, I already have a few Persistent Volumes defined:
# kubectl get pv
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS   CLAIM          REASON   AGE
pv-1-rw   24Gi       RWO           Retain          Bound    rnd/pvc-1-rw            1h
pv-2-rw   24Gi       RWO           Retain          Bound    rnd/pvc-2-rw            6d
pv-3-rw   24Gi       RWO           Retain          Bound    rnd/pvc-3-rw            6d
...
I would like Kubernetes to choose persistent volumes from the existing ones rather than dynamically creating new ones.
I'm using Kubernetes 1.4.3. Does anyone know how to do that?
A simple solution is to manually create the PVCs and PVs yourself. If your PVCs are named html-nginx-0 through html-nginx-N, the volume claim template will use them instead of creating new ones, and everything will work fine.
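As a minimal sketch of such a pre-created claim, assuming a PetSet named nginx with a volume claim template named html (the claim name follows the <template name>-<petset name>-<ordinal> pattern), and using the PVC's volumeName field to pin the claim to one of the existing PVs from the question:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: html-nginx-0          # <template name>-<petset name>-<ordinal>
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 24Gi
  volumeName: pv-1-rw         # bind this claim to a specific existing PV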
You can use kind (https://kind.sigs.k8s.io/), where dynamic volume provisioning is enabled, to generate example YAML for PVs and PVCs, then adapt the PVs to your storage.
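A rough sketch of that workflow, assuming kind and kubectl are installed; pvc.yaml is a placeholder name for any claim manifest:

kind create cluster
kubectl apply -f pvc.yaml    # any claim; with WaitForFirstConsumer binding,
                             # a pod must mount it before a PV is provisioned
kubectl get pv -o yaml       # dump the generated PV and adapt it to your storage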
Here's how to select a volume by label:
volumeClaimTemplates:
- metadata:
    name: data
  spec:
    accessModes: [ "ReadWriteOnce" ]
    selector:
      matchLabels:
        data-label: database-1
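For the selector to match, the existing PV must carry the corresponding label. One way to add it, using one of the volumes from the question as an example:

kubectl label pv pv-1-rw data-label=database-1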
volumeClaimTemplates is an array of PersistentVolumeClaim. You can try to define them using a selector and label your existing volumes accordingly, e.g.:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv0001
  labels:
    foo: foo
    bar: bar
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 5Gi
  hostPath:
    path: /data/pv0001/
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv0002
  labels:
    foo: foo
    bar: bar
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 5Gi
  hostPath:
    path: /data/pv0002/
---
kind: Service
apiVersion: v1
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
  selector:
    app: nginx
---
kind: PetSet
apiVersion: apps/v1alpha1
metadata:
  name: nginx
spec:
  serviceName: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: gcr.io/google_containers/nginx-slim:0.8
        ports:
        - containerPort: 80
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: html
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
      selector:
        matchLabels:
          foo: foo
          bar: bar
Of course, the volumes must be Available for binding:
$ kubectl get pvc html-nginx-0
NAME           STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
html-nginx-0   Bound     pv0002    5Gi        RWO           1m

$ kubectl get pv
NAME     CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM                  REASON   AGE
pv0001   5Gi        RWO           Retain          Available                                   2m
pv0002   5Gi        RWO           Retain          Bound       default/html-nginx-0            2m