I have a 3-node CoreOS Kubernetes cluster up and running.
I want to use PersistentVolumes (PV) from a standalone NFS server.
nfs.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kube1
spec:
  capacity:
    storage: 9.5G
  accessModes:
    - ReadWriteMany
  nfs:
    path: /mnt/nfs/kube1
    server: 10.3.0.3
claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc2-1
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1G
kubectl get pv
kube1    <none>   9500M   RWX   Released   default/pvc2-1
kubectl get pvc
pvc2-1   <none>   Bound   kube1   9500M   RWX
So why is the PVC created with the full capacity of the PV? I assumed that a PVC is just a part of a PV; otherwise it seems pretty useless.
Regards
cdpb
> So why is the PVC created with the full capacity of the PV? I assumed that a PVC is just a part of a PV; otherwise it seems pretty useless.
It's not useless; it's designed to claim the persistent volume. requests says "I need at least this much storage", just like it does for compute resources on Pods.
If you had multiple persistent volumes, this would be clearer: the PVC won't be bound to a PV smaller than 1G, but it will get this 9.5G PV, or any other of sufficient size (sketched below).
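For illustration, here is a hypothetical pair of PVs (the names kube-small and kube-big and their paths are made up). A PVC requesting 1G can only bind to the second one, and it gets the full 9.5G when it does:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: kube-small          # hypothetical PV, too small for the claim
spec:
  capacity:
    storage: 500M           # < 1G, so pvc2-1 cannot bind here
  accessModes:
    - ReadWriteMany
  nfs:
    path: /mnt/nfs/small
    server: 10.3.0.3
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kube-big            # hypothetical PV, large enough
spec:
  capacity:
    storage: 9.5G           # >= 1G, so pvc2-1 binds here, at full size
  accessModes:
    - ReadWriteMany
  nfs:
    path: /mnt/nfs/big
    server: 10.3.0.3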
If you want to dynamically provision a specific storage size, you should create a StorageClass backed by a provisioner that supports it. If you want to use NFS, the in-tree plugin doesn't support dynamic provisioning, but there is an external nfs-provisioner in kubernetes-incubator that does (see the sketch below).
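Roughly, assuming you have deployed the incubator nfs-provisioner and registered it under the name example.com/nfs (the names nfs-dynamic and pvc-dynamic here are placeholders), the StorageClass and claim would look like this; the provisioner then creates a PV matching the requested size instead of binding to a pre-made one:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nfs-dynamic
provisioner: example.com/nfs   # must match the name the deployed provisioner registers
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-dynamic
spec:
  storageClassName: nfs-dynamic
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1G              # the provisioner creates a PV of this size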
As far as I've seen, that's the way it should work: the claim is for the entire volume. The part that confused me at first, too, was that the resources.requests.storage value is only a minimum that the claim requires. I use this with Ceph, and when Pods bind to the block device, they take the whole volume. A minimal Pod that mounts the claim is sketched below.
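To see this from the Pod side, here is a minimal sketch (the Pod name and image are placeholders) that mounts the claim from claim.yaml above; the mount exposes the full 9.5G volume, not the 1G that was requested:

apiVersion: v1
kind: Pod
metadata:
  name: pv-test              # placeholder name
spec:
  containers:
    - name: shell
      image: busybox         # placeholder image
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data   # run 'df -h /data' inside to see the full capacity
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: pvc2-1    # the claim from claim.yaml above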