I am using OpenShift with GlusterFS as the storage system. Dynamic provisioning works very well, but it always rounds the allocated capacity up to the next GB value. E.g., I request a volume of 400MB, but a volume of 1GB is created.
Is this behavior configurable? I set up OpenShift via the advanced installation with openshift/ansible.
This is how the underlying Kubernetes works. When you have static volumes defined, the size requested in the claim is used to grab the best match available. So if there isn't one of the exact size, it will grab the next size up. Kubernetes isn't able to split up a persistent volume and give just part of it to you. It also doesn't enforce the requested size as a limit, so although you request 400MB, you will be able to use up to the 1GB the persistent volume provides.
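To make the matching concrete, here is a minimal Go sketch of that smallest-fit behavior, using the same resource.Quantity type Kubernetes uses for sizes. The list of PV capacities is made up for illustration, and the real binder also considers access modes, storage class and selectors; only the size logic is shown:

    package main

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
        // The claim asks for 400Mi; the static PVs on offer are hypothetical.
        request := resource.MustParse("400Mi")
        pvSizes := []string{"1Gi", "5Gi", "10Gi"}

        // Smallest fit: bind to the smallest PV whose capacity covers the request.
        var best resource.Quantity
        for _, s := range pvSizes {
            q := resource.MustParse(s)
            if q.Cmp(request) >= 0 && (best.IsZero() || q.Cmp(best) < 0) {
                best = q
            }
        }
        fmt.Printf("claim for %s binds to the %s volume\n", request.String(), best.String())
        // Prints: claim for 400Mi binds to the 1Gi volume
    }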
If you are trying to be economical with storage space, and the storage is of type ReadWriteMany, you could use one persistent volume claim for multiple applications by specifying a different sub path of the volume to mount into each container. Just realise there is no quota to prevent one application from using up all the storage of the persistent volume. So be careful, for example, with sharing a persistent volume between a database and some other application that could run rampant and use all the space; the last thing you want is to run out of space for the database.
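A minimal Go sketch of the sub path approach, assuming a ReadWriteMany claim named "shared-pvc" (the claim name, images and paths are all hypothetical). The two apps are shown as containers in a single pod for brevity, but the same mounts work across separate deployments since the claim is ReadWriteMany:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // sharedPodSpec builds a pod where two containers share one RWX claim,
    // each mounting only its own subdirectory of the volume.
    func sharedPodSpec() corev1.PodSpec {
        return corev1.PodSpec{
            Volumes: []corev1.Volume{{
                Name: "shared",
                VolumeSource: corev1.VolumeSource{
                    PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
                        ClaimName: "shared-pvc", // hypothetical claim name
                    },
                },
            }},
            Containers: []corev1.Container{
                {
                    Name:  "app-a",
                    Image: "example/app-a", // hypothetical image
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "shared",
                        MountPath: "/data",
                        SubPath:   "app-a", // only this subdirectory is visible
                    }},
                },
                {
                    Name:  "app-b",
                    Image: "example/app-b", // hypothetical image
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "shared",
                        MountPath: "/data",
                        SubPath:   "app-b",
                    }},
                },
            },
        }
    }

    func main() {
        spec := sharedPodSpec()
        fmt.Println(spec.Containers[0].VolumeMounts[0].SubPath) // app-a
    }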
Echoing Graham's reply: yes, it's how Kubernetes works, but the reasons might lie in GlusterFS and/or Heketi. The source code for the provisioner contains the following lines:
    // GlusterFS/heketi creates volumes in units of GiB.
    sz, err := volutil.RoundUpToGiBInt(capacity)
So the answer apparently is: you can't change the rounding unit, since it's hard-coded. An explanation of whether this restriction is architectural or merely a configuration decision is missing, but the comment in the code pinpoints the rounding to GlusterFS/Heketi's allocation algorithms.
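For completeness, here is a small sketch of what that rounding does. roundUpToGiB is my own re-implementation of the arithmetic, not the actual volutil helper, and the request sizes are just examples:

    package main

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/api/resource"
    )

    // roundUpToGiB mimics what volutil.RoundUpToGiBInt does: round the
    // requested size up to the next whole GiB.
    func roundUpToGiB(q resource.Quantity) int64 {
        const gib int64 = 1 << 30
        return (q.Value() + gib - 1) / gib
    }

    func main() {
        for _, s := range []string{"400Mi", "1Gi", "1500Mi"} {
            fmt.Printf("request %-6s -> Heketi volume of %d GiB\n", s, roundUpToGiB(resource.MustParse(s)))
        }
        // request 400Mi  -> Heketi volume of 1 GiB
        // request 1Gi    -> Heketi volume of 1 GiB
        // request 1500Mi -> Heketi volume of 2 GiB
    }

So any request that isn't a whole number of GiB gets bumped up to the next GiB before Heketi is asked to create the volume, which is exactly the 400MB-to-1GB behavior from the question.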