I'm running a single-master/single-node Kubernetes cluster in a virtual machine, using a hostPath PersistentVolume for a deployed Postgres database.
My PersistentVolume has the following configuration:
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    volume.beta.kubernetes.io/storage-class: postgres
  labels:
    type: local
  name: postgres-storage
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  hostPath:
    path: /data/postgres

Also, I have a PersistentVolumeClaim currently bound to that volume, requesting all of the capacity (spec.resources.requests.storage: 1Gi).
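For reference, the bound claim looks roughly like the sketch below (the claim name postgres-storage-claim is made up; the storage-class annotation mirrors the one on the PV):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    volume.beta.kubernetes.io/storage-class: postgres
  name: postgres-storage-claim  # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi  # requests all of the PV's declared capacity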
Recently, the Postgres database exceeded spec.capacity.storage in size, without causing any problems:
$ du -hs /data/postgres # Powers of 1024
1.2G /data/postgres
$ du -hs /data/postgres --si # Powers of 1000
1.3G /data/postgres

My question is:
Does spec.capacity.storage really matter when using a hostPath volume, or is the volume in fact capped by the underlying partition's size/capacity? (i.e., how will Kubernetes handle this?)

According to @wongma7 on the Kubernetes GitHub page:
this is working as intended, kube can't/won't enforce the capacity of PVs, the capacity field on PVs is just a label. It's up to the "administrator" i.e. the creator of the PV to label it accurately so that when users create PVCs that needs >= X Gi, they get what they want.
You can find the original discussion here.
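One way to observe this on a live cluster (commands only; output omitted since it varies) is to compare what the API object declares with what is actually on disk:

# Declared capacity, which is just metadata recorded on the PV object
$ kubectl get pv postgres-storage -o jsonpath='{.spec.capacity.storage}'

# Actual on-disk usage of the hostPath directory
$ du -hs /data/postgres

# The PV and PVC remain Bound even though usage exceeds the declared 1Gi
$ kubectl get pv postgres-storage
$ kubectl get pvc

Note that the capacity field still matters at bind time: a PVC requesting more than 1Gi would never be matched to this PV in the first place.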
Also, it's covered in the official Volume/Resources documentation:
There is no limit on how much space an emptyDir or hostPath volume can consume, and no isolation between Containers or between Pods.

In the future, we expect that emptyDir and hostPath volumes will be able to request a certain amount of space using a resource specification, and to select the type of media to use, for clusters that have several media types.
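So in practice the only hard cap is the filesystem backing the hostPath directory. A quick way to check which partition that is, and (if a real limit is wanted) one way to impose one, is sketched below; the image path and size are illustrative, and this should be done before the database populates the directory:

# Show the partition (and its size/free space) backing the hostPath
$ df -h /data/postgres

# Optional: enforce an actual 1Gi cap by backing the path with a fixed-size
# loopback filesystem (illustrative; adjust path and size to your setup)
$ dd if=/dev/zero of=/var/lib/postgres-vol.img bs=1M count=1024
$ mkfs.ext4 -F /var/lib/postgres-vol.img
$ sudo mount -o loop /var/lib/postgres-vol.img /data/postgres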