I'm experiencing a strange problem using k8s 1.3.2 on GCE. I have a 100GB disk set up, and a valid (and Bound) PersistentVolume. However, my PersistentVolumeClaim is showing up with a capacity of 0, even though its status is Bound, and the pod that is trying to use it is stuck in ContainerCreating.
Hopefully the gcloud and kubectl outputs below summarise the problem:
$ gcloud compute disks list
NAME                                 ZONE            SIZE_GB  TYPE         STATUS
disk100-001                          europe-west1-d  100      pd-standard  READY
gke-unrest-micro-pool-199acc6c-3p31  europe-west1-d  100      pd-standard  READY
gke-unrest-micro-pool-199acc6c-4q55  europe-west1-d  100      pd-standard  READY
$ kubectl get pv
NAME             CAPACITY   ACCESSMODES   STATUS    CLAIM                            REASON    AGE
pv-disk100-001   100Gi      RWO           Bound     default/graphite-statsd-claim              2m
$ kubectl get pvc
NAME                    STATUS    VOLUME           CAPACITY   ACCESSMODES   AGE
graphite-statsd-claim   Bound     pv-disk100-001   0                        3m
$ kubectl describe pvc
Name: graphite-statsd-claim
Namespace: default
Status: Bound
Volume: pv-disk100-001
Labels: <none>
Capacity: 0
Access Modes:
$ kubectl describe pv
Name: pv-disk100-001
Labels: <none>
Status: Bound
Claim: default/graphite-statsd-claim
Reclaim Policy: Recycle
Access Modes: RWO
Capacity: 100Gi
Message:
Source:
    Type:       GCEPersistentDisk (a Persistent Disk resource in Google Compute Engine)
    PDName:     disk100-001
    FSType:     ext4
    Partition:  0
    ReadOnly:   false
# Events for pod that is supposed to mount this volume:
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
6h 1m 183 {kubelet gke-unrest-micro-pool-199acc6c-4q55} Warning FailedMount Unable to mount volumes for pod "graphite-statsd-1873928417-i05ef_default(bf9fa0e5-4d8e-11e6-881c-42010af001fe)": timeout expired waiting for volumes to attach/mount for pod "graphite-statsd-1873928417-i05ef"/"default". list of unattached/unmounted volumes=[graphite-data]
6h 1m 183 {kubelet gke-unrest-micro-pool-199acc6c-4q55} Warning FailedSync Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "graphite-statsd-1873928417-i05ef"/"default". list of unattached/unmounted volumes=[graphite-data]
# Extract from deploy yaml file:
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-disk100-001
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  gcePersistentDisk:
    pdName: disk100-001
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: graphite-statsd-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
---
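The Deployment itself isn't pasted in full, but the relevant volume wiring looks roughly like the sketch below; the container name, image, and mount path are placeholders rather than copies from the actual file. The volume name graphite-data and claim name graphite-statsd-claim match the FailedMount events and PVC above.
# Rough sketch of the Deployment's volume wiring (container name, image and mountPath are placeholders):
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: graphite-statsd
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: graphite-statsd
    spec:
      containers:
        - name: graphite-statsd              # placeholder container name
          image: graphiteapp/graphite-statsd # placeholder image
          volumeMounts:
            - name: graphite-data            # same volume name as in the FailedMount events
              mountPath: /opt/graphite/storage  # placeholder mount path
      volumes:
        - name: graphite-data
          persistentVolumeClaim:
            claimName: graphite-statsd-claim # the PVC shown above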
Any help gratefully received!
Dan, the first issue ("PVC capacity 0") looks like a bug. I've opened https://github.com/kubernetes/kubernetes/issues/29425; you can track it there.
The second issue sounds like https://github.com/kubernetes/kubernetes/issues/29166, which is currently under investigation. Feel free to add your repro information and logs there, and I'll take a look.
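For the logs, something like the following usually captures the useful state; the node name and zone are taken from your output above, and the kubelet log location varies by node image, so treat this as a sketch and adjust as needed:
$ kubectl describe pod graphite-statsd-1873928417-i05ef
$ kubectl get events --namespace=default
# On the node that reported the FailedMount events:
$ gcloud compute ssh gke-unrest-micro-pool-199acc6c-4q55 --zone europe-west1-d
$ sudo tail -n 200 /var/log/kubelet.log    # or: sudo journalctl -u kubelet, depending on the node image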