I deployed a Helm chart (a StatefulSet) with 1 pod and 2 containers; one of the containers has a PV (ReadWriteOnce) attached. On upgrade, it takes about 30 minutes (7 failed mount attempts) for the pod to come back up, so the service is down for roughly 30 minutes.
Some context:
Relevant sections of the YAML file:
volumeMounts:
  - mountPath: /app/data
    name: prod-data
volumeClaimTemplates:
  - metadata:
      creationTimestamp: null
      name: prod-data
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 500Gi
      storageClassName: standard
      volumeMode: Filesystem
The error message:
Unable to mount volumes for pod "foo" timeout expired waiting for volumes to attach or mount for pod "foo". list of unmounted volumes=[foo] list of unattached volumes [foo default-token-foo]
Some additional context: this is what happens after triggering the StatefulSet upgrade.
Nothing has changed yet:
Name: prod-data-prod-0
Namespace: prod
StorageClass: standard
Status: Bound
Volume: pvc-16f49d12-f644-11e9-952a-4201ac100008
Labels: app=prod
        release=prod
Annotations: pv.kubernetes.io/bind-completed: yes
             pv.kubernetes.io/bound-by-controller: yes
             volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/gce-pd
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 500Gi
Access Modes: RWO
VolumeMode: Filesystem
Mounted By: prod-0
Events: <none>
Then the first error appears:
Unable to mount volumes for pod "prod-0_prod(89fb0cf5-0008-11ea-b349-4201ac100009)": timeout expired waiting for volumes to attach or mount for pod "prod"/"prod-0". list of unmounted volumes=[prod-data]. list of unattached volumes=[prod-data default-token-4624v]
The PVC describe output is still identical to the one above.
After the 2nd failed mount, this is the pod description:
Conditions:
  Type              Status
  Initialized       False
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  vlapi-prod-data:
    Type:        PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:   prod-data-prod-0
    ReadOnly:    false
  default-token-4624v:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-4624v
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
FailedMount number 3: still no change to the PVC description; these are the events as reported by the pod:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 8m44s default-scheduler Successfully assigned prod/prod-0 to gke-vlgke-a-default-pool-312c60b0-p8lb
Warning FailedMount 2m8s (x3 over 6m41s) kubelet, gke-vlgke-a-default-pool-312c60b0-p8lb Unable to mount volumes for pod "prod-0_prod(89fb0cf5-0008-11ea-b349-4201ac100009)": timeout expired waiting for volumes to attach or mount for pod "prod"/"prod-0". list of unmounted volumes=[prod-data]. list of unattached volumes=[prod-data default-token-4624v]
Warning FailedMount 48s (x4 over 7m38s)
Warning FailedMount 13s (x5 over 9m17s)
The PV description at this point:
Name: pvc-16f49d12-f644-11e9-952a-4201ac100008
Labels: failure-domain.beta.kubernetes.io/region=europe-west1
        failure-domain.beta.kubernetes.io/zone=europe-west1-d
Annotations: kubernetes.io/createdby: gce-pd-dynamic-provisioner
             pv.kubernetes.io/bound-by-controller: yes
             pv.kubernetes.io/provisioned-by: kubernetes.io/gce-pd
Finalizers: [kubernetes.io/pv-protection]
StorageClass: standard
Status: Bound
Claim: prod/prod-data-prod-0
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 500Gi
Node Affinity:
  Required Terms:
    Term 0: failure-domain.beta.kubernetes.io/zone in [europe-west1-d]
            failure-domain.beta.kubernetes.io/region in [europe-west1]
Message:
Source:
  Type:      GCEPersistentDisk (a Persistent Disk resource in Google Compute Engine)
  PDName:    gke-vlgke-a-0d42343f-d-pvc-16f49d12-f644-11e9-952a-4201ac100008
  FSType:    ext4
  Partition: 0
  ReadOnly:  false
FailedMount 47s (x6 over 12m)
FailedMount 11s (x7 over 13m)
FailedMount 33s (x8 over 16m)
FailedMount 9s (x9 over 18m)
FailedMount 0s (x10 over 20m)
There are roughly 2 minutes between the FailedMount timeouts. The pod events at this point:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 24m default-scheduler Successfully assigned prod/prod-0 to gke-vlgke-a-default-pool-312c60b0-p8lb
Warning FailedMount 2m4s (x10 over 22m) kubelet, gke-vlgke-a-default-pool-312c60b0-p8lb Unable to mount volumes for pod "prod-0_prod(89fb0cf5-0008-11ea-b349-4201ac100009)": timeout expired waiting for volumes to attach or mount for pod "prod"/"prod-0". list of unmounted volumes=[prod-data]. list of unattached volumes=[prod-data default-token-4624v]
Normal Pulling 11s kubelet, gke-gke-default-pool-312c60b0-p8lb Pulling image "gcr.io/foo-251818/foo:2019-11-05"
The 11th mount attempt finally worked; I could not catch any change in the PVC description.
One possibility is that your pod's spec.securityContext.runAsUser and spec.securityContext.fsGroup are set to something other than 0 (non-root). In that case Kubernetes recursively changes the ownership and permissions of every file on the volume, which can take a long time on a large volume. Try setting them in your pod definition to:
spec:
  securityContext:
    runAsUser: 0
    fsGroup: 0
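In a StatefulSet this block belongs to the pod template, i.e. spec.template.spec.securityContext. A minimal sketch of where it sits, assuming a layout similar to yours (the StatefulSet, service, and container names here are illustrative, not taken from your chart):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: prod                      # illustrative name
spec:
  serviceName: prod               # illustrative headless service name
  replicas: 1
  selector:
    matchLabels:
      app: prod
  template:
    metadata:
      labels:
        app: prod
    spec:
      securityContext:            # pod-level securityContext
        runAsUser: 0              # run as root so the kubelet skips the recursive ownership change
        fsGroup: 0
      containers:
        - name: app               # illustrative container name
          image: gcr.io/foo-251818/foo:2019-11-05
          volumeMounts:
            - mountPath: /app/data
              name: prod-data
  volumeClaimTemplates:
    - metadata:
        name: prod-data
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: standard
        resources:
          requests:
            storage: 500Gi

If the container has to run as non-root, newer Kubernetes versions also accept fsGroupChangePolicy: OnRootMismatch in the same pod-level securityContext, which skips the recursive permission change when the ownership of the volume root already matches.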
Another possibility is a mismatch of attributes (access modes, capacity) between the PVC and the PV. Also, running multiple pods with RWO PVCs can create contention if you only have a single PV of that kind defined.
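To see what has to line up for that second possibility: a PVC only binds and mounts cleanly when its access modes, storage class, and volume mode are compatible with the PV, and the PV's capacity covers the request. A minimal sketch of a statically provisioned pair with matching attributes (all names here are illustrative):

# The PVC binds only if these fields are compatible with the PV.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv              # illustrative name
spec:
  capacity:
    storage: 500Gi              # must be at least the PVC request
  accessModes:
    - ReadWriteOnce             # must include the mode the PVC asks for
  storageClassName: standard    # must match the PVC's storageClassName
  volumeMode: Filesystem        # must match the PVC's volumeMode
  persistentVolumeReclaimPolicy: Retain
  gcePersistentDisk:
    pdName: example-disk        # illustrative GCE PD name
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc             # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  volumeMode: Filesystem
  resources:
    requests:
      storage: 500Gi

Also keep in mind that ReadWriteOnce only restricts the volume to a single node, so two pods scheduled to different nodes that reference the same RWO volume will run into exactly this kind of attach timeout.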