I am seeing a very strange effect with a pod on Kubernetes: it tries to mount a volume of type emptyDir, but fails to do so. The pod's events list shows the following entries:
LAST SEEN FIRST SEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE
2m 10h 281 953fb7fe6825ce398f6243fbe2b2df9400d8cbe0-1530827425856-xbg84.153e978b9b811f46 Pod Warning FailedMount kubelet, ip-172-20-73-118.eu-central-1.compute.internal Unable to mount volumes for pod "953fb7fe6825ce398f6243fbe2b2df9400d8cbe0-1530827425856-xbg84_example(6cfbf40a-809d-11e8-bb05-0227730cc812)": timeout expired waiting for volumes to attach/mount for pod "example"/"953fb7fe6825ce398f6243fbe2b2df9400d8cbe0-1530827425856-xbg84". list of unattached/unmounted volumes=[workspace]
What's strange is that this works most of the time, but this time it failed. What could be the reason for this? And how can I figure out in more detail what went wrong?
Update: As requested in a comment, I have added the pod spec here:
apiVersion: v1
kind: Pod
metadata:
  name: 953fb7fe6825ce398f6243fbe2b2df9400d8cbe0-1530827425856-xbg84
  namespace: example
spec:
  containers:
  - args:
    - --context=/workspace/1b5c4fd2-bb39-4096-b055-52dc99d8da0e
    - --dockerfile=/workspace/1b5c4fd2-bb39-4096-b055-52dc99d8da0e/Dockerfile-broker
    - --destination=registry.example.com:443/example/953fb7fe6825ce398f6243fbe2b2df9400d8cbe0-broker:1530827425856
    image: gcr.io/kaniko-project/executor:732a2864f4c9f55fba71e4afd98f4fdd575479e6
    imagePullPolicy: IfNotPresent
    name: 953fb7fe6825ce398f6243fbe2b2df9400d8cbe0-1530827425856-broker
    volumeMounts:
    - mountPath: /kaniko/.docker/config.json
      name: config-json
      subPath: config.json
    - mountPath: /workspace
      name: workspace
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-mk89h
      readOnly: true
  - args:
    - --context=/workspace/1b5c4fd2-bb39-4096-b055-52dc99d8da0e
    - --dockerfile=/workspace/1b5c4fd2-bb39-4096-b055-52dc99d8da0e/Dockerfile-core
    - --destination=registry.example.com:443/example/953fb7fe6825ce398f6243fbe2b2df9400d8cbe0-core:1530827425856
    image: gcr.io/kaniko-project/executor:732a2864f4c9f55fba71e4afd98f4fdd575479e6
    imagePullPolicy: IfNotPresent
    name: 953fb7fe6825ce398f6243fbe2b2df9400d8cbe0-1530827425856-core
    volumeMounts:
    - mountPath: /kaniko/.docker/config.json
      name: config-json
      subPath: config.json
    - mountPath: /workspace
      name: workspace
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-mk89h
      readOnly: true
  - args:
    - --context=/workspace/1b5c4fd2-bb39-4096-b055-52dc99d8da0e
    - --dockerfile=/workspace/1b5c4fd2-bb39-4096-b055-52dc99d8da0e/Dockerfile-flows
    - --destination=registry.example.com:443/example/953fb7fe6825ce398f6243fbe2b2df9400d8cbe0-flows:1530827425856
    image: gcr.io/kaniko-project/executor:732a2864f4c9f55fba71e4afd98f4fdd575479e6
    imagePullPolicy: IfNotPresent
    name: 953fb7fe6825ce398f6243fbe2b2df9400d8cbe0-1530827425856-flows
    volumeMounts:
    - mountPath: /kaniko/.docker/config.json
      name: config-json
      subPath: config.json
    - mountPath: /workspace
      name: workspace
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-mk89h
      readOnly: true
  initContainers:
  - command:
    - sh
    - -c
    - echo ${CONFIG_JSON} | base64 -d > /config-json/config.json
    env:
    - name: CONFIG_JSON
      value: […]
    image: alpine:3.7
    imagePullPolicy: IfNotPresent
    name: store-config-json
    volumeMounts:
    - mountPath: /config-json
      name: config-json
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-mk89h
      readOnly: true
  restartPolicy: Never
  volumes:
  - emptyDir: {}
    name: config-json
  - name: workspace
    persistentVolumeClaim:
      claimName: example
  - name: default-token-mk89h
    secret:
      defaultMode: 420
      secretName: default-token-mk89h
The workspace volume that fails to mount is not an emptyDir in your pod spec; it is a PersistentVolumeClaim. I would suggest replacing the PVC with just emptyDir: {} and checking whether that solves it for you.
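As a minimal sketch (assuming the build context under /workspace is produced inside the pod and does not need to outlive it), the volumes section would then look like this:

volumes:
- emptyDir: {}
  name: config-json
- emptyDir: {}          # was: persistentVolumeClaim with claimName example
  name: workspace
- name: default-token-mk89h
  secret:
    defaultMode: 420
    secretName: default-token-mk89h

With both volumes as emptyDir, the kubelet only has to create local directories on the node, so there is nothing to attach and the mount timeout on workspace should go away.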
If you still want to pursue the PVC and PV approach, provide their manifests and kubectl describe output. It's possible that, for example, your PVC is bound to a PV backed by a directory on a different host than the one the pod was scheduled to.
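To collect those, something along these lines should work (the PVC is named example in the example namespace according to the pod spec above; <pv-name> is a placeholder for the PV the claim is bound to, visible in the VOLUME column of the PVC output):

kubectl -n example get pvc example -o yaml
kubectl -n example describe pvc example
kubectl get pv <pv-name> -o yaml
kubectl describe pv <pv-name>
kubectl -n example describe pod 953fb7fe6825ce398f6243fbe2b2df9400d8cbe0-1530827425856-xbg84

The PV describe output in particular shows its backing storage and any node affinity, which is where a host mismatch like the one described above would show up.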