I am experiencing a "special device not found" error when trying to mount an EBS volume to a Kubernetes pod. Here is the pod's YAML file:
apiVersion: v1
kind: Pod
metadata:
  name: test-ebs
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-ebs
      name: test-volume
  volumes:
  - name: test-volume
    # This AWS EBS volume must already exist.
    awsElasticBlockStore:
      volumeID: aws://us-west-2a/vol-xxxxxxxx
      fsType: ext4
After creating the pod, its status is stuck at "ContainerCreating". The "kubectl describe pod" output indicates a "device not found" error:
SetUp failed for volume "kubernetes.io/aws-ebs/8e830149-9c95-11e6-b969-0691ac4fce05-test-volume" (spec.Name: "test-volume") pod "8e830149-9c95-11e6-b969-0691ac4fce05" (UID: "8e830149-9c95-11e6-b969-0691ac4fce05") with: mount failed: exit status 32 Mounting arguments: /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/us-west-2a/vol-xxxxxxxx /var/lib/kubelet/pods/8e830149-9c95-11e6-b969-0691ac4fce05/volumes/kubernetes.io~aws-ebs/test-volume [bind]
Output: mount: special device /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/us-west-2a/vol-xxxxxxxx does not exist
Does anyone know why this happens? Thanks in advance.
Your volumeID should be just "vol-xxxxxxxx", not "aws://us-west-2a/vol-xxxxxxxx". Kubernetes figures out the region and availability zone from the cluster's cloud provider settings, so the mount path the kubelet builds from the prefixed form points at a device that does not exist.
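For reference, here is a minimal sketch of the corrected volumes section, using the same placeholder volume ID from your spec:

  volumes:
  - name: test-volume
    # This AWS EBS volume must already exist, in the same
    # availability zone as the node the pod is scheduled on.
    awsElasticBlockStore:
      volumeID: vol-xxxxxxxx   # plain volume ID, no aws://<zone>/ prefix
      fsType: ext4

Since a pod's volumes cannot be edited in place, delete and recreate the pod after changing the volumeID so the kubelet retries the attach and mount.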