This is getting out of hand... my GKE cluster has good specs, yet I'm getting timeouts waiting for mount paths to be created. I posted this issue on GitHub, but was told it would be better to ask on Stack Overflow. Please help me fix this:
2m 2m 1 {scheduler } Scheduled Successfully assigned mongodb-shard1-master-gp0qa to gke-cluster-1-micro-a0f27b19-node-0p2j
1m 1m 1 {kubelet gke-cluster-1-micro-a0f27b19-node-0p2j} FailedMount Unable to mount volumes for pod "mongodb-shard1-master-gp0qa_default": Could not attach GCE PD "shard1-node1-master". Timeout waiting for mount paths to be created.
1m 1m 1 {kubelet gke-cluster-1-micro-a0f27b19-node-0p2j} FailedSync Error syncing pod, skipping: Could not attach GCE PD "shard1-node1-master". Timeout waiting for mount paths to be created.
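For reference, events like the ones above can be pulled for the failing pod with kubectl (the pod name here is taken from the log):

```shell
# Show the pod's status, volumes, and recent events, including FailedMount.
kubectl describe pod mongodb-shard1-master-gp0qa
```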
It's possible that your GCE service account is not authorized on your project. Try re-adding $YOUR_PROJECT_NUMBER-compute@developer.gserviceaccount.com as "Can edit" on the Permissions page of the Developers Console.
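If you prefer the CLI, an equivalent grant can be sketched with gcloud; the project ID and number below are placeholders for your own values:

```shell
# Hypothetical values -- substitute your own project ID and number.
PROJECT_ID=my-project
PROJECT_NUMBER=123456789012

# Grant the default Compute Engine service account edit rights on the project.
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="serviceAccount:${PROJECT_NUMBER}-compute@developer.gserviceaccount.com" \
  --role="roles/editor"
```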
This is an old question, but I'd like to share how I fixed the problem: I manually unmounted the problematic disks from their host via the Google Cloud Console.
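The same detach can be done from the command line. The instance and disk names below are taken from the events in the question; the zone is an assumption, so substitute your own:

```shell
# Detach the stuck persistent disk from the node that still holds it.
# Zone is a placeholder -- use the zone your node actually runs in.
gcloud compute instances detach-disk gke-cluster-1-micro-a0f27b19-node-0p2j \
  --disk=shard1-node1-master \
  --zone=us-central1-a
```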
I ran into this recently, and the issue ended up being that the application running inside the Docker container was shutting down immediately. This caused GCE to try to restart it, but the restart would fail when GCE tried to attach the disk (it was already attached).
So it seems like a bit of a bug in GCE, but don't run down the rabbit hole trying to figure that out. I ended up running things locally and debugging the crash using local volume mounts.
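Reproducing the crash locally with a bind mount might look like this; the image name and paths are hypothetical:

```shell
# Run the same image locally, substituting a host directory for the GCE PD,
# so the crash-on-start can be observed directly in the foreground.
docker run --rm -it \
  -v /tmp/mongo-data:/data/db \
  my-registry/mongodb-shard:latest
```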
This problem has been documented several times, for example here: https://github.com/kubernetes/kubernetes/issues/14642. Kubernetes v1.3.0 should include a fix.
As a workaround (in GCP), you can restart your VMs.
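One way to restart the affected node from the CLI; the instance name is taken from the events in the question, and the zone is an assumption:

```shell
# Hard-reset the node VM (a stop/start pair works as well).
gcloud compute instances reset gke-cluster-1-micro-a0f27b19-node-0p2j \
  --zone=us-central1-a
```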
Hope this helps!