Node has no available volume zone in AWS EKS

1/30/2019

I am trying to create a pod but I get the following error:

0/3 nodes are available: 1 node(s) had no available volume zone.

I tried attaching more volumes, but the error is still the same.

Warning FailedScheduling 2s (x14 over 42s) default-scheduler 0/3 nodes are available: 1 node(s) had no available volume zone, 2 node(s) didn't have free ports for the requested pod ports.

-- Madhurima Mishra
amazon-eks
kubernetes

1 Answer

5/2/2019

My problem was that the AWS EC2 Volume and the Kubernetes PersistentVolume (PV) state had somehow gotten out of sync / corrupted. Kubernetes believed there was a bound PV, while the EC2 Volume showed as "available", i.e. not mounted to any worker node. Update: the volume was in a different availability zone than either of the two EC2 nodes and thus could not be attached to them.

The solution was to delete all the relevant resources - the StatefulSet, the PVC (crucial!), and the PV. Then I was able to apply them again, and Kubernetes succeeded in creating a new EC2 Volume and attaching it to the instance.
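The delete-and-reapply sequence might look roughly like this (the resource and file names are illustrative, not taken from the question):

```shell
# Delete the out-of-sync resources (names are hypothetical).
kubectl delete statefulset my-app
kubectl delete pvc data-my-app-0   # crucial: the PVC pins the stale PV
kubectl delete pv my-app-pv        # if it is not reclaimed automatically

# Re-apply the original manifests; Kubernetes dynamically provisions
# a fresh EBS volume and attaches it to a schedulable node.
kubectl apply -f statefulset.yaml
```

Note that deleting a PVC can destroy the data on the backing EBS volume if the StorageClass reclaim policy is Delete, so snapshot the volume first if the data matters.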

As you can see in my configuration, I have a StatefulSet with a "volumeClaimTemplates" entry (=> PersistentVolumeClaim, PVC) and a matching StorageClass definition, so Kubernetes should dynamically provision an EC2 Volume, attach it to a worker, and expose it as a PersistentVolume.
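A minimal sketch of such a setup (all names and sizes are illustrative, not the asker's actual manifests). Setting volumeBindingMode: WaitForFirstConsumer on the StorageClass delays volume creation until a pod is scheduled, which helps avoid exactly the availability-zone mismatch described above:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2-wait
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
# Provision the EBS volume only once a pod is scheduled, so it is
# created in the same availability zone as the chosen node.
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app
spec:
  serviceName: my-app
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: gp2-wait
        resources:
          requests:
            storage: 10Gi
```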

Compare the output of kubectl get pvc and kubectl get pv with what the AWS Console shows under EC2 - Volumes.

NOTE: "Bound" = the PV is bound to the PVC.
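The diagnosis can be sketched with the following commands (no cluster-specific names assumed). The zone label shown here, failure-domain.beta.kubernetes.io/zone, is the one in use on Kubernetes versions of that era:

```shell
# Check that each PVC is Bound and inspect the backing PVs.
kubectl get pvc
kubectl get pv --show-labels     # PV labels include the availability zone

# Compare the PV's zone against the zones of the worker nodes.
kubectl get nodes --show-labels
```

If the PV's zone label does not match any node's zone label, the EBS volume cannot be attached, and the scheduler reports "no available volume zone".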

Here is a description of a laborious way to restore a StatefulSet on AWS if you have a snapshot of the EBS volume (5/2018): https://medium.com/@joatmon08/kubernetes-statefulset-recovery-from-aws-snapshots-8a6159cda6f1

-- Jakub HolĂ˝
Source: StackOverflow