I have followed the documentation on how to set up k8s on AWS, including
I keep getting the following error, and I am unsure where to find logs that would show the underlying error that is making the AWS query fail.
This is the error:
Failed to provision volume with StorageClass "gp2": error querying for all zones: no instances returned
I faced the same issue and found the solution. I hope this applies to your issue as well.
Every EC2 instance that is a node in your Kubernetes cluster should have the tag kubernetes.io/cluster/CLUSTERNAME = owned
When you request a new PersistentVolume, Kubernetes asks AWS to create it. AWS first checks which AZs contain your worker nodes, so that it does not create the volume in an AZ that has no nodes. It seems to do this by listing all EC2 instances with the tag kubernetes.io/cluster/CLUSTERNAME = owned
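You can reproduce roughly the same lookup with the AWS CLI (a sketch; `mycluster` is a placeholder for your actual cluster name):

```shell
#!/bin/sh
# Placeholder: replace with your cluster's real name
CLUSTER_NAME="mycluster"
TAG_KEY="kubernetes.io/cluster/${CLUSTER_NAME}"

# Roughly the lookup the provisioner performs: list the instances carrying
# the cluster tag, together with the AZ each one runs in. If this returns
# nothing, you would hit the "no instances returned" error.
if command -v aws >/dev/null 2>&1; then
  aws ec2 describe-instances \
    --filters "Name=tag:${TAG_KEY},Values=owned" \
    --query "Reservations[].Instances[].[InstanceId,Placement.AvailabilityZone]" \
    --output text
fi
```

If the output is empty even though you have running nodes, the tag key on the instances does not match your cluster name.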
But if you have changed or removed this tag so that it no longer matches your cluster name, you will get exactly the error message you got here. Let's say you changed it to kubernetes.io/cluster/CLUSTERNAME-default = owned; that would trigger the issue.
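If the tag was changed or removed, re-adding it with the correct key should clear the error (again a sketch; the cluster name and instance ID below are placeholders for your own values):

```shell
#!/bin/sh
# Placeholders: substitute your real cluster name and your node's instance ID
CLUSTER_NAME="mycluster"
INSTANCE_ID="i-0123456789abcdef0"

# Restore the tag the provisioner expects on every node instance
if command -v aws >/dev/null 2>&1; then
  aws ec2 create-tags \
    --resources "${INSTANCE_ID}" \
    --tags "Key=kubernetes.io/cluster/${CLUSTER_NAME},Value=owned"
fi
```

Repeat for each node instance in the cluster, then retry the PVC; the provisioner should now find instances in at least one AZ.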