Kubernetes provisioner for PV in a StatefulSet with aws-ebs PV issue

8/26/2019

I have followed the documentation on how to set up Kubernetes on AWS, including:

  • Adding provider=aws
  • Making sure the nodes have the correct IAM permissions
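
For reference, the gp2 StorageClass named in the error below is presumably similar to this minimal sketch of the in-tree aws-ebs provisioner (your actual definition may differ):

    # Minimal gp2 StorageClass for the in-tree EBS provisioner (a sketch)
    cat <<EOF | kubectl apply -f -
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: gp2
    provisioner: kubernetes.io/aws-ebs
    parameters:
      type: gp2
    EOF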

I keep getting the following error, and I am unsure where to find logs that would show the underlying error that is making the AWS query fail.

This is how the error looks:

Failed to provision volume with StorageClass "gp2": error querying for all zones: no instances returned
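
For the logs question: the provisioning failure surfaces as events on the PVC, and more detail typically appears in the kube-controller-manager logs (a sketch, assuming a kubeadm-style cluster where the controller manager runs as a static pod; <pvc-name> is a placeholder):

    # Provisioning errors are recorded as events on the PVC
    kubectl describe pvc <pvc-name>

    # The provisioner itself runs inside the controller manager; on
    # kubeadm-style clusters it is a static pod in kube-system
    kubectl -n kube-system logs -l component=kube-controller-manager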
--
kubernetes
persistent-volumes

1 Answer

11/1/2019

I faced the same issue and found the solution. I hope this applies to your issue as well.

So every EC2 instance that is a node in your Kubernetes cluster should have the tag kubernetes.io/cluster/CLUSTERNAME = owned.
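
You can check this with the AWS CLI (the instance ID below is a placeholder):

    # List the tags on one of your worker node instances; look for the
    # kubernetes.io/cluster/CLUSTERNAME key with the value "owned"
    aws ec2 describe-tags \
        --filters "Name=resource-id,Values=i-0123456789abcdef0"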

When you request a new PersistentVolume, Kubernetes will request it from AWS. AWS will then check which AZs contain worker nodes, so that it doesn't create the volume in an AZ where there are no nodes. It seems to do this by listing all EC2 instances with the tag kubernetes.io/cluster/CLUSTERNAME = owned.
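
You can reproduce that lookup yourself. If this query returns nothing, the provisioner has no AZs to choose from, which matches the error above (a sketch; CLUSTERNAME is a placeholder, and the exact query the cloud provider runs may differ):

    # Roughly the lookup the provisioner relies on: instances tagged for
    # this cluster, and the AZs they are in
    aws ec2 describe-instances \
        --filters "Name=tag:kubernetes.io/cluster/CLUSTERNAME,Values=owned" \
        --query "Reservations[].Instances[].Placement.AvailabilityZone"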

But if you have changed or removed this tag so that it no longer matches your cluster name, you will get the exact error message you got here. Let's say you changed it to kubernetes.io/cluster/CLUSTERNAME-default = owned

That would trigger the issue.
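
The fix is to restore the expected tag on every node instance, for example (instance ID is a placeholder):

    # Re-apply the tag the cloud provider looks for; repeat for each
    # worker node in the cluster
    aws ec2 create-tags \
        --resources i-0123456789abcdef0 \
        --tags Key=kubernetes.io/cluster/CLUSTERNAME,Value=owned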

-- Johnathan
Source: StackOverflow