Cluster autoscaler v1.0.4 kubernetes error

3/16/2018

I'm getting the error below:

W0316 22:04:26.025272       1 clusterstate.go:514] Failed to get nodegroup for <nodename>: Wrong id: expected format aws:///<zone>/<name>, got 
W0316 22:04:26.025296       1 clusterstate.go:514] Failed to get nodegroup for <nodename>: Wrong id: expected format aws:///<zone>/<name>, got 
W0316 22:04:26.025303       1 clusterstate.go:514] Failed to get nodegroup for <nodename>: Wrong id: expected format aws:///<zone>/<name>, got 
W0316 22:04:26.025309       1 clusterstate.go:514] Failed to get nodegroup for <nodename>: Wrong id: expected format aws:///<zone>/<name>, got 
W0316 22:04:26.025316       1 clusterstate.go:514] Failed to get nodegroup for <nodename>: Wrong id: expected format aws:///<zone>/<name>, got 
W0316 22:04:26.025324       1 clusterstate.go:514] Failed to get nodegroup for <nodename>: Wrong id: expected format aws:///<zone>/<name>, got 
W0316 22:04:26.025340       1 clusterstate.go:560] Readiness for node group *** not found

    E0316 22:04:02.705833       1 static_autoscaler.go:257] Failed to scale up: failed to build node infos for node groups: Wrong id: expected format aws:///<zone>/<name>, got 

using cluster-autoscaler

https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler
-- shiv455
kubernetes

1 Answer

3/17/2018

That happens because some of your nodes do not have the tag that identifies their node group.

As @Matthew L Daniel mentioned in his comment, the autoscaler needs a tag on the AWS instances (more precisely, on the Auto Scaling Groups) to work properly.

Here is what the official documentation says about how node-group identification works and why:

It is assumed that the underlying cluster is run on top of some kind of node groups. Inside a node group, all machines have identical capacity and have the same set of assigned labels. Thus, increasing a size of a node group will create a new machine that will be similar to those already in the cluster - they will just not have any user-created pods running (but will have all pods run from the node manifest and daemon sets.)

As you can find in installation documentation:

To run a cluster-autoscaler which auto-discovers ASGs with nodes use the --node-group-auto-discovery flag and tag the ASGs with key k8s.io/cluster-autoscaler/enabled and key kubernetes.io/cluster/< YOUR CLUSTER NAME >.

So, just add those tags to your ASGs.
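For example, the tags can be added with the AWS CLI. This is a hedged sketch: `my-asg` and `my-cluster` are placeholders for your actual Auto Scaling Group and cluster names.

```shell
# Tag the Auto Scaling Group so cluster-autoscaler's --node-group-auto-discovery
# can find it. "my-asg" and "my-cluster" are placeholders -- substitute your own.
aws autoscaling create-or-update-tags --tags \
  "ResourceId=my-asg,ResourceType=auto-scaling-group,Key=k8s.io/cluster-autoscaler/enabled,Value=true,PropagateAtLaunch=true" \
  "ResourceId=my-asg,ResourceType=auto-scaling-group,Key=kubernetes.io/cluster/my-cluster,Value=owned,PropagateAtLaunch=true"
```

`PropagateAtLaunch=true` copies the tags onto instances the ASG launches, which keeps new nodes consistent with the group.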

Also, you can use as many other AWS tags and Kubernetes labels on a node as you want; they will not affect the autoscaler.

UPD:

The reason the autoscaler was not working and crashed when getting the ProviderID was a missing --cloud-provider option value on the Kubelet. Adding the value aws should fix that kind of issue.
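A sketch of what that looks like (the exact file and service names are assumptions; they vary by installer, e.g. kubeadm uses /etc/sysconfig/kubelet or a systemd drop-in):

```shell
# Hypothetical kubelet configuration file location -- adjust for your installer.
# Adding --cloud-provider=aws makes the kubelet register nodes with an
# aws:///<zone>/<instance-id> ProviderID, which the autoscaler parses.
echo 'KUBELET_EXTRA_ARGS=--cloud-provider=aws' >> /etc/sysconfig/kubelet

# Restart the kubelet so the node re-registers with the ProviderID set.
systemctl restart kubelet
```

Without that flag, nodes register with an empty ProviderID, which is exactly why the log shows `expected format aws:///<zone>/<name>, got ` with nothing after "got".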

-- Anton Kostenko
Source: StackOverflow