Cluster Autoscaling on AWS not scaling

7/6/2018

Stumped on this issue and hoping someone who knows more can help me.

I'm trying to follow a proof-of-concept guide for cluster autoscaling on AWS for Kubernetes (https://renzedevries.wordpress.com/2017/01/10/autoscaling-your-kubernetes-cluster-on-aws/). I built my cluster on AWS using kops, following this guide (https://medium.com/containermind/how-to-create-a-kubernetes-cluster-on-aws-in-few-minutes-89dda10354f4).

The issue appears to be with the cluster autoscaling deployment. When I run:

kubectl logs cluster-autoscaler-

I get the following output:

I0706 13:26:36.338072       1 leaderelection.go:210] failed to renew 
lease kube-system/cluster-autoscaler
I0706 13:26:38.776977       1 leaderelection.go:210] failed to renew 
lease kube-system/cluster-autoscaler
I0706 13:26:43.119763       1 leaderelection.go:210] failed to renew 
lease kube-system/cluster-autoscaler
I0706 13:26:47.116439       1 leaderelection.go:210] failed to renew 

I've been looking into the error, and it appears to be related to the namespace. However, whether I run the pod in a different namespace or in the recommended one (kube-system - https://github.com/kubernetes/contrib/issues/2402), I still get the same error. Not sure what is causing it.
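For what it's worth, "failed to renew lease" usually means the autoscaler pod cannot read or update its leader-election lock object, which is most often an RBAC problem rather than a namespace problem. A minimal sketch of the kind of Role/RoleBinding that grants that access is below; the service account name `cluster-autoscaler` and the ConfigMap-based lock are assumptions (older autoscaler releases use a ConfigMap named `cluster-autoscaler` in kube-system; check which lock object your version uses):

```yaml
# Hypothetical RBAC sketch: let the cluster-autoscaler service account
# manage its leader-election lock in kube-system.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cluster-autoscaler-leader-election
  namespace: kube-system
rules:
  # Assumes a ConfigMap lock named "cluster-autoscaler";
  # adjust resources/resourceNames if your version uses a different lock.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["cluster-autoscaler"]
    verbs: ["get", "update"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cluster-autoscaler-leader-election
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cluster-autoscaler-leader-election
subjects:
  - kind: ServiceAccount
    name: cluster-autoscaler   # assumed service account name
    namespace: kube-system
```

If RBAC turns out not to be the issue, comparing your deployment against the RBAC manifests shipped with your autoscaler release is a reasonable next step.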

Thanks in advance for the help!

-- Hobgob
amazon-web-services
kubernetes

1 Answer

12/12/2018

Follow this guide to configure Cluster Autoscaler for Kubernetes running on AWS:

Configure Cluster Autoscaler in Kubernetes

It should work without any errors.

-- Samrat Priyadarshi
Source: StackOverflow