cluster-autoscaler doesn't scale up the kubernetes cluster

3/22/2017

I created a k8s cluster with kops on AWS; the node auto scaling group configuration looks like this:

metadata:
  creationTimestamp: "2017-03-21T03:53:26Z"
  name: nodes
spec:
  associatePublicIp: true
  image: kope.io/k8s-1.4-debian-jessie-amd64-hvm-ebs-2016-10-21
  machineType: t2.medium
  maxSize: 5
  minSize: 2
  role: Node
  zones:
  - us-west-1a
  - us-west-1c

The AWS console shows the current ASG as:

desired:2; min:2; max:5

Then I installed the cluster-autoscaler add-on following the official doc and deployed a pod whose resource requests the current cluster can't satisfy, but cluster-autoscaler doesn't add a node to the cluster. The logs look like this:

admin@ip-10-0-52-252:/var/log$kubectl logs -f cluster-autoscaler-1581749901-fqbzk -n kube-system
I0322 07:08:43.407683       1 event.go:216] Event(api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"new-mq-test-951523717-trd2s", UID:"9437ac54-0ecd-11e7-8779-0257a5d4c012", APIVersion:"v1", ResourceVersion:"189546", FieldPath:""}): type: 'Normal' reason: 'NotTriggerScaleUp' pod didn't trigger scale-up (it wouldn't fit if a new node is added)
I0322 07:08:43.407910       1 event.go:216] Event(api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"new-mq-test-951523717-n986l", UID:"9437a3db-0ecd-11e7-8779-0257a5d4c012", APIVersion:"v1", ResourceVersion:"189543", FieldPath:""}): type: 'Normal' reason: 'NotTriggerScaleUp' pod didn't trigger scale-up (it wouldn't fit if a new node is added)

So why doesn't the cluster-autoscaler scale up the cluster by adding EC2 nodes? Any answers are much appreciated.

-- J.Woo
amazon-web-services
kubernetes

2 Answers

3/23/2017

I finally found the answer: my default kops configuration for the nodes ASG uses t2.medium, but the pod I deployed requests 5000M of memory. As we all know, a t2.medium only has 4GB of memory, which can't fit that request, so the cluster-autoscaler can't scale up! Since even a brand-new node of the configured instance type couldn't hold the pod, the autoscaler correctly refuses to add one, which is exactly what the log message "it wouldn't fit if a new node is added" means.
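For illustration, a pod spec like the sketch below can never be scheduled on a t2.medium node; the pod name and image are placeholders, only the memory request mirrors my case:

apiVersion: v1
kind: Pod
metadata:
  name: new-mq-test              # placeholder name, modelled on the pod in the question
spec:
  containers:
  - name: mq
    image: nginx                 # placeholder image
    resources:
      requests:
        memory: "5000M"          # a t2.medium has ~4GB, so this request can never fit on
                                 # a single node of that type and no scale-up is triggered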

-- J.Woo
Source: StackOverflow

3/22/2017

I faced a similar issue while trying to use Kubernetes autoscaling on Google Container Engine. This normally happens when your cluster doesn't contain enough nodes to accommodate any more Kubernetes pods.

The solution is to enable autoscaling of the underlying node group (the AWS EC2 Auto Scaling group in your case) along with the Kubernetes cluster-autoscaler, as sketched below.
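As a rough sketch (the min/max, ASG name, region, and image tag below are placeholders; use the values from your own kops node group and a cluster-autoscaler release matching your cluster version), the cluster-autoscaler deployment on AWS points at the ASG with the --nodes flag:

containers:
- name: cluster-autoscaler
  image: gcr.io/google_containers/cluster-autoscaler:v0.4.0   # placeholder tag
  command:
  - ./cluster-autoscaler
  - --cloud-provider=aws
  - --nodes=2:5:nodes.mycluster.example.com    # min:max:ASG-name, matching the kops node group
  env:
  - name: AWS_REGION
    value: us-west-1                           # region where the ASG lives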

Check out the Kubernetes documentation on autoscaling on AWS for further information: link.

-- Shahbaz Ahmed
Source: StackOverflow