HPA could not get CPU metric during GKE node auto-scaling

8/23/2019

Cluster information:

  • Kubernetes version: 1.12.8-gke.10
  • Cloud being used: GKE
  • Installation method: gcloud
  • Host OS: (machine type) n1-standard-1
  • CNI and version: default
  • CRI and version: default

During node scaling, the HPA couldn't get the CPU metric.

At the same time, kubectl top pod and kubectl top node fail with:

Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io)
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)

For more detail, here is the flow in which my problem occurs:

  1. Suddenly many requests arrive at the GKE cluster (generated with a load-testing tool).
  2. The HPA detects that current CPU usage is above the target CPU usage (50%), so it scales pods up incrementally (a sample HPA spec is sketched after this list).
  3. An Insufficient CPU warning occurs while the pods are being created, so GKE scales nodes up incrementally.
  4. Soon the HPA fails to get the metric, and kubectl top node and kubectl top pod get no response. At this point one or more OutOfcpu pods are found, and several pods are in ContainerCreating (transitioning from the Pending state).
  5. After node scale-up is complete and some time has elapsed (about a few minutes), the HPA starts fetching the CPU metric successfully again and scales up/down based on it.
  6. The same situation happens during node scale-down.
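
For reference, a minimal sketch of the kind of HPA spec this flow assumes; the names and replica bounds are placeholders, only the 50% CPU target comes from step 2:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa                          # hypothetical name
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                            # hypothetical Deployment being scaled
  minReplicas: 2                         # example bound
  maxReplicas: 20                        # example bound
  targetCPUUtilizationPercentage: 50     # target CPU usage from step 2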

This causes pod scaling to stall and leads to failures in responding to clients' requests. Is this normal?

I think the HPA should be able to get the CPU metric (or other metrics) from running pods even during node scaling, so that it keeps track of the optimal number of pods at any moment. Then, when node scaling is done, the HPA could create the necessary pods at once (rather than incrementally).

Can I make my cluster work like this?

-- isbee
google-kubernetes-engine
kubernetes

1 Answer

8/23/2019

Maybe your metrics-server is running out of one resource, either memory or CPU. There are config maps that describe how add-ons are scaled depending on the cluster size. You need to edit the metrics-server-config config map in the kube-system namespace:

kubectl edit cm/metrics-server-config -n kube-system

You should add

baseCPU
cpuPerNode
baseMemory
memoryPerNode

to the NannyConfiguration; the addon-resizer (pod nanny) documentation has a more extensive manual for these settings.
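
As a rough sketch (not an authoritative file), the edited config map could end up looking something like the following, assuming the usual nannyconfig/v1alpha1 format shipped on GKE; the resource values are examples to tune for your cluster size, not recommendations:

apiVersion: v1
kind: ConfigMap
metadata:
  name: metrics-server-config
  namespace: kube-system
data:
  NannyConfiguration: |-
    apiVersion: nannyconfig/v1alpha1
    kind: NannyConfiguration
    # Resources granted to metrics-server = base + per-node value * number of nodes.
    # The values below are examples only.
    baseCPU: 100m
    cpuPerNode: 5m
    baseMemory: 150Mi
    memoryPerNode: 8Mi

After saving, you may need to restart the metrics-server pod so the new configuration is picked up.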

Heapster suffers from the same OOM issue: there are too many pods to handle all the metrics within its assigned resources, so please modify heapster's config map accordingly:

kubectl edit cm/heapster-config -n kube-system
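
Again only as a sketch, the heapster config map takes the same NannyConfiguration keys; the values below are placeholders, not tested recommendations:

data:
  NannyConfiguration: |-
    apiVersion: nannyconfig/v1alpha1
    kind: NannyConfiguration
    # base + per-node sizing for heapster; example values only.
    baseCPU: 80m
    cpuPerNode: 1m
    baseMemory: 140Mi
    memoryPerNode: 4Mi
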
-- AdolfoOG
Source: StackOverflow