What I am trying to achieve is creating a Horizontal Pod Autoscaler that scales worker pods according to a custom metric produced by a controller pod.
I already have Prometheus scraping, the Prometheus Adapter, and the Custom Metrics Server fully operational, and scaling the worker deployment with a custom metric my_controller_metric produced by the worker pods already works.
Now my worker pods no longer produce this metric, but the controller does. It seems that the autoscaling/v1 API does not support this feature. I am able to specify the HPA with the autoscaling/v2beta1 API if necessary, though.
Here is my spec for this HPA:
    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    metadata:
      name: my-worker-hpa
      namespace: work
    spec:
      maxReplicas: 10
      minReplicas: 1
      scaleTargetRef:
        apiVersion: extensions/v1beta1
        kind: Deployment
        name: my-worker-deployment
      metrics:
      - type: Object
        object:
          target:
            kind: Deployment
            name: my-controller-deployment
          metricName: my_controller_metric
          targetValue: 1
When the configuration is applied with kubectl apply -f my-worker-hpa.yml, I get the message:
horizontalpodautoscaler "my-worker-hpa" configured
Although this message seems OK, the HPA does not work. Is this spec malformed?
As I said, the metric is available in the Custom Metrics Server, as verified with kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq . | grep my_controller_metric.
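To check whether the metric is actually resolvable for that specific object (and not just listed in the API root), the object-level endpoint can also be queried directly. This is a sketch assuming the namespace and names from the spec above; depending on the adapter configuration, the resource segment may need to be deployments.extensions instead of deployments:

    kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/work/deployments/my-controller-deployment/my_controller_metric" | jq .

If this returns a MetricValueList rather than a NotFound error, the HPA should be able to fetch the metric.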
This is the error message from the HPA:
    Type           Status  Reason                 Message
    ----           ------  ------                 -------
    AbleToScale    True    SucceededGetScale      the HPA controller was able to get the target's current scale
    ScalingActive  False   FailedGetObjectMetric  the HPA was unable to compute the replica count: unable to get metric my_controller_metric: Deployment on work my-controller-deployment/unable to fetch metrics from custom metrics API: the server could not find the metric my_controller_metric for deployments
Thanks!
In your case the problem is the HPA configuration: spec.metrics.object.target should also specify the API version. Adding apiVersion: extensions/v1beta1 under spec.metrics.object.target should fix it.
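For reference, the corrected metrics block would then look like this (using the same names as in the question):

      metrics:
      - type: Object
        object:
          target:
            apiVersion: extensions/v1beta1
            kind: Deployment
            name: my-controller-deployment
          metricName: my_controller_metric
          targetValue: 1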
In addition, there is an open issue about better config validation in HPA: https://github.com/kubernetes/kubernetes/issues/60511