I am trying to auto-scale my Redis workers based on queue size. I am collecting the metrics using redis_exporter and prometheus-to-sd sidecars in my Redis deployment, like so:
spec:
  containers:
  - name: master
    image: redis
    env:
    - name: MASTER
      value: "true"
    ports:
    - containerPort: 6379
    resources:
      limits:
        cpu: "100m"
      requests:
        cpu: "100m"
  - name: redis-exporter
    image: oliver006/redis_exporter:v0.21.1
    ports:
    - containerPort: 9121
    args: ["--check-keys=rq*"]
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
  - name: prometheus-to-sd
    image: gcr.io/google-containers/prometheus-to-sd:v0.9.2
    command:
    - /monitor
    - --source=:http://localhost:9121
    - --stackdriver-prefix=custom.googleapis.com
    - --pod-id=$(POD_ID)
    - --namespace-id=$(POD_NAMESPACE)
    - --scrape-interval=15s
    - --export-interval=15s
    env:
    - name: POD_ID
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.uid
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
I can then view the metric (redis_key_size) in Metrics Explorer as:
metric.type="custom.googleapis.com/redis_key_size"
resource.type="gke_container"
(I CAN'T view the metric if I change it to resource.type="k8s_pod".)
However, I can't seem to get the HPA to read these metrics; it reports a failed to get metrics error, and I can't figure out the correct Object definition.
I've tried both .object.target.kind=Pod and Deployment; with Deployment I get the additional error "Get namespaced metric by name for resource \"deployments\" is not implemented".
I don't know whether this issue is related to resource.type="gke_container", or how to change that.
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: {{ template "webapp.backend.fullname" . }}-workers
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ template "webapp.backend.fullname" . }}-workers
  minReplicas: 1
  maxReplicas: 4
  metrics:
  - type: Object
    object:
      target:
        kind: <not sure>
        name: <not sure>
      metricName: redis_key_size
      targetValue: 4
--- Update ---
This works if I use kind: Pod and manually set name to the pod name created by the deployment, but this is far from ideal.
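For reference, the Object-metric variant that did work looks like the sketch below; the pod name is a placeholder for the actual generated pod name, which is exactly why this approach is fragile:

```yaml
# Hypothetical sketch: Object metric targeting a single pod by name.
# "my-workers-pod-abc123" stands in for the real, generated pod name,
# which changes on every rollout.
metrics:
- type: Object
  object:
    target:
      kind: Pod
      name: my-workers-pod-abc123  # placeholder
    metricName: redis_key_size
    targetValue: 4
```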
I also tried this setup using type Pods; however, the HPA says it can't read the metrics: horizontal-pod-autoscaler failed to get object metric value: unable to get metric redis_key_size: no metrics returned from custom metrics API
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: {{ template "webapp.backend.fullname" . }}-workers
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ template "webapp.backend.fullname" . }}-workers
  minReplicas: 1
  maxReplicas: 4
  metrics:
  - type: Pods
    pods:
      metricName: redis_key_size
      targetAverageValue: 4
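One way to narrow down the "no metrics returned" error is to query the custom metrics API directly and see what the adapter actually exposes; if redis_key_size is not listed under the pods resource, the adapter is not picking it up for those pods (the default namespace here is an assumption):

```shell
# List every metric the custom metrics adapter exposes
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq .

# Query the metric for all pods in the (assumed) default namespace
kubectl get --raw \
  "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/redis_key_size" | jq .
```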
As a workaround, for Deployment targets it appears that the metrics have to be exported from pods IN the target deployment.
To get this working I had to move the prometheus-to-sd container to the deployment I wanted to scale, expose port 9121 on the Redis service, and scrape the metrics exposed by redis-exporter in the Redis deployment through that service, changing the prometheus-to-sd source argument from:
- --source=:http://localhost:9121
to:
- --source=:http://my-redis-service:9121
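Exposing the exporter port on the Redis service might look like this sketch (the service name and selector labels are assumptions based on the post; the point is the extra 9121 port for redis-exporter):

```yaml
# Hypothetical Service sketch: "my-redis-service" and "app: redis"
# are placeholders for the actual names/labels in the chart.
apiVersion: v1
kind: Service
metadata:
  name: my-redis-service
spec:
  selector:
    app: redis
  ports:
  - name: redis
    port: 6379
    targetPort: 6379
  - name: metrics
    port: 9121
    targetPort: 9121
```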
and then using the HPA:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: {{ template "webapp.backend.fullname" . }}-workers
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ template "webapp.backend.fullname" . }}-workers
  minReplicas: 1
  maxReplicas: 4
  metrics:
  - type: Pods
    pods:
      metricName: redis_key_size
      targetAverageValue: 4
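For intuition about targetAverageValue above: the HPA computes the desired replica count as ceil(currentReplicas × currentMetricValue / targetValue), clamped to minReplicas/maxReplicas. A minimal sketch of that rule (the metric values are made up):

```python
import math

def desired_replicas(current_replicas: int,
                     current_avg_value: float,
                     target_avg_value: float,
                     min_replicas: int = 1,
                     max_replicas: int = 4) -> int:
    """HPA scaling rule: ceil(current * metric / target), clamped to bounds."""
    desired = math.ceil(current_replicas * current_avg_value / target_avg_value)
    return max(min_replicas, min(max_replicas, desired))

# With targetAverageValue: 4, two workers seeing an average queue size of 10
# would request ceil(2 * 10 / 4) = 5 replicas, capped at maxReplicas = 4.
print(desired_replicas(2, 10, 4))  # -> 4
```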