I have a question about scaling rules for a service that processes tasks from a queue. Let's say we have a very simple and common situation:
Scaling service B with KEDA looks like a simple solution:
But in the real world the number of replicas keeps jumping up and down around 5 all the time, because of a flow like this:
Is this a common problem? Is it possible to solve it?
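My understanding of why it flaps: KEDA exposes the Redis list length as an AverageValue external metric, so the HPA computes roughly desired = ceil(queueLength / listLength), clamped to the replica bounds. With `listLength: "2"`, a queue hovering around 10 items makes the target bounce between 4 and 6 replicas. A minimal sketch (the function name and sample queue lengths are mine, not from KEDA):

```python
import math

def desired_replicas(queue_length: int, list_length: int,
                     min_replicas: int = 1, max_replicas: int = 10) -> int:
    """Rough HPA formula for an AverageValue external metric:
    desired = ceil(queueLength / listLength), clamped to min/max."""
    desired = math.ceil(queue_length / list_length)
    return max(min_replicas, min(max_replicas, desired))

# A queue oscillating slightly around 10 items with listLength "2"
# makes the replica count flap around 5:
for qlen in [8, 10, 12, 10, 8]:
    print(qlen, desired_replicas(qlen, list_length=2))  # 4, 5, 6, 5, 4
```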
Currently the best we have come up with in KEDA is:
spec:
  advanced:
    horizontalPodAutoscalerConfig:
      behavior:
        scaleDown:
          policies:
          - periodSeconds: 600
            type: Pods
            value: 1
          stabilizationWindowSeconds: 1200
        scaleUp:
          policies:
          - periodSeconds: 1200
            type: Pods
            value: 1
          stabilizationWindowSeconds: 1
  maxReplicaCount: 10
  minReplicaCount: 1
  pollingInterval: 30
  scaleTargetRef:
    envSourceContainerName: celery
    name: celery
  triggers:
  - metadata:
      queueName: celery
      listLength: "2"
    type: redis
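For intuition on why the `scaleDown.stabilizationWindowSeconds: 1200` above damps the flapping: during scale-down the HPA uses the highest replica recommendation seen over the window, so brief dips in queue length do not immediately remove pods. A rough simulation (function name and sample values are illustrative, not from the HPA source):

```python
from collections import deque

def stabilized_recommendation(samples: list[int], window_samples: int) -> list[int]:
    """Sketch of the HPA scale-down stabilization window: at each step the
    controller acts on the *maximum* desired-replica recommendation seen
    over the last `window_samples` polls."""
    out, recent = [], deque(maxlen=window_samples)
    for d in samples:
        recent.append(d)
        out.append(max(recent))
    return out

# Raw recommendations flapping 5,4,5,6,5,4 — with a 4-sample window the
# effective target holds at the recent peak instead of oscillating:
print(stabilized_recommendation([5, 4, 5, 6, 5, 4], window_samples=4))
# → [5, 5, 5, 6, 6, 6]
```

On top of this, the `policies` section caps how fast the target is followed (here at most 1 pod removed per 600 s), which further smooths the replica count.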