Kubernetes deployment not scaling down even though usage is below threshold

3/31/2020

I’m having a hard time understanding what’s going on with my horizontal pod autoscaler.

I’m trying to scale up my deployment when memory or CPU usage goes above 80% of the requested resources.

Here’s my HPA template:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: my-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80

The thing is, it has been sitting at 3 replicas for days even though usage is below 80%, and I don’t understand why.

$ kubectl get hpa --all-namespaces

NAMESPACE        NAME             REFERENCE                  TARGETS            MINPODS   MAXPODS   REPLICAS   AGE
my-ns            my-hpa           Deployment/my-deployment   61%/80%, 14%/80%   2         10        3          2d15h

Here’s the output of the top command:

$ kubectl top pods

NAME                             CPU(cores)   MEMORY(bytes)   
my-deployment-86874588cc-chvxq   3m           146Mi           
my-deployment-86874588cc-gkbg9   5m           149Mi           
my-deployment-86874588cc-nwpll   7m           149Mi   

Each pod consumes approximately 60% of its requested memory, so all of them are below the 80% target (rough math after the requests/limits below):

resources:
  requests:
    memory: "256Mi"
    cpu: "100m"
  limits:
    memory: "512Mi"
    cpu: "200m"

Here's my deployment:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: my-deployment
  labels:
    app: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: ...
          imagePullPolicy: Always
          resources:
            requests:
              memory: "256Mi"
              cpu: "100m"
            limits:
              memory: "512Mi"
              cpu: "200m"
          livenessProbe:
            httpGet:
              path: /liveness
              port: 3000
            initialDelaySeconds: 10
            periodSeconds: 3
            timeoutSeconds: 3
          readinessProbe:
            httpGet:
              path: /readiness
              port: 3000
            initialDelaySeconds: 10
            periodSeconds: 3
            timeoutSeconds: 3
          ports:
            - containerPort: 3000
              protocol: TCP

If I manually scale down to 2 replicas, it goes right back up to 3 for no apparent reason:

Normal   SuccessfulRescale             28s (x4 over 66m)    horizontal-pod-autoscaler  New size: 3; reason:
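
If I understand the HPA algorithm from the Kubernetes docs correctly, the desired replica count is computed roughly as:

desiredReplicas = ceil(currentReplicas * currentUtilization / targetUtilization)

With 3 replicas at 61% against an 80% target, that gives ceil(3 * 61 / 80) = ceil(2.29) = 3, so maybe that’s why it never drops below 3, but I’d like to confirm that’s actually what’s happening and that nothing is misconfigured.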

Anyone have any idea what’s going on?

-- Etienne Martin
autoscaling
horizontal-pod-autoscaling
kubernetes
kubernetes-pod

0 Answers