Kubernetes HPA can't get memory metrics (when the request is clearly stated)

11/27/2019

I am trying to implement autoscaling of Pods in my cluster. I tried it with a "dummy" deployment and HPA, and I didn't have any problems. Now, I am trying to integrate it into our "real" microservices, and the HPA keeps reporting:

Conditions:
  Type           Status  Reason                   Message
  ----           ------  ------                   -------
  AbleToScale    True    SucceededGetScale        the HPA controller was able to get the target's current scale
  ScalingActive  False   FailedGetResourceMetric  the HPA was unable to compute the replica count: missing request for memory
Events:
  Type     Reason                        Age                   From                       Message
  ----     ------                        ----                  ----                       -------
  Warning  FailedGetResourceMetric       18m (x5 over 19m)     horizontal-pod-autoscaler  unable to get metrics for resource memory: no metrics returned from resource metrics API
  Warning  FailedComputeMetricsReplicas  18m (x5 over 19m)     horizontal-pod-autoscaler  failed to get memory utilization: unable to get metrics for resource memory: no metrics returned from resource metrics API
  Warning  FailedComputeMetricsReplicas  16m (x7 over 18m)     horizontal-pod-autoscaler  failed to get memory utilization: missing request for memory
  Warning  FailedGetResourceMetric       4m38s (x56 over 18m)  horizontal-pod-autoscaler  missing request for memory

Here is my hpa:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: #{Name}
  namespace: #{Namespace}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: #{Name}
  minReplicas: 2
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80

The deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: #{Name}
  namespace: #{Namespace}
spec:
  replicas: 2
  selector:
    matchLabels:
      app: #{Name}
  template:
    metadata:
      annotations:
        linkerd.io/inject: enabled
      labels:
        app: #{Name}
    spec:
      containers:
      - name: #{Name}
        image: #{image}
        resources:
          limits:
            cpu: 500m
            memory: "300Mi"
          requests:
            cpu: 100m
            memory: "200Mi"
        ports:
        - containerPort: 80
          name: #{ContainerPort}

I can see both memory and CPU when I run kubectl top pods, and I can see the requests and limits when I run kubectl describe pod:

    Limits:
      cpu:     500m
      memory:  300Mi
    Requests:
      cpu:     100m
      memory:  200Mi

The only difference I can think of is that my dummy service didn't have the Linkerd sidecar.

-- shrimpy
kubernetes

1 Answer

11/27/2019

For the HPA to work with resource metrics, every container of the Pod needs to have a request for the given resource (CPU or memory).

It seems that the Linkerd sidecar container in your Pod does not define a memory request (it might have a CPU request). That's why the HPA complains about "missing request for memory".
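You can confirm this by listing the resource requests of every container in one of your Pods (the Pod name below is a placeholder; substitute one of yours). If the hypothesis is right, the linkerd-proxy container will show no memory request:

```shell
# Print each container's name and its resource requests, one per line.
# <pod-name> and <namespace> are placeholders for your actual values.
kubectl get pod <pod-name> -n <namespace> \
  -o jsonpath='{range .spec.containers[*]}{.name}{"\t"}{.resources.requests}{"\n"}{end}'
```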

You can configure the memory and CPU requests for the Linkerd proxy container with the --proxy-cpu-request and --proxy-memory-request flags of linkerd inject.
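For example, re-injecting the sidecar with explicit requests could look like this (the request values here are only illustrative; pick ones that fit your cluster):

```shell
# Inject the Linkerd proxy with explicit CPU and memory requests,
# then apply the resulting manifest. Values are examples only.
linkerd inject \
  --proxy-cpu-request 100m \
  --proxy-memory-request 20Mi \
  deployment.yaml | kubectl apply -f -
```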

Another possibility is to use these annotations to configure the CPU and memory requests:

  • config.linkerd.io/proxy-cpu-request
  • config.linkerd.io/proxy-memory-request
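With the annotation approach, you would extend the Pod template of your Deployment along these lines (again, the request values are only examples):

```yaml
spec:
  template:
    metadata:
      annotations:
        linkerd.io/inject: enabled
        # Example values; tune them to your workload.
        config.linkerd.io/proxy-cpu-request: 100m
        config.linkerd.io/proxy-memory-request: 20Mi
```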

Defining a memory request in either of these ways should make the HPA work.

-- weibeld
Source: StackOverflow