Kubernetes horizontal pod autoscaler not creating replicas according to replica count

4/20/2018

I am trying to deploy a Dockerized web service through a Helm chart in a custom Kubernetes cluster (created with kubeadm). When it gets autoscaled, it does not create replicas according to the replica count.

This is my deployment file.

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: {{ template "demochart.fullname" . }}
  labels:
    app: {{ template "demochart.name" . }}
    chart: {{ template "demochart.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "demochart.name" . }}
      release: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ template "demochart.name" . }}
        release: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 80
          volumeMounts:
            - name: cred-storage
              mountPath: /root/
          resources:
{{ toYaml .Values.resources | indent 12 }}
    {{- with .Values.nodeSelector }}
      nodeSelector:
{{ toYaml . | indent 8 }}
    {{- end }}
    {{- with .Values.affinity }}
      affinity:
{{ toYaml . | indent 8 }}
    {{- end }}
    {{- with .Values.tolerations }}
      tolerations:
{{ toYaml . | indent 8 }}
    {{- end }}
      volumes:
        - name: cred-storage
          hostPath:
            path: /home/aodev/
            type:

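Note that spec.replicas here only sets the initial scale; once an HPA targets this Deployment, the HPA adjusts the replica count itself. A common Helm pattern, sketched below with a hypothetical autoscaling.enabled value that is not part of this chart's values.yaml, is to render spec.replicas only when autoscaling is disabled:

spec:
  {{- if not .Values.autoscaling.enabled }}
  # Only pin a fixed replica count when no HPA manages this Deployment
  replicas: {{ .Values.replicaCount }}
  {{- end }}
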
Here is the values.yaml

replicaCount: 3

image:
  repository: REPO_NAME
  tag: latest
  pullPolicy: IfNotPresent

service:
  type: NodePort
  port: 8007

ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  path: /
  hosts:
    - chart-example.local
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources: 
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  limits:
    cpu: 1000m
    memory: 2000Mi
  requests:
    cpu: 1000m
    memory: 2000Mi

nodeSelector: {}

tolerations: []

affinity: {}

Here are my running pods, which include Heapster and the metrics server as well as my web service.

kubectl get pods before autoscaling

Below is the HPA file.

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  annotations:
  name: entitydetection
  namespace: kube-system
spec:
  maxReplicas: 20
  minReplicas: 5
  scaleTargetRef:
    apiVersion: apps/v1beta2
    kind: Deployment
    name: entitydetection
  targetCPUUtilizationPercentage: 50

So I set the replica count to 3 in the Deployment, and in the HPA set minReplicas to 5, maxReplicas to 20, and targetCPUUtilizationPercentage to 50. When CPU utilization exceeds 50%, it seems to create replicas at random rather than according to the replica count.

The 2 replicas shown below, each 36s old, were created when CPU exceeded 50%. Ideally it should have created 3 replicas. What is the problem?

kubectl get pods after autoscaling

-- Pranita Kadge
kubernetes
kubernetes-helm

1 Answer

4/20/2018

Here is a quote from the HPA design documentation:

The autoscaler is implemented as a control loop. It periodically queries pods described by Status.PodSelector of Scale subresource, and collects their CPU utilization.

Then, it compares the arithmetic mean of the pods' CPU utilization with the target defined in Spec.CPUUtilization, and adjusts the replicas of the Scale if needed to match the target (preserving condition: MinReplicas <= Replicas <= MaxReplicas).

CPU utilization is the recent CPU usage of a pod (average across the last 1 minute) divided by the CPU requested by the pod.

The target number of pods is calculated from the following formula:

TargetNumOfPods = ceil(sum(CurrentPodsCPUUtilization) / Target)    

Starting and stopping pods may introduce noise to the metric (for instance, starting may temporarily increase CPU). So, after each action, the autoscaler should wait some time for reliable data. Scale-up can only happen if there was no rescaling within the last 3 minutes. Scale-down will wait for 5 minutes from the last rescaling.

So, the HPA spawns the minimum number of pods that can handle the current load.
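
To make that concrete with the numbers from this question (the utilization figures are assumed for illustration, since the actual metrics output is not shown): each pod requests 1000m CPU, so if the 3 original pods were each running at about 80% of their request, the formula gives:

TargetNumOfPods = ceil((80 + 80 + 80) / 50) = ceil(4.8) = 5

That is 5 pods in total, i.e. 2 new pods on top of the existing 3, which matches the kubectl get pods output after autoscaling. The HPA sizes the Deployment from this formula, clamped between minReplicas and maxReplicas (so never below 5 here); it does not add replicas in multiples of the Deployment's spec.replicas.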

-- Anton Kostenko
Source: StackOverflow