Horizontal pod autoscaling in Kubernetes

5/22/2018

I have a cluster that scales based on the CPU usage of my pods. The documentation states that I should prevent thrashing caused by scaling too fast. I want to play around with the autoscaling speed, but I can't seem to find where to apply the following flags:

  • --horizontal-pod-autoscaler-downscale-delay
  • --horizontal-pod-autoscaler-upscale-delay

My goal is to set the cooldown timer lower than 5m or 3m. Does anyone know how this is done, or where I can find documentation on how to configure it? Also, if this has to be configured in the HPA YAML file, does anyone know what definition should be used, or where I can find documentation on how to configure the YAML? This is a link to the Kubernetes documentation about scaling cooldowns that I used.
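
For context, this is the kind of HPA definition in play; note that it has no field for the cooldown timers (a minimal sketch, with illustrative resource names):

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                          # illustrative Deployment name
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50   # scale on CPU usage
  # there is no cooldown/delay field at this API level; the delays are
  # controller-manager flags, which is what this question is about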

-- Dimitrih
google-kubernetes-engine
kubectl
kubernetes

3 Answers

10/25/2019

If you set up your cluster using kubeadm, add those parameters to the kubeadm master configuration file under controllerManagerExtraArgs. A sample is given below:

apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
kubernetesVersion: ${k8s_version}
cloudProvider: vsphere
api:
  advertiseAddress: ${k8s_master_ip}
  controlPlaneEndpoint: ${k8s_master_lb_hostname}
apiServerCertSANs:
  - ${k8s_master_lb_ip}
  - ${k8s_master_ip0}
  - ${k8s_master_ip1}
  - ${k8s_master_ip2}
  - ${k8s_master_lb_hostname}
apiServerExtraArgs:
  endpoint-reconciler-type: lease
  requestheader-allowed-names:
  cloud-config: /etc/vsphere/config
  enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority"
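# the HPA cooldown flags from the question go under controllerManagerExtraArgs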
controllerManagerExtraArgs:
  horizontal-pod-autoscaler-use-rest-clients: "true"
  horizontal-pod-autoscaler-downscale-delay: 5m0s
  horizontal-pod-autoscaler-upscale-delay: 2m0s
etcd:
  endpoints:
  - https://${k8s_master_ip0}:2379
  - https://${k8s_master_ip1}:2379
  - https://${k8s_master_ip2}:2379
  caFile: /etc/kubernetes/pki/etcd/ca.pem
  certFile: /etc/kubernetes/pki/etcd/client.pem
  keyFile: /etc/kubernetes/pki/etcd/client-key.pem

..
..
..
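
Assuming the file above is saved as kubeadm-config.yaml (the filename is just an example), the configuration takes effect when the control plane is created:

kubeadm init --config=kubeadm-config.yaml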
-- P Ekambaram
Source: StackOverflow

5/22/2018

The HPA controller is part of the controller manager, and you'll need to pass the flags to it; see also the docs. It is not something you'd do via kubectl. It's part of the control plane (master), so it depends on how you set up Kubernetes and/or which offering you're using. For example, in GKE the control plane is not accessible; in Minikube you'd SSH into the node, etc.
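
For example, with Minikube you can pass controller-manager flags at start time via --extra-config (a sketch; the delay values are illustrative):

minikube start \
  --extra-config=controller-manager.horizontal-pod-autoscaler-downscale-delay=2m0s \
  --extra-config=controller-manager.horizontal-pod-autoscaler-upscale-delay=1m0s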

-- Michael Hausenblas
Source: StackOverflow

6/12/2018

Following all the discussion here, this is my experience and it's working for me; maybe it can help someone.

SSH to the master node and edit /etc/kubernetes/manifests/kube-controller-manager.manifest as shown below:

command:
- /hyperkube
- controller-manager
- --kubeconfig=/etc/kubernetes/kube-controller-manager-kubeconfig.yaml
- --leader-elect=true
- --service-account-private-key-file=/etc/kubernetes/ssl/service-account-key.pem
- --root-ca-file=/etc/kubernetes/ssl/ca.pem
- --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem
- --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem
- --enable-hostpath-provisioner=false
- --node-monitor-grace-period=40s
- --node-monitor-period=5s
- --pod-eviction-timeout=5m0s
- --profiling=false
- --terminated-pod-gc-threshold=12500
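# the two flags below are the HPA cooldown parameters added for this change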
- --horizontal-pod-autoscaler-downscale-delay=2m0s
- --horizontal-pod-autoscaler-upscale-delay=2m0s
- --v=2
- --use-service-account-credentials=true
- --feature-gates=Initializers=False,PersistentLocalVolumes=False,VolumeScheduling=False,MountPropagation=False

The two horizontal-pod-autoscaler-* flags are the parameters I added. Because the controller manager runs as a static pod, the kubelet picks up the manifest change and applies it without restarting the kubelet service.

If you don't see the change applied, you can restart the kubelet with systemctl restart kubelet.
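
To double-check that the controller manager picked up the new flags, you can inspect the static pod; the pod name below is illustrative (it is usually kube-controller-manager-<node-name>):

kubectl -n kube-system get pods | grep controller-manager
kubectl -n kube-system get pod kube-controller-manager-master-0 -o yaml | grep horizontal-pod-autoscaler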

Note: I created an HA cluster using Kubespray.

Hope this can be a savior for someone.

Thank you!

-- chintan thakar
Source: StackOverflow