In the documentation, I found that the following flag should be applied to kube-controller-manager to solve my problem:
--horizontal-pod-autoscaler-use-rest-clients=1m0s
But how can I apply this flag to kube-controller-manager? I don't understand, since it is not a YAML-based setting, and the only things I have on my local machine are the kubectl and oc CLI tools.
The kube-controller-manager runs in your Kubernetes control plane, so you will have to add that flag on the servers where your control plane runs. Typically this is an odd number of servers, e.g. 3 or 5, because that is the recommended size for maintaining quorum (for example, in a cluster set up with kubeadm).
The kube-controller-manager config typically lives under /etc/kubernetes/manifests on your masters. The file is usually named kube-controller-manager.yaml, and its content can be changed to something like this:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=127.0.0.1
    - --client-ca-file=/var/lib/minikube/certs/ca.crt
    - --cluster-signing-cert-file=/var/lib/minikube/certs/ca.crt
    - --cluster-signing-key-file=/var/lib/minikube/certs/ca.key
    - --controllers=*,bootstrapsigner,tokencleaner
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --leader-elect=true
    - --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt
    - --root-ca-file=/var/lib/minikube/certs/ca.crt
    - --service-account-private-key-file=/var/lib/minikube/certs/sa.key
    - --use-service-account-credentials=true
    - --horizontal-pod-autoscaler-use-rest-clients=1m0s # <== add this line
    image: k8s.gcr.io/kube-controller-manager:v1.16.2
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10252
        scheme: HTTP
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-controller-manager
    resources:
      requests:
        cpu: 200m
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /var/lib/minikube/certs
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/kubernetes/controller-manager.conf
      name: kubeconfig
      readOnly: true
    - mountPath: /usr/share/ca-certificates
      name: usr-share-ca-certificates
      readOnly: true
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /var/lib/minikube/certs
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /etc/kubernetes/controller-manager.conf
      type: FileOrCreate
    name: kubeconfig
  - hostPath:
      path: /usr/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-share-ca-certificates
status: {}
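As a concrete sketch of where to make that edit: the file sits on the master node itself, not on your workstation. The path comes from the manifest above; the `minikube` commands are an assumption based on the `/var/lib/minikube` cert paths in it.

```shell
# On a kubeadm-style master node (requires root):
sudo vi /etc/kubernetes/manifests/kube-controller-manager.yaml

# If the cluster is minikube (assumption, suggested by the cert paths above),
# the file lives inside the minikube VM, so SSH in first:
minikube ssh
sudo vi /etc/kubernetes/manifests/kube-controller-manager.yaml
```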
Then you need to restart your kube-controller-manager. Since it runs as a static pod, the kubelet normally notices the change to the manifest file and recreates the pod on its own; if it doesn't, restart it manually. How you do that depends on what your masters run. With Docker you can do sudo systemctl restart docker, or restart containerd if you are using it instead of Docker: sudo systemctl restart containerd.
Or, if you want to restart just the kube-controller-manager, you can do docker restart kube-controller-manager
or crictl stop kube-controller-manager; crictl start kube-controller-manager
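Once it is back up, a quick way to confirm the flag was actually picked up — a sketch assuming a kubeadm-style setup; the `component=kube-controller-manager` label comes from the manifest above:

```shell
# Print the command line of the running kube-controller-manager pod
# and check that your new flag appears in it:
kubectl -n kube-system get pod -l component=kube-controller-manager \
  -o jsonpath='{.items[0].spec.containers[0].command}'

# Or, on the master node itself, inspect the process arguments:
ps aux | grep [k]ube-controller-manager
```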