I've configured a Kubernetes cluster with metrics-server (as an aggregated apiserver) replacing Heapster. kubectl top works fine, as do the raw endpoints in the metrics.k8s.io/v1beta1 API group. HPA, however, does not. The controller-manager logs show the following errors (and no others):
E1008 10:45:18.462447 1 horizontal.go:188] failed to compute desired number of replicas based on listed metrics for Deployment/kube-system/nginx: failed to get cpu utilization: missing request for cpu on container nginx in pod kube-system/nginx-64f497f8fd-7kr96
I1008 10:45:18.462511 1 event.go:221] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"kube-system", Name:"nginx", UID:"387f256e-cade-11e8-9cfa-525400c042d5", APIVersion:"autoscaling/v2beta1", ResourceVersion:"3367", FieldPath:""}): type: 'Warning' reason: 'FailedGetResourceMetric' missing request for cpu on container nginx in pod kube-system/nginx-64f497f8fd-7kr96
I1008 10:45:18.462529 1 event.go:221] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"kube-system", Name:"nginx", UID:"387f256e-cade-11e8-9cfa-525400c042d5", APIVersion:"autoscaling/v2beta1", ResourceVersion:"3367", FieldPath:""}): type: 'Warning' reason: 'FailedComputeMetricsReplicas' failed to get cpu utilization: missing request for cpu on container nginx in pod kube-system/nginx-64f497f8fd-7kr96
metrics-server spec:
spec:
  containers:
  - args:
    - --kubelet-preferred-address-types=InternalIP
    image: k8s.gcr.io/metrics-server-amd64:v0.3.1
    imagePullPolicy: Always
    name: metrics-server
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /tmp
      name: tmp-dir
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: metrics-server
  serviceAccountName: metrics-server
  terminationGracePeriodSeconds: 30
  volumes:
  - emptyDir: {}
    name: tmp-dir
controller-manager is running with:
--horizontal-pod-autoscaler-use-rest-clients="true"
Kubernetes version is 1.11.3.
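For reference, the HPA itself was created along these lines (the 50% CPU target below is just an assumption; the names and API version come from the events above):

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx
  namespace: kube-system
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 50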
Any ideas?
I'll answer here, since formatting is inconvenient in the comments.
Check your --proxy-client-cert-file and --proxy-client-key-file, inspect the certificate with the command below, and check the Subject CN:
$ openssl x509 -noout -text -in /etc/kubernetes/ssl/front-proxy-client.pem
Certificate:
Data:
Version: hidden
Serial Number: hidden (hidden)
Signature Algorithm: hidden
Issuer: CN=front-proxy-ca
Validity
Not Before: hidden
Not After : hidden
Subject: CN=front-proxy-client
In my case the Subject is CN=front-proxy-client, and this is the CN I added to kube-apiserver via --requestheader-allowed-names=front-proxy-client.
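For context, the aggregation (front-proxy) flags on kube-apiserver usually look something like this; the file paths here are assumptions based on the certificate location above:

--requestheader-client-ca-file=/etc/kubernetes/ssl/front-proxy-ca.pem
--requestheader-allowed-names=front-proxy-client
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User
--proxy-client-cert-file=/etc/kubernetes/ssl/front-proxy-client.pem
--proxy-client-key-file=/etc/kubernetes/ssl/front-proxy-client-key.pem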
Turns out this was me being stupid (and nothing to do with metrics-server).
I was testing on a Deployment whose pod containers had no CPU request set. The HPA computes CPU utilization as a percentage of the container's requested CPU, so without a request it has nothing to divide by, which is exactly what the error says.
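Adding a CPU request to the Deployment's pod template fixes it. A minimal sketch (the image and the 100m value are illustrative):

spec:
  template:
    spec:
      containers:
      - name: nginx
        image: nginx
        resources:
          requests:
            cpu: 100m   # HPA utilization is computed relative to this request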