How can I set up metrics-server so that HPA can get CPU usage?
# kubectl top nodes
error: metrics not available yet
# kubectl get pods -n kube-system
NAME                              READY   STATUS    RESTARTS   AGE
heapster-709db6bd48-f2gba         2/2     Running   0          6h
metrics-server-70647b8f8b-99pja   1/1     Running   0          5h
.....
# kubectl get hpa
NAME                REFERENCE                      TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
devops-deployment   Deployment/devops-deployment   <unknown>/50%   4         10        4          1h
I had the same issue as you. What helped me was the following:
- remove metrics-server
- change metrics-server/deploy/1.8+/metrics-server-deployment.yaml
- apply it again
More details below:
kubectl delete -f metrics-server/deploy/1.8+
Edit metrics-server/deploy/1.8+/metrics-server-deployment.yaml and add the following options to the container command (--kubelet-insecure-tls skips verification of the kubelets' self-signed serving certificates, and --kubelet-preferred-address-types=InternalIP makes metrics-server reach the kubelets via their internal IPs):
        command:
        - /metrics-server
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP
The whole metrics-server-deployment.yaml should then look like below; you can simply copy-paste it:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        image: k8s.gcr.io/metrics-server-amd64:v0.3.1
        imagePullPolicy: Always
        command:
        - /metrics-server
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp
Apply the metrics-server files again:
kubectl apply -f metrics-server/deploy/1.8+
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.extensions/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
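If top still reports nothing after a minute or two, a quick sanity check (not part of the original steps, just standard kubectl) is to confirm that the metrics API itself is registered and serving:
kubectl get apiservice v1beta1.metrics.k8s.io
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
The APIService should show Available=True, and the raw call should return JSON node metrics rather than an error; metrics-server needs a short while after startup before it has data to serve.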
And check the result:
kubectl get hpa
NAME               REFERENCE                     TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
nginx-deployment   Deployment/nginx-deployment   <unknown>/80%   3         10        10         25
kubectl top nodes
NAME            CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
kube-master-1   255m         12%    2582Mi          35%
kube-worker-1   124m         6%     2046Mi          27%
kubectl top pods
NAME                                CPU(cores)   MEMORY(bytes)
nginx-deployment-76bf4969df-4bbdc   0m           2Mi
nginx-deployment-76bf4969df-5m6xc   0m           2Mi
nginx-deployment-76bf4969df-b4zh7   0m           2Mi
nginx-deployment-76bf4969df-c58wl   0m           2Mi
nginx-deployment-76bf4969df-cktcg   0m           2Mi
nginx-deployment-76bf4969df-fbjj9   0m           2Mi
nginx-deployment-76bf4969df-gh94w   0m           2Mi
nginx-deployment-76bf4969df-qx6ld   0m           2Mi
nginx-deployment-76bf4969df-rvt54   0m           2Mi
nginx-deployment-76bf4969df-vq9gs   0m           2Mi
Additionally, if you are autoscaling a pod based on percent utilization of a resource, the pod spec needs a resource request for that resource, otherwise the HPA cannot calculate what percentage is in use. Without resource requests on the deployment's pod spec, you will need to set your HPA to scale on absolute values, with units included: for CPU that might be 300m, for memory 400Mi, for example. A missing resource request would also explain why you don't see a current value under TARGETS when you run kubectl get hpa.
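As an illustration only (a minimal sketch, loosely reusing the names from the question; the image and numbers are placeholders, and it assumes the apps/v1 and autoscaling/v1 APIs are available on your cluster), a deployment with a CPU request plus an HPA targeting a percentage of that request could look like this:
# Hypothetical example: the CPU request is what the HPA's percentage target is measured against.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: devops-deployment
spec:
  replicas: 4
  selector:
    matchLabels:
      app: devops
  template:
    metadata:
      labels:
        app: devops
    spec:
      containers:
      - name: app
        image: nginx              # placeholder image
        resources:
          requests:
            cpu: 100m             # HPA computes utilization as a % of this request
            memory: 128Mi
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: devops-deployment
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: devops-deployment
  minReplicas: 4
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
With the request in place and metrics-server serving data, the TARGETS column should change from <unknown>/50% to an actual percentage, e.g. 3%/50%.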