I am trying to figure out why metrics-server isn't collecting stats from the node it is deployed on (r2s13). There are 3 nodes in my cluster (1 master and 2 workers).
metrics-server version: 0.3.1
Kubernetes version: 1.12 (installed with kubeadm)
CNI plugin: Weave Net
Output of kubectl top node:
NAME    CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
r2s12   344m         4%     3079Mi          12%
r2s14   67m          0%     1695Mi          21%
r2s13
In the metrics-server log, the line below is repeated, but only for the node where metrics-server itself is deployed (r2s13):
E1023 15:28:14.643011 1 manager.go:102] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:r2s13: unable to fetch metrics from Kubelet r2s13 (10.199.183.218): Get https://10.199.183.218:10250/stats/summary/: dial tcp 10.199.183.218:10250: i/o timeout
I also can't ping the node's IP (10.199.183.218) from a pod running on that node.
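For reference, the failing ping can be reproduced with a throwaway pod pinned to r2s13 (the pod name and busybox image are illustrative choices, not part of the original setup):

kubectl run nettest --rm -it --restart=Never --image=busybox \
  --overrides='{"apiVersion":"v1","spec":{"nodeName":"r2s13"}}' -- \
  ping -c 3 10.199.183.218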
I have already added the following flags to the metrics-server container command:
command:
- /metrics-server
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP
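For context, these flags go into the metrics-server Deployment, which can be changed with kubectl -n kube-system edit deploy metrics-server. An abridged sketch of the container spec (image path and tag assumed from the version above; yours may differ):

containers:
- name: metrics-server
  image: k8s.gcr.io/metrics-server-amd64:v0.3.1
  command:
  - /metrics-server
  - --kubelet-insecure-tls
  - --kubelet-preferred-address-types=InternalIP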
In my case it was because the host firewall (ufw) was not allowing incoming traffic on the weave interface, so traffic from pods never reached the kubelet on port 10250.
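You can confirm this on the node itself: ufw status verbose shows the active rules, and blocked packets appear in the kernel log with a [UFW BLOCK] prefix (assuming ufw logging is enabled, which it is by default):

ufw status verbose
dmesg | grep 'UFW BLOCK'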
Executing the following fixed the problem:
ufw allow in on weave
ufw reload
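After the reload, the new rule should appear in the rule list, and metrics for r2s13 should show up within a minute or so (metrics-server's default metric resolution is 60s):

ufw status | grep -i weave
kubectl top node r2s13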