I have a one-node Kubernetes cluster, and the memory usage reported by the metrics server does not seem to match the memory usage shown by the free command:
# kubectl top nodes
NAME         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
<node_ip>    1631m        10%    13477Mi         43%
# free -m
              total        used        free      shared  buff/cache   available
Mem:          32010       10794         488          81       20727       19133
Swap:         16127        1735       14392
The difference is significant, roughly 3 GB (13477Mi according to kubectl top versus 10794 used according to free -m).
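For reference, the raw figures behind kubectl top can also be pulled directly. This is a rough sketch, assuming metrics-server is running and jq is installed; <node_ip> is a placeholder for the actual node name:
# kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes" | jq '.items[] | {node: .metadata.name, memory: .usage.memory}'
# kubectl get --raw "/api/v1/nodes/<node_ip>/proxy/stats/summary" | jq '.node.memory'
The second command queries the kubelet Summary API, which is where the metrics server gets its node numbers; the workingSetBytes field there is what ends up in the MEMORY(bytes) column.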
I have also tested this on a 3-node cluster, and the issue is present there too:
# kubectl top nodes
NAME          CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
<node_ip1>    1254m        8%     26211Mi         84%
<node_ip2>    221m         1%     5021Mi          16%
<node_ip3>    363m         2%     8731Mi          28%
<node_ip4>    1860m        11%    20399Mi         66%
# free -m   (this is on node 1)
              total        used        free      shared  buff/cache   available
Mem:          32010        5787         369        1676       25853       24128
Swap:         16127           0       16127
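(As a side note: assuming a reasonably recent procps, free's used column is essentially total minus free minus buff/cache, which the numbers above confirm: 32010 - 369 - 25853 = 5788, matching used = 5787 up to rounding, and 32010 - 488 - 20727 = 10795 against used = 10794 on the one-node cluster.)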
Why is there a difference?
This is essentially a duplicate question and has been answered elsewhere, but in short:
The metrics exposed by the Metrics Server are collected by an instance of cAdvisor on each node, so what you see in the output of kubectl top nodes is how cAdvisor determines the current resource usage.
cAdvisor and free simply measure memory differently. Roughly speaking, cAdvisor reports the node's working set: the memory usage of the root cgroup minus the inactive file cache, exposed by the kubelet's Summary API as workingSetBytes. free, on the other hand, computes used as total minus free minus buff/cache, so it excludes the page cache entirely. Because the working set still counts active page cache (and kernel memory), kubectl top nodes will usually report a higher number than free's used column. For the exact details you would need to dig into the internals of cAdvisor and free.
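If you want to approximate cAdvisor's figure by hand, here is a minimal sketch, assuming cgroup v1 and the root memory cgroup (on cgroup v2 the files are /sys/fs/cgroup/memory.current and /sys/fs/cgroup/memory.stat, and the counter is named inactive_file instead of total_inactive_file):
# usage=$(cat /sys/fs/cgroup/memory/memory.usage_in_bytes)
# inactive_file=$(awk '/^total_inactive_file/ {print $2}' /sys/fs/cgroup/memory/memory.stat)
# echo "working set: $(( (usage - inactive_file) / 1024 / 1024 )) Mi"
The result should land close to the MEMORY(bytes) column of kubectl top nodes, and the gap between it and free's used is mostly the active page cache that the working set still includes.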