Why does Kubernetes pod CPU usage vary drastically between nodes (DaemonSets)?

2/14/2021

Preface: I'm a Kubernetes novice standing up my own cluster at home. I have 1 master and 3 worker nodes.

I'm struggling to determine why some of my pods managed by DaemonSets are using more CPU than their counterparts on other nodes.

Here's an excerpt from the kubectl top pod --all-namespaces results:

NAMESPACE        NAME                     CPU(cores)   Node
kube-system      coredns-p7xkg            20m          Master
kube-system      coredns-ztwnn            66m          Worker 1
kube-system      coredns-2n44b            68m          Worker 2
kube-system      coredns-smhnb            15m          Worker 3
kube-system      kube-flannel-ds-j4f6l    9m           Master
kube-system      kube-flannel-ds-fwwqg    67m          Worker 1
kube-system      kube-flannel-ds-sm7g6    44m          Worker 2
kube-system      kube-flannel-ds-qk9vq    11m          Worker 3
metallb-system   speaker-lfp8n            22m          Master
metallb-system   speaker-6plw9            100m         Worker 1
metallb-system   speaker-gt4fm            99m          Worker 2
metallb-system   speaker-bntfk            27m          Worker 3

As you can see above, the issue shows up on workers 1 and 2 across all 3 DaemonSets.

Master features: controller manager, API server, dashboard and metrics scraper, MetalLB controller, in addition to the above DaemonSet pods
Worker #1 features: the above + kube proxy
Worker #2 features: mariadb, metrics server, elastic quickstart es/kb/operator
Worker #3 features: phpmyadmin, gitea, splunk, nexus
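For context, the node-to-pod mapping above can be reproduced with a wide pod listing sorted by node (this is a standard kubectl invocation, nothing cluster-specific assumed):

```shell
# Show every pod with its node, sorted so each node's workload groups together
kubectl get pods --all-namespaces -o wide --sort-by='.spec.nodeName'
```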

I suspect there may be some networking issue causing the high CPU usage on worker nodes #1 and #2, but nothing in the pod logs is jumping out at me.
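For reference, this is roughly how I've been poking at it so far (the pod name comes from the table above; "worker-1" stands in for whatever your node is actually called):

```shell
# Compare node-level CPU to see whether the whole node is busier,
# or just these pods
kubectl top node

# Tail logs from one of the hot flannel pods (name from the table above)
kubectl logs -n kube-system kube-flannel-ds-fwwqg --tail=100

# Look for pressure conditions or oddities on the node itself
# ("worker-1" is a placeholder node name)
kubectl describe node worker-1
```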

CoreDNS v1.7.0
Flannel v0.13.1-rc1
MetalLB v0.9.5

Does anybody have any suggestions as to what to check to get to the bottom of this?

Thanks in advance!

-- Forreyer
coredns
flannel
kubernetes
metallb

0 Answers