K8S/EKS - rebalancing memory ratio

11/15/2021

Why doesn't the scheduler in 1.20 try to rebalance the memory distribution shown in the statistics below? The usage comes from a 3-replica setup where each replica of the service consumes a homogeneous amount of memory (roughly a 200 Mi difference between replicas). Why doesn't the scheduler move load away automatically? There is a guaranteed minimum of 3 nodes, and affinity rules for replicas on the same host or in the same AZ, with 3 AZs available. I don't understand why this is happening.
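
For context, the spread rules look roughly like the sketch below. This is a simplified reconstruction, assuming the affinity rules are soft pod anti-affinity preferences keeping replicas off the same host/AZ; my-service, the labels, and the image are placeholders, not the real manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service                  # placeholder name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app: my-service
                topologyKey: kubernetes.io/hostname        # prefer distinct hosts
            - weight: 50
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app: my-service
                topologyKey: topology.kubernetes.io/zone   # prefer distinct AZs
      containers:
        - name: my-service
          image: registry.example.com/my-service:latest    # placeholder image

Note that "preferred" terms are soft: the scheduler may still co-locate replicas when another node scores higher, and it only evaluates these rules at pod-creation time.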

/snap/bin/kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.12", GitCommit:"4bf2e32bb2b9fdeea19ff7cdc1fb51fb295ec407", GitTreeState:"clean", BuildDate:"2021-10-29T02:43:48Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.7-eks-d88609", GitCommit:"d886092805d5cc3a47ed5cf0c43de38ce442dfcb", GitTreeState:"clean", BuildDate:"2021-07-31T00:29:12Z", GoVersion:"go1.15.12", Compiler:"gc", Platform:"linux/amd64"}
kubectl top nodes
NAME                                 CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
some-ip.eu-west-2.compute.internal   112m         2%     2238Mi          33%
some-ip.eu-west-2.compute.internal   859m         5%     14881Mi         52%
some-ip.eu-west-2.compute.internal   887m         5%     18485Mi         65%
some-ip.eu-west-2.compute.internal   1900m        11%    27727Mi         97%
some-ip.eu-west-2.compute.internal   368m         9%     3975Mi          60%
some-ip.eu-west-2.compute.internal   196m         5%     3602Mi          53%
some-ip.eu-west-2.compute.internal   1450m        9%     27539Mi         97%
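
For a per-pod breakdown behind those numbers, something like the following can be used (Metrics Server is required, same as for kubectl top nodes; the node name is the redacted one from the table above):

# heaviest memory consumers first, across all namespaces
kubectl top pods --all-namespaces --sort-by=memory
# requests/limits the scheduler actually accounts for on one node
kubectl describe node some-ip.eu-west-2.compute.internal

Worth keeping in mind when reading the table: kube-scheduler scores nodes on resource requests at placement time and never moves already-running pods, so the live usage that kubectl top reports is not what it balances.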
-- anVzdGFub3RoZXJodW1hbg
amazon-eks
kubernetes

0 Answers