I have multiple Node.js apps/services running on Google Kubernetes Engine (GKE); 8 pods in total. I did not set up resource requests and limits when I created the pods, so now pods are failing to schedule with an "Insufficient CPU" error.
I understand I have to set up resource requests and limits (a rough sketch of the kind of spec I mean is below my questions). From what I know, 1 CPU on a node = 1000m (millicores)? My questions are:
1) What's the ideal resource limit I should set up? What is a sensible minimum? For a pod that's rarely used, can I go as low as 20m or 50m of CPU?
2) How many pods is it reasonable to run on a single Kubernetes node? Right now I have 2 nodes set up, which I want to reduce to 1.
3) What do people use in production? And for a development cluster?
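For reference, here is roughly the kind of spec I mean for one of my deployments. The image name is a placeholder, and the 50m / 64Mi request and 100m / 128Mi limit values are just numbers I'm considering, not something I've validated:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: express-gateway
spec:
  replicas: 1
  selector:
    matchLabels:
      app: express-gateway
  template:
    metadata:
      labels:
        app: express-gateway
    spec:
      containers:
      - name: express-gateway
        image: gcr.io/my-project/express-gateway:latest   # placeholder image
        resources:
          requests:
            cpu: 50m        # is a request this low OK for a rarely used pod?
            memory: 64Mi
          limits:
            cpu: 100m
            memory: 128Mi

From what I've read, the scheduler only looks at the requests when deciding whether a pod fits on a node, and the limits just cap what a running container can use.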
Here are my nodes (the output below is from kubectl describe nodes):
Node 1:
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
default express-gateway-58dff8647-f2kft 100m (10%) 0 (0%) 0 (0%) 0 (0%)
default openidconnect-57c48dc448-9jmbn 100m (10%) 0 (0%) 0 (0%) 0 (0%)
default web-78d87bdb6b-4ldsv 100m (10%) 0 (0%) 0 (0%) 0 (0%)
kube-system event-exporter-v0.1.9-5c8fb98cdb-tcd68 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system fluentd-gcp-v2.0.17-mhpgb 100m (10%) 0 (0%) 200Mi (7%) 300Mi (11%)
kube-system kube-dns-5df78f75cd-6hdfv 260m (27%) 0 (0%) 110Mi (4%) 170Mi (6%)
kube-system kube-dns-autoscaler-69c5cbdcdd-2v2dj 20m (2%) 0 (0%) 10Mi (0%) 0 (0%)
kube-system kube-proxy-gke-qp-cluster-default-pool-7b00cb40-6z79 100m (10%) 0 (0%) 0 (0%) 0 (0%)
kube-system kubernetes-dashboard-7b89cff8-9xnsm 50m (5%) 100m (10%) 100Mi (3%) 300Mi (11%)
kube-system l7-default-backend-57856c5f55-k9wgh 10m (1%) 10m (1%) 20Mi (0%) 20Mi (0%)
kube-system metrics-server-v0.2.1-7f8dd98c8f-5z5zd 53m (5%) 148m (15%) 154Mi (5%) 404Mi (15%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
893m (95%) 258m (27%) 594Mi (22%) 1194Mi (45%)
Node 2:
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
default kube-healthcheck-55bf58578d-p2tn6 100m (10%) 0 (0%) 0 (0%) 0 (0%)
default pubsub-function-675585cfbf-2qgmh 100m (10%) 0 (0%) 0 (0%) 0 (0%)
default servicing-84787cfc75-kdbzf 100m (10%) 0 (0%) 0 (0%) 0 (0%)
kube-system fluentd-gcp-v2.0.17-ptnlg 100m (10%) 0 (0%) 200Mi (7%) 300Mi (11%)
kube-system heapster-v1.5.2-7dbb64c4f9-bpc48 138m (14%) 138m (14%) 301656Ki (11%) 301656Ki (11%)
kube-system kube-dns-5df78f75cd-89c5b 260m (27%) 0 (0%) 110Mi (4%) 170Mi (6%)
kube-system kube-proxy-gke-qp-cluster-default-pool-7b00cb40-9n92 100m (10%) 0 (0%) 0 (0%) 0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
898m (95%) 138m (14%) 619096Ki (22%) 782936Ki (28%)
My plan is to move all of this onto a single node. Adding up the CPU requests above, the two nodes together currently request about 1.8 vCPU (893m + 898m), which I assume is why new pods no longer fit.
According to the official Kubernetes documentation:
1) You can go low on memory and CPU, but you need to give pods enough of both to function properly. The lowest I have gone is 100m CPU and 200Mi memory; see the snippet after this list. (It also depends heavily on the application you're running and on the number of replicas.)
2) You should not run anywhere near 100 pods per node; the documented maximum (110 pods per node) is the extreme case, not a target.
3) Production clusters are never single-node. This is a very good read on running Kubernetes in production.
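For example, point 1 would look something like this inside the container spec. The 100m / 200Mi figures are simply the lowest I have gone for my own apps; the limits below are an assumption of roughly double the requests, so tune both for your workload:

resources:
  requests:
    cpu: 100m
    memory: 200Mi
  limits:
    cpu: 200m       # assumed: roughly 2x the request, adjust per application
    memory: 400Mi

Only the requests count against the node's allocatable CPU and memory for scheduling; the limits cap what a running container may actually consume.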
But keep in mind that if you increase the number of pods on a single node, you might need to increase the node's size (in terms of resources).
Memory and CPU usage tends to grow roughly in proportion to the size of, and load on, the cluster.
Here is the official documentation stating the requirements.