I am trying to run a small app on a g1 GKE instance (a g1 instance has 1 vCPU, i.e. 1000 millicores) and am running into CPU request limits when scheduling pods. There are 4 pods, each running a different part of the app: a Django web application, an SQL service, and two helper Python processes.
The pods were created in the default namespace, so each one gets a 100m CPU request by default. It turns out that kube-system takes up 730 millicores on the node, which leaves me 270m to distribute between my pods. That is why only two pods start up and the others are left hanging in the Pending state. To get all the pods running I need to reduce each pod's CPU request (or reconsider the design).
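As far as I understand, that means setting an explicit request in each pod spec so the namespace default no longer applies. A minimal sketch of what I think one pod would look like (name, image and value are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: django-web                 # placeholder name
spec:
  containers:
  - name: django
    image: example/django-app      # placeholder image
    resources:
      requests:
        cpu: 50m                   # placeholder; this is the number I need to estimate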
I can roughly guess which pods will need more or less CPU. What would be a reasonable way to estimate the minimum millicore requirement for each pod?
If you have Heapster deployed in your Kubernetes cluster, you should be able to run kubectl top pods straight after launching a pod. Append -n kube-system to view pods in the kube-system namespace.
This displays pod metrics in the following format:
NAME                         CPU(cores)   MEMORY(bytes)
------------15186790-1swfm   0m           44Mi
------------88929288-0nqb1   0m           12Mi
------------22666682-c6cb5   0m           43Mi
------------85400619-k5vhh   6m           74Mi
However, bear in mind that these readings depend on the current load and can vary quite a bit, so sample them while the app is doing realistic work.
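Once the numbers settle under realistic load, you can set each pod's request slightly above what you observe, keeping the total of the four requests under the ~270m you have left. If you would rather not edit every pod spec, another option (a sketch, assuming the 100m figure comes from the namespace's default LimitRange, which GKE creates automatically) is to lower that default so containers without an explicit request get a smaller one:

apiVersion: v1
kind: LimitRange
metadata:
  name: low-cpu-defaults           # if the namespace already has a default LimitRange, edit that object instead of adding a second one
  namespace: default
spec:
  limits:
  - type: Container
    defaultRequest:
      cpu: 50m                     # example value; tune it to what kubectl top pods reports plus some headroom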