I am trying to optimize the CPU resources allocated to a pod based on previous runs of that pod.
The only problem is that I have only been able to find how much CPU is allocated to a given pod, not how much CPU a pod is actually using.
I may be reading too much into how the question is worded: "how much CPU a pod is actually using"...even though the question also mentions "to optimize...based on previous runs". So:
For usage history, see Rico's answer.
For current usage, see kubectl top. Use watch to refresh the stats every 2 seconds instead of rerunning the command by hand. For example:
watch kubectl top pod <pod-name> --namespace=<namespace-name>
This can be helpful, too: https://kubernetes.io/docs/tasks/debug-application-cluster/resource-usage-monitoring/#resource-metrics-pipeline
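If you want to feed those readings into something programmatic (e.g. to size requests from observed usage), the tabular output of kubectl top is easy to parse. A small sketch: the pod names and values below are made-up sample output so the parsing step runs as-is; in a live cluster you would pipe the real command instead, as shown in the comment.

```shell
#!/bin/sh
# Sum the CPU column (millicores) of `kubectl top pod` output.
# In a live cluster you would pipe the real command:
#   kubectl top pod --namespace=<namespace-name> | awk 'NR > 1 { ... }'
# The sample below is hypothetical output, included so this runs standalone.
top_sample='NAME                CPU(cores)   MEMORY(bytes)
web-7f9c4-abc12     250m         120Mi
web-7f9c4-def34     310m         200Mi
worker-5d8b6-xyz9   45m          64Mi'
echo "$top_sample" | awk 'NR > 1 { sub(/m$/, "", $2); total += $2 } END { print total "m" }'
```

This prints the total CPU usage across the listed pods in millicores.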
Consider kube-resource-explorer (github.com/dpetzold/kube-resource-explorer), which reports per-container CPU and memory requests and limits. For example, listing the kube-system namespace sorted by memory requests:
# /opt/go/bin/kube-resource-explorer -namespace kube-system -reverse -sort MemReq
Namespace Name CpuReq CpuReq% CpuLimit CpuLimit% MemReq MemReq% MemLimit MemLimit%
--------- ---- ------ ------- -------- --------- ------ ------- -------- ---------
kube-system calico-node-sqh7m/calico-node 250m 3% 0m 0% 0Mi 0% 0Mi 0%
kube-system metrics-server-58699455bc-kz4r9/metrics-server 0m 0% 0m 0% 0Mi 0% 0Mi 0%
kube-system kube-proxy-hftdz/kube-proxy 100m 1% 0m 0% 0Mi 0% 0Mi 0%
kube-system kube-proxy-x72g6/kube-proxy 100m 1% 0m 0% 0Mi 0% 0Mi 0%
kube-system kube-proxy-fhtqm/kube-proxy 100m 1% 0m 0% 0Mi 0% 0Mi 0%
kube-system tiller-deploy-5b7c66d59c-b72hk/tiller 0m 0% 0m 0% 0Mi 0% 0Mi 0%
kube-system calico-node-xvfjf/calico-node 250m 3% 0m 0% 0Mi 0% 0Mi 0%
kube-system calico-node-ptq8l/calico-node 250m 3% 0m 0% 0Mi 0% 0Mi 0%
kube-system addon-http-application-routing-external-dns-855cdc4946-jh68m/addon-http-application-routing-external-dns 0m 0% 0m 0% 0Mi 0% 0Mi 0%
kube-system addon-http-application-routing-nginx-ingress-controller-6bfljzb/addon-http-application-routing-nginx-ingress-controller 0m 0% 0m 0% 0Mi 0% 0Mi 0%
kube-system calico-node-wsxp7/calico-node 250m 3% 0m 0% 0Mi 0% 0Mi 0%
kube-system calico-typha-86bcb74584-vwq5d/calico-typha 0m 0% 0m 0% 0Mi 0% 0Mi 0%
kube-system calico-typha-horizontal-autoscaler-79d4669c84-7kd6s/autoscaler 10m 0% 10m 0% 0Mi 0% 0Mi 0%
kube-system kube-proxy-xq5cq/kube-proxy 100m 1% 0m 0% 0Mi 0% 0Mi 0%
kube-system kube-svc-redirect-nqpf6/redirector 5m 0% 0m 0% 2Mi 0% 0Mi 0%
kube-system kube-svc-redirect-k4zrl/redirector 5m 0% 0m 0% 2Mi 0% 0Mi 0%
kube-system kube-svc-redirect-kx8l5/redirector 5m 0% 0m 0% 2Mi 0% 0Mi 0%
kube-system kube-svc-redirect-pwd5r/redirector 5m 0% 0m 0% 2Mi 0% 0Mi 0%
kube-system coredns-autoscaler-657d77ffbf-ld6jp/autoscaler 20m 0% 0m 0% 10Mi 0% 0Mi 0%
kube-system addon-http-application-routing-default-http-backend-74698cnzjt8/addon-http-application-routing-default-http-backend 10m 0% 10m 0% 20Mi 0% 20Mi 0%
kube-system kube-svc-redirect-nqpf6/azureproxy 5m 0% 0m 0% 32Mi 0% 0Mi 0%
kube-system kube-svc-redirect-k4zrl/azureproxy 5m 0% 0m 0% 32Mi 0% 0Mi 0%
kube-system kube-svc-redirect-pwd5r/azureproxy 5m 0% 0m 0% 32Mi 0% 0Mi 0%
kube-system kube-svc-redirect-kx8l5/azureproxy 5m 0% 0m 0% 32Mi 0% 0Mi 0%
kube-system kubernetes-dashboard-6f697bd9f5-sjtnf/main 100m 1% 100m 1% 50Mi 0% 500Mi 1%
kube-system tunnelfront-6bb9dcf868-hh6kp/tunnel-front 10m 0% 0m 0% 64Mi 0% 0Mi 0%
kube-system coredns-7fbf4847b6-gtnpb/coredns 100m 1% 0m 0% 70Mi 0% 170Mi 0%
kube-system coredns-7fbf4847b6-qcsgb/coredns 100m 1% 0m 0% 70Mi 0% 170Mi 0%
kube-system omsagent-rs-7b98f76d84-kj9v6/omsagent 50m 0% 150m 1% 175Mi 0% 500Mi 1%
kube-system omsagent-7m8vs/omsagent 75m 0% 150m 1% 225Mi 0% 600Mi 2%
kube-system omsagent-8xcng/omsagent 75m 0% 150m 1% 225Mi 0% 600Mi 2%
kube-system omsagent-q6dj4/omsagent 75m 0% 150m 1% 225Mi 0% 600Mi 2%
kube-system omsagent-whnbp/omsagent 75m 0% 150m 1% 225Mi 0% 600Mi 2%
kube-system cluster-autoscaler-7c694f79fd-rzftb/cluster-autoscaler 100m 1% 200m 2% 300Mi 1% 500Mi 1%
--------- ---- ------ ------- -------- --------- ------ ------- -------- ---------
Total 2240m/31644m 7% 1070m/31644m 3% 1795Mi/111005Mi 1% 4260Mi/111005Mi 3%
That information is not stored anywhere in Kubernetes itself. You can typically get the current CPU utilization from a metrics endpoint, but you will have to use another system or database to store that information over time. The most common choice is the open-source time-series database Prometheus, whose contents you can visualize with another popular tool, Grafana. There are other open-source alternatives too, for example InfluxDB.
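As a sketch of what the history query looks like once Prometheus is scraping the cluster's cAdvisor metrics: the query below uses the standard container_cpu_usage_seconds_total metric to compute per-pod CPU usage averaged over a 5-minute window. The namespace value is a placeholder; verify the label names against your own setup.

```promql
sum by (pod) (
  rate(container_cpu_usage_seconds_total{namespace="my-namespace", image!=""}[5m])
)
```

Graphing this over days or weeks in Grafana gives you the "previous runs" history needed to choose sensible CPU requests.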
There are also plenty of commercial solutions that support Kubernetes metrics.
From Docker, you can query a container with docker stats. To show a one-time snapshot of the stats, including CPU, memory, and network usage, of all running containers:
docker stats --no-stream
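For scripting on top of that snapshot, docker stats supports a Go --format template; .Name and .CPUPerc are standard placeholders. A small sketch that flags containers above a CPU threshold: the container names and percentages below are made-up sample output so the parsing step runs as-is.

```shell
#!/bin/sh
# Flag containers whose CPU usage exceeds 50%.
# On a live host you would run:
#   docker stats --no-stream --format '{{.Name}} {{.CPUPerc}}' | awk '...'
# The sample below is hypothetical output, included so this runs standalone.
stats_sample='web-1 12.34%
worker-1 85.00%'
echo "$stats_sample" | awk '{ sub(/%$/, "", $2); if ($2 + 0 > 50) print $1 }'
```

Only the names of containers over the threshold are printed, which is convenient for alerting or cron jobs.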
For gathering metrics over time, cAdvisor, Prometheus, and Grafana together form a common open-source stack for collecting, storing, and viewing them.
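Wiring cAdvisor into Prometheus only takes a scrape job. A minimal hypothetical fragment, assuming a standalone cAdvisor container reachable at hostname cadvisor on its default port 8080; adjust the target to your deployment:

```yaml
# prometheus.yml (fragment) - scrape a standalone cAdvisor instance
scrape_configs:
  - job_name: cadvisor
    static_configs:
      - targets: ['cadvisor:8080']
```

Grafana then reads from Prometheus as a data source to chart the per-container history.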