I am setting up a pod, say test-pod, on Google Kubernetes Engine. When I deploy the pod and look at Workloads in the Google Cloud console, I can see that 100m CPU is allocated to my pod by default, but I cannot see how much memory my pod has consumed; the memory requested section always shows 0 there. I know we can restrict memory limits and initial allocation in the deployment YAML, but I want to know how much memory a pod gets allocated by default when no values are specified through YAML, and what is the maximum limit it can avail?
The real problem in many of these cases is not that the nodes are too small, but that we have not accurately specified resource limits for the pods.
Resource limits are set on a per-container basis using the resources property of the container spec, which is a v1 API object of type ResourceRequirements. Each object can specify both “limits” and “requests” for each resource type.
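As a minimal sketch, a per-container resource spec looks like this (the pod name, container name, image, and values here are illustrative, not taken from the question):

apiVersion: v1
kind: Pod
metadata:
  name: test-pod           # illustrative name
spec:
  containers:
  - name: app              # illustrative container name
    image: nginx           # example image
    resources:
      requests:            # what the scheduler reserves on a node for this container
        memory: "64Mi"
        cpu: "100m"
      limits:              # hard caps enforced at runtime
        memory: "128Mi"
        cpu: "500m"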
If you do not specify a memory limit for a container, one of the following situations applies:
The container has no upper bound on the amount of memory it uses. The container could use all of the memory available on the Node where it is running, which in turn could invoke the OOM Killer. Further, in case of an OOM Kill, a container with no resource limits will have a greater chance of being killed.
The container is running in a namespace that has a default memory limit, and the container is automatically assigned the default limit. Cluster administrators can use a LimitRange to specify a default value for the memory limit, as sketched below.
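As a rough sketch, a LimitRange that sets namespace-wide memory defaults might look like this (the name, namespace, and values are illustrative assumptions):

apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range    # illustrative name
  namespace: default
spec:
  limits:
  - type: Container
    default:               # default memory limit applied when a container sets none
      memory: 512Mi
    defaultRequest:        # default memory request applied when a container sets none
      memory: 256Mi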
When you set a limit but not a request, Kubernetes defaults the request to the limit. If you think about it from the scheduler’s perspective, it makes sense.
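To illustrate with a hypothetical manifest: if only a limit is given, the API server fills in an identical request.

apiVersion: v1
kind: Pod
metadata:
  name: limit-only-pod     # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
    resources:
      limits:
        memory: "256Mi"    # no request set: Kubernetes defaults the request to 256Mi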
It is important to set resource requests correctly: set them too low and nodes can get overloaded; set them too high and nodes will sit idle.
Useful article: memory-limits.
If you have no resource requests on your pod, it can be scheduled anywhere at all, even on the busiest node in your cluster, as though you had requested 0 memory and 0 CPU. If you have no resource limits, it can consume all available memory and CPU on its node.
(If it’s not obvious, realistic resource requests and limits are a best practice!)
You can set limits on individual pods.
If not, you can set limits on the overall namespace.
By default, there are no limits.
But there are some tricks:
Here is a very nice view of this:
https://blog.balthazar-rouberol.com/allocating-unbounded-resources-to-a-kubernetes-pod
When deploying a pod in a Kubernetes cluster, you normally have two choices when it comes to resource allotment:
defining CPU/memory resource requests and limits at the pod level
defining default CPU/memory requests and limits at the namespace level using a LimitRange
From the Docker documentation (assuming you are using the Docker runtime):
By default, a container has no resource constraints and can use as much of a given resource as the host’s kernel scheduler will allow
https://docs.docker.com/v17.09/engine/admin/resource_constraints/
Kubernetes pods' CPU and memory usage can be seen using the metrics-server service and the kubectl top pod command:
$ kubectl top --help
...
Available Commands:
...
pod Display Resource (CPU/Memory/Storage) usage of pods
...
Example in Minikube below:
$ minikube addons enable metrics-server
# wait 5 minutes for metrics-server to be up and running
$ kubectl top pod -n=kube-system
NAME                               CPU(cores)   MEMORY(bytes)
coredns-fb8b8dccf-6t5k8            6m           10Mi
coredns-fb8b8dccf-sjkvc            5m           10Mi
etcd-minikube                      37m          60Mi
kube-addon-manager-minikube        17m          20Mi
kube-apiserver-minikube            55m          201Mi
kube-controller-manager-minikube   30m          46Mi
kube-proxy-bsddk                   1m           11Mi
kube-scheduler-minikube            2m           12Mi
metrics-server-77fddcc57b-x2jx6    1m           12Mi
storage-provisioner                0m           15Mi
tiller-deploy-66b7dd976-d8hbk      0m           13Mi
This link has more information.