Is there a way to get the actual resource (CPU and memory) constraints inside a container?
Say the node has 4 cores, but my container is configured with only 1 core through resource requests/limits. It effectively uses 1 core, but /proc/cpuinfo still shows all 4. I want to size my application's thread pool based on the number of cores it can actually use. I'm also interested in memory.
You can check node capacities and the amounts already allocated with the kubectl describe nodes command. For example:
kubectl describe nodes e2e-test-node-pool-4lw4
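If you prefer machine-readable output, the same information is available via kubectl's jsonpath output, for example (using the node name from above):

kubectl get node e2e-test-node-pool-4lw4 -o jsonpath='{.status.allocatable}'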
Each Container of a Pod can specify one or more of the following:
spec.containers[].resources.limits.cpu
spec.containers[].resources.limits.memory
spec.containers[].resources.requests.cpu
spec.containers[].resources.requests.memory
You can use the Downward API to access the resource requests and limits. There is no need for service accounts or any other access to the apiserver for this.
Example:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-envars-resourcefieldref
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox:1.24
      command: [ "sh", "-c" ]
      args:
        - while true; do
            echo -en '\n';
            printenv MY_CPU_REQUEST MY_CPU_LIMIT;
            printenv MY_MEM_REQUEST MY_MEM_LIMIT;
            sleep 10;
          done;
      resources:
        requests:
          memory: "32Mi"
          cpu: "125m"
        limits:
          memory: "64Mi"
          cpu: "250m"
      env:
        - name: MY_CPU_REQUEST
          valueFrom:
            resourceFieldRef:
              containerName: test-container
              resource: requests.cpu
              divisor: "1m"
        - name: MY_CPU_LIMIT
          valueFrom:
            resourceFieldRef:
              containerName: test-container
              resource: limits.cpu
              divisor: "1m"
        - name: MY_MEM_REQUEST
          valueFrom:
            resourceFieldRef:
              containerName: test-container
              resource: requests.memory
        - name: MY_MEM_LIMIT
          valueFrom:
            resourceFieldRef:
              containerName: test-container
              resource: limits.memory
  restartPolicy: Never
Test:
$ kubectl logs dapi-envars-resourcefieldref
125
250
33554432
67108864
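The CPU values come back in millicores because of the divisor: "1m", and the memory values in bytes (33554432 = 32Mi, 67108864 = 64Mi). Here is a minimal sketch of sizing a thread pool from those environment variables inside the container; the round-up-to-whole-cores policy is my own choice, not something Kubernetes prescribes:

# MY_CPU_LIMIT is in millicores (e.g. 250); round up to whole cores
threads=$(( (MY_CPU_LIMIT + 999) / 1000 ))
[ "$threads" -lt 1 ] && threads=1   # always run at least one thread
echo "starting $threads worker thread(s)"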
Kubernetes translates resource requests and limits into kernel primitives. It is possible to access that information from inside the pod too, but it is considerably more complicated and also not portable (Windows nodes, anyone?):
/sys/fs/cgroup/cpu/kubepods/..QOS../podXX/cpu.shares — this is the request; divide by 1024 to get the fraction of a core
/sys/fs/cgroup/cpu/kubepods/..QOS../podXX/cpu.cfs_period_us and cpu.cfs_quota_us — divide cfs_quota_us by cfs_period_us to get the CPU limit, relative to 1 core
/sys/fs/cgroup/memory/kubepods/..QOS../podXX/memory.limit_in_bytes
/proc/..PID../oom_score_adj — good luck calculating that back to the memory request amount :)
The short answer is great, right? ;)
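That said, here is a rough sketch of reading those primitives from inside the container. It assumes cgroup v1; from inside a container the files usually show up directly under the cgroup mount rather than under the kubepods/... hierarchy, and cgroup v2 uses different file names (cpu.max, memory.max), so treat it as illustrative only:

# assumes cgroup v1 mounted at /sys/fs/cgroup
quota=$(cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us)
period=$(cat /sys/fs/cgroup/cpu/cpu.cfs_period_us)
if [ "$quota" -gt 0 ]; then
  cores=$(( (quota + period - 1) / period ))   # round the CPU limit up to whole cores
else
  cores=$(nproc)   # quota of -1 means no limit; fall back to the visible CPU count
fi
shares=$(cat /sys/fs/cgroup/cpu/cpu.shares)    # request: shares / 1024 = cores requested
mem=$(cat /sys/fs/cgroup/memory/memory.limit_in_bytes)
echo "cpu limit: $cores core(s), cpu.shares: $shares, memory limit: $mem bytes"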