I have a 3x3 k8s v1.10.2 cluster which I was trying to stress-test. I put together this command:
kubectl run stress --replicas=1 --image=lorel/docker-stress-ng -- --cpu 8 --io 8 --vm 4 --vm-bytes 1024m --fork 4 --timeout 5m --metrics-brief
and when I look at the node usage via:
kubectl describe node addons-worker-01
the node reports no usage by the pod:
Non-terminated Pods:  (6 in total)
  Namespace    Name                     CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------    ----                     ------------  ----------  ---------------  -------------
  default      stress-765b45bdd5-qwqbj  0 (0%)        0 (0%)      0 (0%)           0 (0%)
yet when I look at top, the node shows the usage I would expect. Is this expected? Am I missing something?
kubectl describe node
shows the CPU and memory requests and limits declared by the pods scheduled on that node, not their actual usage. Your stress pod was created without any requests or limits, which is why every column reports 0.
For example:
  Namespace    Name                    CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------    ----                    ------------  ----------  ---------------  -------------
  default      stress                  100m (1%)     0 (0%)      0 (0%)           0 (0%)
  kube-system  fluentd                 100m (1%)     0 (0%)      200Mi (0%)       300Mi (1%)
  kube-system  kube-dns                260m (3%)     0 (0%)      110Mi (0%)       170Mi (0%)
  kube-system  kube-proxy-gke-cluster  100m (1%)     0 (0%)      0 (0%)           0 (0%)
  kube-system  kubernetes-dashboard    100m (1%)     100m (1%)   100Mi (0%)       300Mi (1%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ------------  ----------  ---------------  -------------
  660m (8%)     100m (1%)   410Mi (1%)       770Mi (2%)
This means there are 5 pods on the node; for example, fluentd requests 100m of CPU with no CPU limit set, and requests 200Mi of memory with a memory limit of 300Mi. The "Allocated resources" section is simply the sum of these per-pod values: 660m of CPU requests, 100m of CPU limits, 410Mi of memory requests and 770Mi of memory limits.
You can set requests and limits in the pod's YAML manifest, for example:
apiVersion: v1
kind: Pod
metadata:
  name: stress
spec:
  containers:
  - name: stress
    image: nginx
    resources:
      limits:
        memory: 512Mi
      requests:
        memory: 128Mi
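Alternatively, since you created the deployment with kubectl run, kubectl of that era (v1.10) also accepts --requests and --limits flags, so you could set the values directly on the command line. The numbers below are only an illustration, not a recommendation:
kubectl run stress --replicas=1 --image=lorel/docker-stress-ng \
  --requests='cpu=500m,memory=512Mi' --limits='cpu=1,memory=1Gi' \
  -- --cpu 8 --io 8 --vm 4 --vm-bytes 1024m --fork 4 --timeout 5m --metrics-brief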
You can read more about setting memory and CPU requests and limits in the Kubernetes manage-resources docs.
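Once the pod is running with requests and limits set, kubectl describe node will show non-zero values for it, and you can confirm what the pod itself declares (using the pod name from the example above) with:
kubectl describe pod stress
# or just the resources section:
kubectl get pod stress -o jsonpath='{.spec.containers[*].resources}'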
For monitoring I would recommend Prometheus or Google Cloud Monitoring.
You can also use kubectl top nodes,
which shows the current load on each node:
NAME                          CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-cluster-1-default-pool-1  7969m       100%  4708Mi         17%
gke-cluster-1-default-pool-2  56m         0%    491Mi          1%
gke-cluster-1-default-pool-3  60m         0%    568Mi          2%
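If you want to see what the stress pod itself is actually consuming rather than per-node totals, kubectl top also works at the pod level (like kubectl top nodes, it needs Heapster or metrics-server running in the cluster):
kubectl top pods
# per-container breakdown for a single pod:
kubectl top pod stress-765b45bdd5-qwqbj --containers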