I currently have a cluster running on GCloud, which I created with 3 nodes. This is what I get when I run kubectl describe nodes:
Name: node1
Capacity:
cpu: 1
memory: 3800808Ki
pods: 40
Non-terminated Pods: (3 in total)
Namespace    Name                                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits
─────────    ────                                                         ────────────  ──────────  ───────────────  ─────────────
default      my-pod1                                                      100m (10%)    0 (0%)      0 (0%)           0 (0%)
default      my-pod2                                                      100m (10%)    0 (0%)      0 (0%)           0 (0%)
kube-system  fluentd-cloud-logging-gke-little-people-e39a45a8-node-75fn  100m (10%)    100m (10%)  200Mi (5%)       200Mi (5%)
Allocated resources:
(Total limits may be over 100%, i.e., overcommitted. More info: http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md)
CPU Requests  CPU Limits  Memory Requests  Memory Limits
────────────  ──────────  ───────────────  ─────────────
300m (30%)    100m (10%)  200Mi (5%)       200Mi (5%)
Name: node2
Capacity:
cpu: 1
memory: 3800808Ki
pods: 40
Non-terminated Pods: (4 in total)
Namespace    Name                                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits
─────────    ────                                                         ────────────  ──────────  ───────────────  ─────────────
default      my-pod3                                                      100m (10%)    0 (0%)      0 (0%)           0 (0%)
kube-system  fluentd-cloud-logging-gke-little-people-e39a45a8-node-wcle  100m (10%)    100m (10%)  200Mi (5%)       200Mi (5%)
kube-system  heapster-v11-yi2nw                                           100m (10%)    100m (10%)  236Mi (6%)       236Mi (6%)
kube-system  kube-ui-v4-5nh36                                             100m (10%)    100m (10%)  50Mi (1%)        50Mi (1%)
Allocated resources:
(Total limits may be over 100%, i.e., overcommitted. More info: http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md)
CPU Requests  CPU Limits  Memory Requests  Memory Limits
────────────  ──────────  ───────────────  ─────────────
400m (40%)    300m (30%)  486Mi (13%)      486Mi (13%)
Name: node3
Capacity:
cpu: 1
memory: 3800808Ki
pods: 40
Non-terminated Pods: (3 in total)
Namespace    Name                                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits
─────────    ────                                                         ────────────  ──────────  ───────────────  ─────────────
kube-system  fluentd-cloud-logging-gke-little-people-e39a45a8-node-xhdy  100m (10%)    100m (10%)  200Mi (5%)       200Mi (5%)
kube-system  kube-dns-v9-bo86j                                            310m (31%)    310m (31%)  170Mi (4%)       170Mi (4%)
kube-system  l7-lb-controller-v0.5.2-ae0t2                                110m (11%)    110m (11%)  70Mi (1%)        120Mi (3%)
Allocated resources:
(Total limits may be over 100%, i.e., overcommitted. More info: http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md)
CPU Requests  CPU Limits  Memory Requests  Memory Limits
────────────  ──────────  ───────────────  ─────────────
520m (52%)    520m (52%)  440Mi (11%)      490Mi (13%)
Now, as you can see, I have 3 pods of my own: 2 on node1 and 1 on node2. What I would like to do is move all my pods to node1 and delete the other two nodes. However, there seem to be pods belonging to the kube-system namespace, and I don't know what effect deleting them might have.
I can tell that the pods named fluentd-cloud-logging... and heapster... are used for logging and for monitoring resource usage, but I don't really know whether I can move the pods kube-dns-v9-bo86j and l7-lb-controller-v0.5.2-ae0t2 to another node without repercussions.
Can anyone offer some insight as to how I should proceed?
Thank you very much.
Killing them so that they'll be rescheduled onto another node is perfectly fine. All of them can be rescheduled, except for the fluentd pods, which are bound one to each node.
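For example, a minimal sketch using the pod names from the question (the kube-dns pod is deleted by name; its replication controller will create a replacement under a new generated name, which the scheduler can place on any node with room):

# Delete the pod; its controller recreates it and the scheduler
# places the replacement on whichever node has capacity.
kubectl delete pod kube-dns-v9-bo86j --namespace=kube-system

# Check which node the replacement landed on.
kubectl get pods --namespace=kube-system -o wide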
If you want to downsize your cluster, you can just delete two of the three nodes and let the system reschedule any pods that were lost when the nodes were removed. If there isn't enough space on the remaining node, you may see the pods go permanently pending. Having the kube-system pods pending isn't ideal, because each of them performs a "system function" for your cluster (e.g. DNS, monitoring, etc.) and without them running your cluster won't be fully functional.
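As a sketch, assuming a GKE cluster named my-cluster in zone us-central1-b (both placeholders; older gcloud releases spell the flag --size instead of --num-nodes):

# Shrink the cluster to a single node; pods from the removed nodes
# are rescheduled onto the remaining one, capacity permitting.
gcloud container clusters resize my-cluster --zone us-central1-b --num-nodes 1

# Verify everything came back up on the remaining node.
kubectl get pods --all-namespaces -o wide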
You can also disable some of the system pods if you don't need their functionality, using the gcloud container clusters update command.
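For instance, something along these lines should turn off the HTTP load-balancing addon that runs l7-lb-controller (my-cluster is a placeholder; check gcloud container clusters update --help for the addon names your gcloud release accepts):

# Disable the L7 load-balancer addon; its l7-lb-controller pod is removed.
gcloud container clusters update my-cluster --update-addons HttpLoadBalancing=DISABLED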