I have a few kubefiles defining Kubernetes services and deployments. When I create a 4-node cluster on GCP (the node count never changes), all the small kube-system pods get spread across the nodes instead of filling one node at a time. The same happens with the pods created when I apply my kubefiles.
The problem is that sometimes I have plenty of total CPU available for a deployment, but its pods can't be scheduled because no single node has that much free. The capacity is fragmented; the deployment would obviously fit if the kube-system pods had all been packed onto one node instead of spread out.
I can avoid problems by using bigger/fewer nodes, but I feel like I shouldn't have to do that. I'd also rather not deal with pod affinity settings for such a basic testing setup. Is there a solution to this, maybe a setting to have it prefer filling nodes in order? Like using an already opened carton of milk instead of opening a fresh one each time.
Haven't tested this, but the order I apply files in probably matters, meaning applying the biggest CPU users first could help. But that seems like a hack.
I know there's been some upstream discussion about rescheduling that gets complicated because they have to handle dynamic node pools, and it doesn't seem to be ready yet, so I'm guessing there's no way to have Kubernetes rearrange my pods dynamically.
You can write your own scheduler; almost every component in Kubernetes is replaceable.
I know you probably won't. If you don't want to deal with affinity, you definitely won't write your own scheduler. But know that you have that option.
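A middle ground, if you ever want it: rather than writing a scheduler from scratch, you can deploy a second stock kube-scheduler with a bin-packing profile (the NodeResourcesFit plugin scored with MostAllocated) and point individual pods at it. This is only a sketch under assumptions: the profile name `bin-packing` is made up, you have to run and maintain that extra scheduler yourself, and GKE's managed control plane won't let you reconfigure the default one.

```yaml
# Sketch of a scheduler config that prefers already-full nodes.
# Passed to a self-hosted kube-scheduler via its --config flag.
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: bin-packing        # hypothetical name; pods opt in via spec.schedulerName
    pluginConfig:
      - name: NodeResourcesFit
        args:
          scoringStrategy:
            type: MostAllocated       # score nodes higher the fuller they already are
            resources:
              - name: cpu
                weight: 1
              - name: memory
                weight: 1
```

Pods opt in by setting `schedulerName: bin-packing` in their spec; anything that doesn't set it keeps going through the default spreading behaviour.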
If you stay on the native GKE scheduler, at least make sure every pod has resource requests and limits set. The scheduler decides whether a pod fits on a node based on its requests, so without them it can't account for your workloads properly.
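For example, a minimal sketch (the name, image, and numbers are placeholders) of a deployment with the resources block filled in:

```yaml
# Hypothetical deployment fragment: the part that matters here is resources,
# which is what the scheduler uses to decide whether a pod fits on a node.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api                       # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
        - name: my-api
          image: gcr.io/my-project/my-api:latest   # placeholder image
          resources:
            requests:
              cpu: 500m              # what the scheduler reserves on a node
              memory: 256Mi
            limits:
              cpu: "1"               # enforced by the kubelet at runtime
              memory: 512Mi
```

The requests are what placement decisions are made against; the limits only cap the container at runtime.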