Background:
I have an EKS cluster with 2 nodes (t3.small).
The cluster runs several pods, including:
- 1 pod for web frontend
- 1 pod for backend
- AWS ALB controller
- External DNS
Current behavior:
- All application and controller pods (app-backend-deployment, app-frontend-deployment, aws-load-balancer-controller, external-dns, cert-manager, cert-manager-cainjector, cert-manager-webhook, etc. — 11 pods in total) are scheduled onto a single node.
- The other node is only running 2 pods (aws-node & kube-proxy), meaning no application pods are assigned to it.
- Consequence: The loaded node frequently goes down or turns NotReady due to CPU/memory pressure, while the other node sits completely idle.
Desired behavior (or my opinionated expectation): The pods should be spread more evenly across both nodes.
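For context, here is a minimal sketch of what I assume the fix might look like — a `topologySpreadConstraints` block added to each Deployment's pod template (the `app: app-backend` label is hypothetical, from my setup; the `kubernetes.io/hostname` topology key is the standard per-node label):

```yaml
# Sketch only: add to the pod template spec of a Deployment
# (e.g. app-backend-deployment) so the scheduler tries to keep
# the pod count balanced across nodes.
spec:
  template:
    spec:
      topologySpreadConstraints:
        - maxSkew: 1                        # allow at most 1 pod difference between nodes
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: ScheduleAnyway # soft constraint; use DoNotSchedule to enforce
          labelSelector:
            matchLabels:
              app: app-backend              # hypothetical label on my backend pods
```

I'm not sure whether this is the right mechanism, or whether missing resource requests are the real cause.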
Am I missing anything in the configuration?