Kubernetes breaks (no response from kubectl) when I have too many Pods running in the cluster (1,000 Pods).
There are more than enough resources (CPU and memory), so it seems to me that some kind of controller is breaking down and is unable to handle a large number of Pods.
The workload I need to run is massively parallel, hence the high number of Pods. In fact, I would like to run many times more than 1,000 Pods, maybe even 100,000.
My Kubernetes master node is an AWS EC2 m4.xlarge instance.
My intuition tells me that the master node's network performance is what is holding the cluster back, but is that right?
Any ideas?
Details:
I am running 1000 Pods in a Deployment.
When I run kubectl get deploy, it shows:
    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE
    1000      1000      1000         458
and through my application-side DB I can confirm that only 458 Pods are actually doing work.
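In case it is useful, here is roughly how I have been counting the working Pods from the Kubernetes side (app=worker is just a placeholder for the label my Deployment actually uses):

    # Pods from the Deployment that never reached the Running phase
    kubectl get pods -l app=worker --field-selector=status.phase!=Running

    # Count the Pods that are Running
    kubectl get pods -l app=worker --field-selector=status.phase=Running --no-headers | wc -l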
When I run kops validate cluster, I receive this warning:
    VALIDATION ERRORS
    KIND              NAME                                                     MESSAGE
    ComponentStatus   controller-manager                                       component is unhealthy
    ComponentStatus   scheduler                                                 component is unhealthy
    Pod               kube-system/kube-controller-manager-<ip>.ec2.internal    kube-system pod "kube-controller-manager-<ip>.ec2.internal" is not healthy
    Pod               kube-system/kube-scheduler-<ip>.ec2.internal             kube-system pod "kube-scheduler-<ip>.ec2.internal" is not healthy
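For completeness, this is roughly how I have been inspecting the unhealthy control-plane Pods (the names are the ones reported above):

    # Events and restart counts for the unhealthy control-plane Pods
    kubectl -n kube-system describe pod kube-controller-manager-<ip>.ec2.internal
    kubectl -n kube-system describe pod kube-scheduler-<ip>.ec2.internal

    # Recent logs, looking for leader-election or apiserver timeouts
    kubectl -n kube-system logs kube-controller-manager-<ip>.ec2.internal --tail=100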
The fact that it takes a long time to list your Pods is not really about your nodes; they will be able to handle as many Pods as their resources (CPU and memory) allow.
The issue you are seeing is more about the kube-apiserver being able to query and return a large number of Pods or other resources.
So the two contention points here are the kube-apiserver and etcd, where the state of everything in a Kubernetes cluster is stored. Focus on optimizing those two components and you will get faster responses from, say, kubectl get pods.
(Networking is another contention point, but only if you are issuing kubectl commands over a slow broadband connection.)
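As a rough way to see where that pressure shows up (the metric name below comes from the kube-apiserver's own /metrics endpoint and may vary slightly between Kubernetes versions):

    # How long does a full Pod list actually take, and how many objects is it?
    time kubectl get pods --all-namespaces --no-headers | wc -l

    # Latency the apiserver sees when talking to etcd
    kubectl get --raw /metrics | grep etcd_request_duration_seconds_sum | head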
You can try:
Setting up an HA external etcd cluster on fairly beefy machines with fast disks.
Upgrading the machines where your kube-apiserver(s) live (a kops sketch of both of these changes follows below).
Following the further guidelines described here.
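For example, with kops the first two points could look roughly like this (the instance group name, machine type and volume settings are placeholders you would adapt to your cluster):

    # Move the control plane to a larger instance type
    kops edit ig master-us-east-1a      # set spec.machineType, e.g. m4.4xlarge

    # Give etcd faster, provisioned-IOPS volumes in the cluster spec
    kops edit cluster
    #   etcdClusters:
    #   - name: main
    #     etcdMembers:
    #     - instanceGroup: master-us-east-1a
    #       name: a
    #       volumeType: io1
    #       volumeIops: 3000
    #       volumeSize: 100

    # Apply and roll out the changes
    kops update cluster --yes
    kops rolling-update cluster --yes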