How can I dimension the Nodes (cpu, memory) in a Kind Cluster?

6/20/2020

I am a newbie and this may be a stupid question, but I could not find an answer in the Kind documentation or on Stack Overflow, so I dare ask:

  • I run Kind (Kubernetes-in-Docker) on an Ubuntu machine with 32 GB of memory and 120 GB of disk.
  • I need to run a Cassandra cluster on this Kind cluster, and each node needs at least 0.5 CPU and 1 GB of memory (see the sketch after this list for how I express this in the Pod spec).
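
For reference, this is roughly how those requirements appear in the container spec of my Cassandra StatefulSet (a sketch: the container name and image are placeholders, only the resource figures matter):

# Per-container resources in the Cassandra Pod template; the name and
# image are placeholders, the requests match the figures above.
containers:
- name: cassandra
  image: cassandra:3.11
  resources:
    requests:
      cpu: "500m"   # 0.5 CPU
      memory: 1Gi   # ~1 GB of memory
    limits:
      cpu: "500m"
      memory: 1Gi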

When I describe the node, it reports this:

Capacity:
  cpu:                8
  ephemeral-storage:  114336932Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             32757588Ki
  pods:               110
Allocatable:
  cpu:                8
  ephemeral-storage:  114336932Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             32757588Ki
  pods:               110

So in theory, there are more than enough resources to go around. However, when I try to deploy the Cassandra deployment, the first Pod stays stuck in the 'Pending' status because of a lack of resources. And indeed, the Node's allocated resources look like this:

Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests   Limits
  --------           --------   ------
  cpu                100m (1%)  100m (1%)
  memory             50Mi (0%)  50Mi (0%)
  ephemeral-storage  0 (0%)     0 (0%)
  hugepages-1Gi      0 (0%)     0 (0%)
  hugepages-2Mi      0 (0%)     0 (0%)

The node does not actually get access to the available resources: it stays limited to 10% of one CPU and 50 Mi of memory.

So, reading the exchange above and having read #887, I understand that I need to configure Docker on my host machine so that it allows the containers simulating the Kind nodes to grab more resources. But then... how can I give such parameters to Kind so that they are taken into account when creating the cluster?
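
(What I have in mind on the Docker side is something like the following sketch: since the Kind nodes are plain Docker containers, I assume their resources could in principle be adjusted with docker update; kind-control-plane is the default container name for a single-node cluster, and the figures are arbitrary examples.)

# Hypothetical sketch: adjust the resources of an existing Kind node
# container from the Docker side. 'kind-control-plane' is the default
# name for the node of a cluster created with default settings.
docker update --cpus 4 --memory 8g --memory-swap 8g kind-control-plane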

-- Thierry Souche
kind
kubernetes

1 Answer

6/24/2020


Sorry for this post: I finally found out that the issue was related to the StorageClass not being properly configured in the spec of the Cassandra cluster, and not to the dimensioning of the nodes.

I changed the cassandra-statefulset.yaml file to use the 'standard' StorageClass: this StorageClass is provisioned by default on a Kind cluster since version 0.7, and it works fine. Since Cassandra is resource-hungry, and depending on the machine, you may also have to increase the timeout parameters so that the Pods are not considered faulty during the deployment of the Cassandra cluster. I had to increase the timeouts from 15 s and 5 s to 25 s and 15 s respectively.
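
Concretely, the two changes in cassandra-statefulset.yaml looked roughly like this (a sketch: the readiness-probe command and the 1Gi storage request follow the official Kubernetes Cassandra tutorial and may differ in your copy; only storageClassName and the two timing values are the actual changes):

# Readiness probe: timings raised from 15 s / 5 s to 25 s / 15 s.
readinessProbe:
  exec:
    command:
    - /bin/bash
    - -c
    - /ready-probe.sh
  initialDelaySeconds: 25   # was 15
  timeoutSeconds: 15        # was 5

# Volume claim template: use the 'standard' StorageClass that Kind
# provisions by default since version 0.7.
volumeClaimTemplates:
- metadata:
    name: cassandra-data
  spec:
    accessModes: [ "ReadWriteOnce" ]
    storageClassName: standard
    resources:
      requests:
        storage: 1Gi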

This topic should be closed.

-- Thierry Souche
Source: StackOverflow