Autoscaling with Kubernetes

6/13/2017

Objective

A scalable, resilient IoT platform that can handle potentially thousands of devices. We are using Kubernetes to deploy this chain:

kafka --> logstash --> elasticsearch
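
Roughly, devices publish readings to Kafka, Logstash consumes them and indexes the events into Elasticsearch, and consumers query Elasticsearch. A minimal sketch of the two ends of that chain in Python, assuming the kafka-python and elasticsearch clients and placeholder host, topic, and index names ("kafka:9092", "devices", "devices-*"):

    import json

    from elasticsearch import Elasticsearch
    from kafka import KafkaProducer

    # Devices (or an ingest gateway) publish JSON readings to a Kafka topic.
    producer = KafkaProducer(
        bootstrap_servers="kafka:9092",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )
    producer.send("devices", {"device_id": "sensor-42", "temperature": 21.5})
    producer.flush()

    # Logstash sits in between: it consumes the "devices" topic and indexes
    # each event into Elasticsearch, where it can then be queried.
    es = Elasticsearch(["http://elasticsearch:9200"])
    print(es.count(index="devices-*"))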

Choice 1

We start with a static configuration and manually scale up as demand grows.
Pros: Simple, and it works.
Cons: Requires human intervention.
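
With this choice, "scale up" is just a kubectl command (e.g. kubectl scale deployment logstash --replicas=5) or the equivalent API call. A minimal sketch of that call, assuming the official kubernetes Python client and a Deployment named "logstash" in the "default" namespace (both placeholders):

    from kubernetes import client, config

    # Load credentials from ~/.kube/config; inside the cluster use
    # config.load_incluster_config() instead.
    config.load_kube_config()
    apps = client.AppsV1Api()

    # Equivalent of `kubectl scale deployment logstash --replicas=5`.
    apps.patch_namespaced_deployment_scale(
        name="logstash",
        namespace="default",
        body={"spec": {"replicas": 5}},
    )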

Choice 2

We build software that monitors the different components and automatically scales up when demand grows.
Pros: Does not require human intervention.
Cons: Complex.
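
In its simplest form such a controller is just a monitoring loop. A rough sketch, assuming the kubernetes Python client, a Logstash Deployment to scale, and a hypothetical get_consumer_lag() helper that would read the Kafka consumer-group lag from some metrics source; names and thresholds are placeholders:

    import time

    from kubernetes import client, config


    def get_consumer_lag(group):
        # Hypothetical helper: fetch the Kafka consumer-group lag for `group`
        # from a metrics source (Burrow, JMX exporter, kafka-consumer-groups, ...).
        raise NotImplementedError


    config.load_kube_config()
    apps = client.AppsV1Api()

    while True:
        lag = get_consumer_lag("logstash")
        scale = apps.read_namespaced_deployment_scale("logstash", "default")
        replicas = scale.spec.replicas
        if lag > 100000 and replicas < 10:  # placeholder thresholds
            apps.patch_namespaced_deployment_scale(
                name="logstash", namespace="default",
                body={"spec": {"replicas": replicas + 1}},
            )
        time.sleep(60)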

It is complex because too many cases arise, for example:

  • Kafka: Simply adding a new broker does not redistribute load; we either have to create new partitions for a set of topics, or initially create enough partitions and later reassign them to the new broker (see the Kafka sketch after this list).
  • Elasticsearch: If we want to distribute read load, we add data nodes and increase the number of replicas. If we want to distribute write load we must use sharding: data is split into shards, and the number of primary shards cannot be changed after index creation; we can work around this limitation with aliases (see the alias sketch after this list).
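
For the Kafka case, adding partitions to an existing topic goes through the admin API. A sketch, assuming kafka-python's KafkaAdminClient and a placeholder topic named "devices":

    from kafka.admin import KafkaAdminClient, NewPartitions

    admin = KafkaAdminClient(bootstrap_servers="kafka:9092")
    # Raise the partition count of "devices" to 12 so that a subsequent
    # partition reassignment can move some partitions onto the new broker.
    admin.create_partitions({"devices": NewPartitions(total_count=12)})

For the Elasticsearch case, writing through an alias is what lets us swap in a new index with more primary shards without touching the clients. A sketch, assuming the 7.x-style elasticsearch client and placeholder index/alias names:

    from elasticsearch import Elasticsearch

    es = Elasticsearch(["http://elasticsearch:9200"])
    # Create a new index with more primary shards, then atomically move the
    # write alias from the old index to the new one.
    es.indices.create(index="devices-v2",
                      body={"settings": {"number_of_shards": 12}})
    es.indices.update_aliases(body={"actions": [
        {"remove": {"index": "devices-v1", "alias": "devices-write"}},
        {"add": {"index": "devices-v2", "alias": "devices-write"}},
    ]})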

Autoscaling also involves interacting with the cloud provider to automatically add resources.
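
On the infrastructure side that usually means a cluster autoscaler or a direct call to the provider's API to grow the node pool. A sketch, assuming AWS with boto3 and a placeholder Auto Scaling group ("k8s-workers") backing the Kubernetes worker nodes:

    import boto3

    autoscaling = boto3.client("autoscaling", region_name="eu-west-1")
    # Add one worker node to the (placeholder) group backing the cluster.
    group = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=["k8s-workers"]
    )["AutoScalingGroups"][0]
    autoscaling.set_desired_capacity(
        AutoScalingGroupName="k8s-workers",
        DesiredCapacity=group["DesiredCapacity"] + 1,
    )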

-- Baroudi Safwen
Tags: cloud, iot, kubernetes, scalability
