Kubernetes - Single Cluster or Multiple Clusters

2/17/2019

I'm migrating a number of applications from AWS ECS to Azure AKS. As this is my first production Kubernetes deployment, I'd like to ensure that it's set up correctly from the off.

The applications being moved use resources to varying degrees, with some being more memory intensive and others more CPU intensive, and all run at different scales.

After some research, I'm not sure which approach would be best: running a single large cluster with each application in its own Namespace, or running a cluster per application with Federation.

I should note that I'll need to monitor resource usage per application for cost management (amongst other things), and communication is needed between most of the applications.

I'm able to set up both layouts and I'm sure both would work, but I'm not sure of the pros and cons of each approach, whether I should avoid one altogether, or whether I should consider other options.

-- olliefinn
azure-aks
kubernetes

4 Answers

2/18/2019

A single cluster (with namespaces and RBAC) is easier to set up and manage, and a single Kubernetes cluster can support high load.
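As a minimal sketch of the namespaces-plus-RBAC approach (all names here are placeholders, not anything from the question), each application gets its own namespace and its team is bound to a Role scoped to that namespace only:

```yaml
# Hypothetical namespace for one application
apiVersion: v1
kind: Namespace
metadata:
  name: app-a
---
# Role limited to resources inside the app-a namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-a-developer
  namespace: app-a
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "services", "deployments"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
---
# Bind the Role to a (hypothetical) group of developers
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-a-developers
  namespace: app-a
subjects:
  - kind: Group
    name: app-a-team
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: app-a-developer
  apiGroup: rbac.authorization.k8s.io
```

Members of `app-a-team` can then manage workloads in `app-a` but can't touch other applications' namespaces.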

If you really want multiple clusters, you could also try Istio multi-cluster (an Istio service mesh spanning multiple clusters).

-- Vikram Hosakote
Source: StackOverflow

2/17/2019

As you said that communication is needed among the applications, I suggest you go with one cluster. Application isolation can be achieved by deploying each application in a separate namespace. You can collect metrics and set resource quotas at the namespace level, so you can take action at the application level.
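For example, a ResourceQuota like the following (namespace name and values are illustrative) caps what one application's namespace can consume, and also gives you a per-application bucket for cost tracking:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: app-a-quota
  namespace: app-a         # hypothetical application namespace
spec:
  hard:
    requests.cpu: "4"      # total CPU requests across the namespace
    requests.memory: 8Gi   # total memory requests across the namespace
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"
```

With metrics-server installed, something like `kubectl top pods -n app-a` then shows actual usage per application namespace.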

-- P Ekambaram
Source: StackOverflow

2/17/2019

Because you are at the beginning of your Kubernetes journey, I would go with separate clusters for each stage you have (or at least separate dev and prod). You can very easily take your cluster down (I did it several times with resource starvation). Also, without correctly set network policies, you might find that services from different stages/namespaces (like test and sandbox) communicate with each other, or that a pipeline that should deploy to dev changes something in another namespace. Why risk production being affected by dev work?
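The cross-namespace traffic risk mentioned above can be reduced with a default-deny policy per namespace. A common sketch (assuming a CNI plugin that enforces NetworkPolicy, and a hypothetical `test` namespace):

```yaml
# Deny all ingress traffic to pods in this namespace unless
# another NetworkPolicy explicitly allows it
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: test      # hypothetical stage namespace
spec:
  podSelector: {}      # empty selector matches every pod in the namespace
  policyTypes:
    - Ingress
```

Allowed traffic is then opted back in with additional, more specific NetworkPolicies, rather than everything being reachable by default.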

Even if you don't have to upgrade the control plane yourself, AKS still has its versions and flags, and it is better to test them on a separate cluster before moving to production.

So my initial decision would be to set some hard boundaries: different clusters. Later, once you gain more experience with AKS and Kubernetes, you can revisit that decision.

-- Liviu Costea
Source: StackOverflow

2/18/2019

Depends... Be aware that AKS still doesn't support multiple node pools (it's on the short-term roadmap), so you'll need to run those workloads on a single pool of one VM type. Also, when thinking about multiple clusters, think about multi-tenancy requirements and the blast radius of a single cluster. I typically see users deploying multiple clusters even though there is some management overhead; good SCM and configuration management practices can help with that overhead.

-- Strebel - MSFT
Source: StackOverflow