Docker Compose: deploying different services from docker-compose.yml to different set of hosts

7/15/2016

Let's say I have a docker-compose.yml file with 3 different services (s1, s2, s3). If I deploy them on, say, an AWS ECS cluster (just for example) with one host, all three containers will go to that host. If I scale the cluster to 2 hosts, the second host will also get all three containers.

Ideally, I'd want to have different clusters for different services so that they can be scaled independently. I wouldn't want my database container on the same cluster as my backend container, as the two have different scaling needs.
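For concreteness, imagine a compose file along these lines (the service names s1–s3 match the question; the images are hypothetical placeholders):

```yaml
version: "2"
services:
  s1:                              # backend API — may need many replicas
    image: example/backend:latest
    ports:
      - "8080:8080"
  s2:                              # background worker — scales with queue depth
    image: example/worker:latest
  s3:                              # database — typically a single stateful instance
    image: postgres:9.5
```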

How can I achieve this kind of behaviour with Docker Compose?

Kubernetes has the concept of pods, which provides this kind of abstraction. However, since that's not part of Docker, I want to know: *how would one develop a multi-service application in Docker in which each service (as defined in docker-compose.yml) can be scaled independently?*

-- Jatin
amazon-ecs
docker
docker-compose
kubernetes

2 Answers

7/15/2016

For ECS you would need to either create multiple clusters (i.e. one cluster for each piece of the infrastructure, if you're sticking with deploying via compose), or just create multiple tasks. Each task should be a layer in your stack (e.g. api, web, etc.). Then you can scale the layers independently.
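A sketch of what that split might look like, with each layer registered as its own ECS task definition (the family name and image below are hypothetical):

```json
{
  "family": "web",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "example/web:latest",
      "memory": 512,
      "portMappings": [
        { "containerPort": 80, "hostPort": 80 }
      ]
    }
  ]
}
```

You'd register one such task definition per layer (e.g. `aws ecs register-task-definition --cli-input-json file://web-task.json`), then create an ECS service for each one and adjust each service's desired count independently.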

The big difference you'll find between ECS and K8s is that since ECS uses host port mappings, you can't have two different tasks that expose the same port running on the same host.

Check out this AWS article as well: http://docs.aws.amazon.com/AmazonECS/latest/developerguide/application_architecture.html

-- Steve Sloka
Source: StackOverflow

7/15/2016

Let's get the terminology straight here:

docker-compose deploys docker containers on a single host.

docker-swarm is the tool to use to deploy containers on multiple hosts.

A cluster in general, is a set of hosts (or nodes), either physical machines or VMs, that work together.

A Pod is not a cluster: it is a set of containers that are guaranteed to run on a single node, grouped together and communicating via localhost.

In Kubernetes, a deployment will schedule containers on all available nodes based on replication policies, node resources, and affinity, so you don't define where a container goes: Kubernetes manages that for you.
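As a rough illustration, a minimal Deployment manifest (assuming a hypothetical `backend` image; the API version shown is the one current as of 2016) looks like this — you declare *how many* replicas you want, and Kubernetes decides *where* they run:

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 3          # desired number of instances; the scheduler places them
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: example/backend:latest
          ports:
            - containerPort: 8080
```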

You then scale in 2 ways:

  • scale the number of instances of a container, by simply increasing its replica count (or use auto-scaling with a defined policy)
  • scale the cluster itself, by adding new nodes (physical or VMs), thereby adding resources to the cluster.
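Both forms of application scaling above map to standard kubectl commands (the deployment name `backend` here is hypothetical):

```shell
# Manually set the replica count of a deployment
kubectl scale deployment backend --replicas=5

# Or let Kubernetes auto-scale between bounds based on CPU utilization
kubectl autoscale deployment backend --min=2 --max=10 --cpu-percent=80
```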
-- MrE
Source: StackOverflow