How to achieve multiple isolated instances of a microservice app in Kubernetes?

2/19/2020

We developed an application which consists of a few Go/Java services, a MongoDB and a reverse proxy which forwards REST calls to the specific service. Each service runs in its own Docker container. The whole app is deployable with a single docker-compose file.

We successfully managed to deploy the app in a kubernetes cluster.

Now the "tricky" part: We want to deploy one isolated instance of the app for each customer. (remember one instance consists of approximately 10 containers)

In the past we reached this goal by deploying multiple instances of the docker-compose file.

What is the recommended way to achieve this in Kubernetes?

Thank you very much.

-- Jonas Fleck
azure-kubernetes
docker
docker-compose
kubernetes
kubernetes-ingress

2 Answers

2/20/2020

In Kubernetes, you can package all your resources into a Helm chart (https://helm.sh/docs/topics/charts/) so that you can deploy multiple instances of the app and manage each instance's lifecycle independently. You can also pass parameters to each of the instances if required.
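As a rough sketch, the per-instance parameters could live in one values file per customer (the file name and keys below are invented for illustration):

# customer1-values.yaml -- hypothetical per-instance parameters
customerName: customer1
replicaCount: 2
mongodb:
  storageSize: 10Gi

helm install then merges these values into the chart's templates, so the same chart produces a differently configured release for each customer.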

Another method is to deploy your application instances using Kubernetes operators (https://kubernetes.io/docs/concepts/extend-kubernetes/operator/). This also helps in managing your application's components.
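With the operator pattern, each customer instance would typically be represented by one custom resource that the operator reconciles into the underlying Deployments, Services, etc. A hypothetical sketch (the AppInstance kind and all of its fields are invented):

apiVersion: example.com/v1
kind: AppInstance
metadata:
  name: customer1
spec:
  customer: customer1
  version: "1.4.2"
  mongodbStorage: 10Gi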

-- anmol agrawal
Source: StackOverflow

2/20/2020

Applications can be separated via simple naming and labels, or via namespaces. Separation can go even further by restricting the nodes an instance may run on, or even by running separate clusters.

Network policies can be applied on top of a deployment to improve network isolation. This would be needed to emulate the docker-compose "network bridge per instance" setup.

"Isolated" can mean a lot of things though as there are various layers where the term can be applied in various ways.

Naming

Many instances of a deployment can run intermingled on a cluster as long as the name of each Kubernetes resource doesn't clash. This includes the applied labels (and sometimes annotations) that are used to select or report on apps, so you can uniquely identify a customer's resources.

kubectl create -f deployment-customer1.yaml
kubectl create -f deployment-customer2.yaml
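A minimal sketch of what deployment-customer1.yaml could contain for one of the ~10 services (the billing service name, image and labels are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: billing-customer1        # must not clash with other customers' resources
  labels:
    app: billing
    customer: customer1          # lets you select this customer's resources
spec:
  replicas: 1
  selector:
    matchLabels:
      app: billing
      customer: customer1
  template:
    metadata:
      labels:
        app: billing
        customer: customer1
    spec:
      containers:
      - name: billing
        image: registry.example.com/billing:1.0

With a label scheme like that, kubectl get all -l customer=customer1 lists everything belonging to that customer.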

This type of naming is easier to manage with a deployment mechanism like helm. Helm "charts" describe a release and are built around the concept of a variable "release name", so the YAML templates can rely on variables. A typical helm release would be:

helm install -f customer1-values.yaml customer1-app me/my-app-chart
helm install -f customer2-values.yaml customer2-app me/my-app-chart
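Inside such a chart, templates derive resource names from the release name, so every install produces uniquely named objects. A trimmed sketch of one template (the path, service name and port are illustrative):

# templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-billing
  labels:
    customer: {{ .Release.Name }}
spec:
  selector:
    app: billing
    customer: {{ .Release.Name }}
  ports:
  - port: 8080
    targetPort: 8080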

Namespaces

A namespace is a logical grouping of resources in a cluster. By itself, a namespace only provides naming isolation, but many other Kubernetes resources can then be scoped or applied per namespace:

A namespace per customer/instance may be useful, for example if you had a "premium" customer that gets a bigger resource quota. It may also make labelling and selecting instances easier, which Network Policy makes use of.
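For example, a bigger quota for the premium customer could be expressed per namespace like this (all the numbers are invented):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: instance-quota
  namespace: customer1
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
    pods: "30"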

Environments can be a good fit for namespaces too, so a similar deployment can go to the dev/test/prod namespaces. If you are giving users access to manage or query Kubernetes resources themselves, namespaces make that management much easier.

Managing namespaced resources might look like:

kubectl create ns customer1
kubectl create -f deployment.yaml -n customer1
kubectl create ns customer2
kubectl create -f deployment.yaml -n customer2

Again, helm is equally applicable to namespaced deployments.
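For example, with Helm 3 the target namespace is set per release (--create-namespace, available since Helm 3.2, creates the namespace if it doesn't exist):

helm install customer1-app me/my-app-chart -f customer1-values.yaml -n customer1 --create-namespace
helm install customer2-app me/my-app-chart -f customer2-values.yaml -n customer2 --create-namespace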

DNS is probably worth a mention too: containers look up host names in their own namespace by default. In the namespace customer1, looking up the host name service-name resolves to service-name.customer1.svc.cluster.local

Similarly, in the namespace customer2, a lookup for service-name resolves to service-name.customer2.svc.cluster.local
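In practice this means the same manifest can be deployed unchanged into every customer namespace. Assuming a Service named mongodb exists in each namespace (a hypothetical name), a container spec fragment like this resolves correctly everywhere:

env:
- name: MONGO_URL                  # hypothetical variable the app reads
  value: mongodb://mongodb:27017   # short name resolves to mongodb.<namespace>.svc.cluster.local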

Nodes

Customers could be pinned to particular nodes (VMs or physical machines) to provide security and/or resource isolation from other customers.
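A minimal sketch using a nodeSelector, assuming the nodes have been labelled beforehand (the customer label key is illustrative):

# Pod spec fragment; first label the nodes, e.g.
#   kubectl label node node-7 customer=customer1
spec:
  nodeSelector:
    customer: customer1

Note that a nodeSelector only attracts this customer's Pods to those nodes; keeping other customers' Pods off them additionally requires taints and tolerations.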

Clusters

Cluster separation can provide full security, resource and network isolation without relying on kubernetes to manage it.

Large apps can often end up using a complete cluster per "grouping". This adds a large management overhead for each cluster but allows close to complete independence between instances. Security can be a big driver for this, as you get a layer of isolation between clusters outside of the Kubernetes masters.

Network Policy

A network policy lets you restrict network access between Pods/Services via label selectors. Kubernetes will actively manage the firewall rules wherever the Pods are scheduled in the cluster. This would be required to provide network isolation similar to docker-compose creating a network per instance.

The cluster will need to use a network plugin (CNI) that supports network policies, like Calico.
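Assuming one namespace per customer, a policy like the following (a common "deny ingress from other namespaces" sketch) allows traffic only from Pods in the same namespace, which roughly mirrors docker-compose's per-instance bridge network:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: isolate-namespace
  namespace: customer1
spec:
  podSelector: {}        # applies to every Pod in the namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}    # only Pods from this same namespace may connect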

-- Matt
Source: StackOverflow