Best practice to deploy microservices into Kubernetes

12/12/2019

Considering the following scenario:

  • I have one main heavy service and many other small microservices.
  • The small microservices are consumed only by the heavy service.
  • Only the main heavy service is exposed to the public internet.

What is the best practice to deploy those services into Kubernetes?

All together in the same Kubernetes cluster:

  • The main heavy service as LoadBalancer
  • The other small microservices as ClusterIP (to protect them from the public internet).

Is that a good approach?

-- kilobaik
architecture
kubernetes
microservices

3 Answers

12/12/2019

There are some misunderstandings here.

The term "microservice" is not about size; it is more of an organizational concept. Ten years ago, whole systems were deployed as monoliths, but it is now recommended that teams be no bigger than 5-8 people, and that those teams work at their own pace with their own deployment cycles. So the monolith has to be broken into smaller services. The services in such an architectural pattern are called microservices, regardless of whether they are small or big.

All your services should be deployed as a Deployment on Kubernetes, and the services should be stateless. Even the "main heavy service" should be stateless and possibly scaled to multiple replicas.
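As a minimal sketch, a stateless service deployed as a Deployment with multiple replicas might look like this (the name `main-service` and the container image are placeholders, not from the question):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: main-service              # hypothetical name for the "main heavy service"
spec:
  replicas: 3                     # multiple stateless replicas
  selector:
    matchLabels:
      app: main-service
  template:
    metadata:
      labels:
        app: main-service
    spec:
      containers:
      - name: main-service
        image: example/main-service:1.0   # placeholder image
        ports:
        - containerPort: 8080
```

Because the pods are stateless, any replica can serve any request, and `replicas` can be raised or lowered (or autoscaled) without affecting correctness.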

You are correct in that only services that need to be exposed to the Internet should be exposed to the Internet.

Whether your "heavy service" should be exposed with a Service of type LoadBalancer or NodePort actually depends on which Ingress controller you are using. For example, if you are using Google Kubernetes Engine, you should expose it as a NodePort. And yes, the other applications should have a Service of type ClusterIP.

It is worth noting that all Kubernetes Service objects provide load balancing across the replicas. The Service type (LoadBalancer, NodePort or ClusterIP) is more about how the Service is exposed.
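To illustrate the difference in exposure, the two Service types could be sketched like this (service and app names are hypothetical):

```yaml
# Publicly exposed Service for the heavy service
apiVersion: v1
kind: Service
metadata:
  name: main-service
spec:
  type: LoadBalancer        # or NodePort, depending on your Ingress controller
  selector:
    app: main-service
  ports:
  - port: 80
    targetPort: 8080
---
# Cluster-internal Service for a small microservice
apiVersion: v1
kind: Service
metadata:
  name: small-service
spec:
  type: ClusterIP           # reachable only from inside the cluster
  selector:
    app: small-service
  ports:
  - port: 80
    targetPort: 8080
```

Both Services load-balance across their matching pods; only the `type` changes how (and whether) the Service is reachable from outside the cluster.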

-- Jonas
Source: StackOverflow

12/12/2019

Yes, you are right. A good approach is to use a load balancer to manage low and high traffic from the public. When you define min and max pod counts, Kubernetes will automatically increase the number of pods under heavy traffic and decrease it under low traffic. And for the services you don't want to expose to the public, make them ClusterIP.

-- M Hamza Razzaq
Source: StackOverflow

12/14/2019

For the other services, you can use the Kubernetes Horizontal Pod Autoscaler. In that case, the number of pods will scale based on traffic. It works particularly well for sudden spikes, and you can ensure proper usage of resources.
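A minimal HorizontalPodAutoscaler sketch, assuming a Deployment named `small-service` and CPU-based scaling (both are assumptions for illustration; the `autoscaling/v2` API is available on current clusters):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: small-service-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: small-service       # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out when average CPU exceeds 70%
```

The autoscaler keeps the replica count between `minReplicas` and `maxReplicas`, adding pods during spikes and removing them when load drops.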

To easily integrate microservices and manage traffic flow among them, you can use Istio.

-- tarekgreens
Source: StackOverflow