How to scale a web application in Kubernetes?

11/11/2021

Let's consider a Python web application deployed under uWSGI behind Nginx.

HTTP client ↔ Nginx ↔ socket/HTTP ↔ uWSGI (application server) ↔ webapp

Here Nginx acts as a reverse proxy / load balancer.

How do you scale this kind of application in Kubernetes? Several options come to mind:

  1. Deploy nginx and uWSGI in a single pod. Simple approach (a rough sketch of this follows below).
  2. Deploy nginx + uWSGI in a single container? This violates the "one process per container" principle.
  3. Deploy only uWSGI (speaking HTTP directly) and omit nginx entirely.
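
For illustration only, option 1 could look roughly like the sketch below: nginx and uWSGI run as two containers in one pod and talk over a Unix socket in a shared emptyDir volume. The image names, ports and socket path are placeholders I've assumed, not details from the question.

```yaml
# Hypothetical sketch of option 1: nginx + uWSGI as two containers in the same pod.
# Image names, ports and the socket path are illustrative placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      volumes:
        - name: socket-dir            # shared directory for the uWSGI Unix socket
          emptyDir: {}
      containers:
        - name: uwsgi
          image: example/webapp-uwsgi:latest   # placeholder image running uWSGI + the Python app
          volumeMounts:
            - name: socket-dir
              mountPath: /run/uwsgi
        - name: nginx
          image: nginx:1.21
          ports:
            - containerPort: 80
          volumeMounts:
            - name: socket-dir        # nginx proxies requests to the uWSGI socket in this directory
              mountPath: /run/uwsgi
```

Because both containers live in the same pod, scaling the Deployment scales nginx and uWSGI together, which is what makes this option simple.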

Or is there another solution involving nginx ingress / load balancer services?

-- guesswho
kubernetes
nginx

1 Answer

11/11/2021

It depends.

I see two scenarios:

  1. Ingress is used

    In this case there's no need to run an nginx server inside the pod; instead, ingress-nginx can balance the traffic across the kubernetes cluster. You can find a good example in this comment on a GitHub issue (a rough manifest sketch also follows after this list).

  2. No ingress is used.

    In this case I'd go with option 1 - deploy nginx and uWSGI in a single pod (the simple approach). This way you can easily scale your application in/out (see the scaling sketch further below) without adding any complicated or unnecessary dependencies.
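
As a rough illustration of the first scenario, the sketch below exposes uWSGI pods (assumed to listen on plain HTTP, e.g. started with --http-socket :8000) through a Service and an Ingress handled by an ingress-nginx controller. The names, host and ports are assumptions, not anything from the answer.

```yaml
# Hypothetical sketch of scenario 1: ingress-nginx in front of uWSGI pods, no nginx inside the pod.
# Resource names, host and ports are illustrative placeholders.
apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  selector:
    app: webapp          # matches the uWSGI pods
  ports:
    - port: 80
      targetPort: 8000   # assumed uWSGI HTTP port
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp
spec:
  ingressClassName: nginx          # handled by the ingress-nginx controller
  rules:
    - host: webapp.example.com     # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: webapp
                port:
                  number: 80
```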

In case you're not familiar with what an Ingress is, see the Kubernetes documentation on Ingress.
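
Whichever layout you pick, scaling in/out then comes down to changing the replica count of the Deployment, either manually or with a HorizontalPodAutoscaler. A minimal sketch, assuming the Deployment is called webapp and CPU-based autoscaling is wanted:

```yaml
# Hypothetical autoscaling example; names and targets are placeholders.
# Manual equivalent: kubectl scale deployment webapp --replicas=5
# (use autoscaling/v2beta2 instead of autoscaling/v2 on older clusters)
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: webapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: webapp
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```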

-- moonkotte
Source: StackOverflow