Do I need an nginx container in a Kubernetes pod for a web app?

5/15/2019

This is an awfully basic question, but I just don't know the answer. I have an app which should have three containers -- front end, back end and database containers.

They serve on different ports and request data from each other's ports.

So I read that within a Pod there is a local network and the containers can communicate with each other. Does nginx come into this? My understanding is that it does not, as the Pod manages the comms between the containers, and that nginx is only required for serving outside requests and for load balancing across a cluster of identical containers, round-robin style.

If someone could help me out with understanding this I'd be very grateful.

-- David Boshton
kubernetes
nginx

4 Answers

5/15/2019

Nginx can serve your static web page (web app), but it will not connect to the DB for you. In a more advanced setup nginx acts as a proxy, for example as an Ingress controller in front of your web app/front end. The front end and back end would be different pods, communicating through Services of type ClusterIP, while the front-end Service would be a NodePort Service.
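
For concreteness, here is a rough sketch of that layout; the names, labels, ports and image tag are just illustrative:

```yaml
# Hypothetical front-end pod running nginx to serve the static web app;
# names, labels and the image tag are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: frontend
  labels:
    app: frontend
spec:
  containers:
    - name: nginx
      image: nginx:1.17
      ports:
        - containerPort: 80
---
# NodePort Service that exposes the front end outside the cluster
apiVersion: v1
kind: Service
metadata:
  name: frontend-svc
spec:
  type: NodePort
  selector:
    app: frontend
  ports:
    - port: 80
      targetPort: 80
```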

-- Prashant Patel
Source: StackOverflow

5/15/2019
  1. Deploy the FE, BE, and DB in different pods (different Deployments) so you can scale and manage them separately; better even in different namespaces.
  2. Create a k8s Service of type ClusterIP for the BE and DB. Use the k8s DNS resolver to access them at service-name.namespace.svc.cluster.local.
  3. Create a k8s Service of type LoadBalancer or NodePort for the FE to expose it outside of k8s, and access it via the load balancer address or node-ip:node-port (example manifests below).
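
A minimal sketch of what the Services in points 2 and 3 might look like; the names, namespace and ports are illustrative:

```yaml
# ClusterIP Service for the back end; resolvable inside the cluster as
# backend-svc.backend.svc.cluster.local (names and namespace are illustrative).
apiVersion: v1
kind: Service
metadata:
  name: backend-svc
  namespace: backend
spec:
  type: ClusterIP
  selector:
    app: backend          # must match the backend pods' labels
  ports:
    - port: 8080          # port the Service exposes
      targetPort: 8080    # containerPort of the backend pods
---
# LoadBalancer Service exposing the front end outside the cluster
apiVersion: v1
kind: Service
metadata:
  name: frontend-svc
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
    - port: 80
      targetPort: 80
```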
-- Max Lobur
Source: StackOverflow

5/16/2019

In addition to @Max Lobur's answer, it is also worth mentioning the Ingress Kubernetes resource, which is a way to expose frontend application services outside the cluster and manage access to them. Ingress is a logical resource that describes a set of rules for traffic management, which are carried out by an Ingress controller. An Ingress controller can therefore play the role of an API gateway, delivering L7 facilities such as load balancing, SSL termination and HTTP/HTTPS traffic routing for the underlying application services. You might want to look at the most popular solutions: NGINX Ingress Controller, Traefik, Istio, etc.
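
A rough sketch of such an Ingress rule, routing HTTP traffic for one hostname to a front-end Service; the hostname, resource names and ingress class are illustrative, and the apiVersion depends on your cluster version (older clusters use extensions/v1beta1):

```yaml
# Illustrative Ingress rule routing HTTP traffic for one hostname to a
# front-end Service; host, names and class depend on your cluster/controller.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend-ingress
spec:
  ingressClassName: nginx        # e.g. the NGINX Ingress Controller
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-svc
                port:
                  number: 80
```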

-- mk_sta
Source: StackOverflow

5/15/2019

I would advise against putting those 3 containers inside the same pod. If you did, you would lose out on many advantages, like being able to scale services independently or being able to update one component without disrupting the whole stack. Pods should only be used to group tightly coupled containers. As the Kubernetes documentation puts it:

A Pod represents a unit of deployment: a single instance of an application in Kubernetes, which might consist of either a single container or a small number of containers that are tightly coupled and that share resources.

The way you should approach this is to make a Pod for each component and put those components behind Services so the Pods can interact with one another. Once you are comfortable doing that, you should also look at Deployments and StatefulSets, which allow you to scale your applications and provide recovery in case an application fails.
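
As a rough sketch of that approach, a Deployment plus a ClusterIP Service for one component might look like this; the names, image and ports are purely illustrative:

```yaml
# Hypothetical Deployment for one component (the back end) plus a matching
# ClusterIP Service so the other components can reach it by name.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 2                        # scale this component on its own
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: example/backend:1.0 # illustrative image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  ports:
    - port: 8080
      targetPort: 8080
```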

-- Alassane Ndiaye
Source: StackOverflow