I am trying to understand the benefits and drawbacks of the following architectures when it comes to deploying my application and database containers using kubernetes.
A little background: The application sits behind an Nginx proxy. All requests flow from the proxy to the web server. The web server is the only thing that has access to the (read-only) database.
Architecture 1:
Pod#1 - Database container only
Pod#2 - Application container only
Architecture 2:
Pod#1 - Database container & Application container
From my research so far, I have found comments recommending Architecture 1 for scaling reasons. https://linchpiner.github.io/k8s-multi-container-pods.html
Does anyone have insight into which of these approaches would be better suited for my situation?
Being able to scale the application and database independently would be the key reason for having them separated. Scaling under high load (or highly variable load) requires a robust architecture, and what counts as 'high load' will depend on your app. For example, if the database and application are in different pods then you could in theory run multiple replicas of the application (i.e. multiple Pods) and (if you wanted) just one replica of the database that all of the application instances point to. You could then have an nginx ingress controller routing to the application instances and load-balancing between them.
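As a rough sketch of what Architecture 1 could look like (all names, images and ports below are placeholders, not taken from your setup): two separate Deployments let you pick each tier's replica count independently, a Service gives the app replicas a stable address, and an Ingress lets the nginx ingress controller route and load-balance across them.

```yaml
# Architecture 1 sketch: app and database in separate Deployments.
# Names, images and ports are placeholders -- adjust for your environment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                     # scale the app tier independently of the db
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: registry.example.com/my-app:1.0
          ports:
            - containerPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-db
spec:
  replicas: 1                     # a single read-only database instance
  selector:
    matchLabels:
      app: my-db
  template:
    metadata:
      labels:
        app: my-db
    spec:
      containers:
        - name: db
          image: registry.example.com/my-db:1.0
          ports:
            - containerPort: 5432
---
apiVersion: v1
kind: Service
metadata:
  name: my-app                    # stable address for the app replicas
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  ingressClassName: nginx         # nginx ingress controller load-balances across the app pods
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```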
Running multiple replicas gives you the ability to scale up and down in response to load (see the HorizontalPodAutoscaler, for example, though you can also scale manually). It also provides a level of fault tolerance: one instance can become overwhelmed and unresponsive (or simply fail) without affecting the others, and the failing pod can be automatically restarted by Kubernetes.
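For example, a HorizontalPodAutoscaler targeting the app Deployment might look like the sketch below (the Deployment name and the CPU threshold are assumptions, and it needs a metrics source such as the metrics server installed in the cluster):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                  # the app Deployment from the earlier sketch
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70  # add replicas when average CPU exceeds 70%
```

Manual scaling is just `kubectl scale deployment my-app --replicas=5` if you'd rather not automate it.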
A potential snag to watch out for when running multiple replicas of your app, at least if it's an existing app that you're porting to Kubernetes, is that the application needs to be written in a stateless way to support this. Your db being read-only presumably means this isn't a problem at the data layer. Perhaps you could run multiple db replicas too and use a Service so that your app instances can talk to them. But you'd also need to think about statefulness in the app itself, e.g. is authentication token-based, and could different instances validate the token without requiring a new login?
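A Service in front of the database pods would look something like this sketch (placeholder names and labels again; this only works cleanly here because the database is read-only, so the replicas can't diverge):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-db                     # app instances connect to "my-db:5432" via cluster DNS
spec:
  selector:
    app: my-db                    # matches however many db replicas you choose to run
  ports:
    - port: 5432
      targetPort: 5432
```

Bumping `replicas` on the database Deployment then adds instances behind the same name without the app changing its connection string.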
It's not necessarily wrong to put the two containers in the same pod. You might still get some scaling benefits in your case: if your db is read-only then presumably the instances can't get out of sync. But then you can only scale them together, and likewise each pair would fail together.
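For completeness, Architecture 2 is just a pod template with both containers in it; scaling the Deployment's `replicas` then always scales (and fails) the pair together. Another sketch with placeholder names:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-with-db
spec:
  replicas: 2                     # each replica is an app container *and* a db container
  selector:
    matchLabels:
      app: app-with-db
  template:
    metadata:
      labels:
        app: app-with-db
    spec:
      containers:
        - name: app
          image: registry.example.com/my-app:1.0
          ports:
            - containerPort: 8080
        - name: db
          # shares the pod's network namespace with the app,
          # so the app reaches it on localhost:5432
          image: registry.example.com/my-db:1.0
          ports:
            - containerPort: 5432
```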
For the same reasons you wouldn't put a web server and a database on the same machine or VM, you generally shouldn't put them in the same pod.
Two key reasons are security and performance.
Your web tier may be public-facing, but your DB must not be. You should aim to reduce the attack surface as much as you can.
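With the database in its own pod, you can also enforce that only the app pods talk to it using a NetworkPolicy. A sketch (labels and port are placeholders, and it requires a CNI plugin that actually enforces NetworkPolicy):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-from-app-only
spec:
  podSelector:
    matchLabels:
      app: my-db                  # applies to the database pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: my-app         # only the app pods may connect
      ports:
        - protocol: TCP
          port: 5432
```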
Also, you can tune the performance of each tier independently, e.g. scale out the app tier based on metrics.
In the end, if these considerations don't matter for your use case, putting both of them in the same pod is easier to maintain.
HTH