How to handle resource limits for Apache in Kubernetes

4/16/2018

I'm trying to deploy a scalable web application on Google Cloud. I have a Kubernetes Deployment which creates multiple replicas of Apache+PHP pods, each with CPU/memory requests and limits set.

Let's say that the memory limit per replica is 2 GB. How do I properly configure Apache to respect this limit?
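
For reference, the Deployment looks roughly like this (names, image, and numbers are placeholders); the question is really how to make Apache behave inside this 2 GB box:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: apache-php              # hypothetical name
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: apache-php
      template:
        metadata:
          labels:
            app: apache-php
        spec:
          containers:
          - name: apache-php
            image: php:7.2-apache   # example image
            resources:
              requests:
                cpu: "500m"
                memory: 1Gi
              limits:
                cpu: "1"
                memory: 2Gi         # the 2 GB per-replica limit mentioned above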

I can lower the maximum process count and/or the maximum memory per process to prevent memory overflow (so the replicas won't be killed because of OOM). But this creates a new problem: these settings also cap the maximum number of requests a replica can handle. Under a DDoS attack (or simply more traffic), the bottleneck could be the maximum process limit rather than the memory/CPU limit. I think this could happen fairly often, since these limits are set for the worst-case scenario, not based on average traffic.
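
For example, with mpm_prefork and roughly 64 MB per PHP process, I would have to cap Apache at about 30 workers to stay under 2 GB. The values below are illustrative (the real per-process footprint depends on the application) and could be mounted into the container via a ConfigMap:

    # ~30 workers * ~64 MB per process ≈ 1.9 GB, just under the 2 GB limit
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: apache-mpm-config       # hypothetical name
    data:
      mpm_prefork.conf: |
        <IfModule mpm_prefork_module>
            StartServers             5
            MinSpareServers          5
            MaxSpareServers         10
            MaxRequestWorkers       30
            MaxConnectionsPerChild 500
        </IfModule>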

I want to configure the autoscaler so that it creates additional replicas in such an event, not only when CPU/memory usage is near the limit.
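
What I have in mind is something like a HorizontalPodAutoscaler that scales on traffic as well as CPU. The requests-per-second metric below is hypothetical; it would need a custom metrics adapter (e.g. the Prometheus adapter) to be available:

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: apache-php              # hypothetical name
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: apache-php
      minReplicas: 3
      maxReplicas: 20
      metrics:
      # standard resource-based scaling
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 70
      # traffic-based scaling via a custom per-pod metric (requires a metrics adapter)
      - type: Pods
        pods:
          metric:
            name: http_requests_per_second   # hypothetical metric name
          target:
            type: AverageValue
            averageValue: "100"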

How do I properly solve this problem? Thanks for your help!

-- Jan Imrich
apache
google-cloud-platform
kubernetes

1 Answer

4/16/2018

I would recommend doing the following instead of trying to configure Apache to limit itself internally:

  • Enforce resource limits on pods, i.e. let them OOM (but see the NOTE* below).
  • Define an autoscaling metric for your deployment based on your load.
  • Set up a namespace-wide ResourceQuota. This enforces a limit on the total resources that pods in that namespace can use (a minimal example follows this list).
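
A minimal sketch of the quota piece, assuming a namespace called web (the name and the totals are placeholders); the sum of all pod requests/limits in the namespace may not exceed these values:

    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: web-quota               # hypothetical name
      namespace: web                # hypothetical namespace
    spec:
      hard:
        requests.cpu: "8"
        requests.memory: 16Gi
        limits.cpu: "16"
        limits.memory: 32Gi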

This way you can let your Apache+PHP pods handle as many requests as possible until they OOM, at which point they respawn and rejoin the pool, which is fine* (because hopefully they're stateless), and at no point does your overall resource utilization exceed the resource limits (quotas) enforced on the namespace.


* NOTE: This is only true if you're not doing fancy stuff like WebSockets or stream-based HTTP, in which case an OOMing Apache instance takes down other clients that hold an open socket to that instance. You should always be able to enforce limits on Apache in terms of the number of threads/processes it runs anyway, but it's best not to unless you have a solid need for it. With this kind of setup, no matter what you do, you won't be able to evade large-scale DDoS attacks: you're either going to have broken sockets (in the case of OOM) or request timeouts (not enough threads to handle requests). You'd need far more sophisticated networking/filtering gear to prevent "good" traffic from taking a hit.

-- ffledgling
Source: StackOverflow