Kubernetes - How to plan for autoscaling using resource management?

6/12/2017

Consider the following cluster running on Google Container Engine:

tier:         g1-small
cpu cores:    2
memory:       1.7GB per cpu core (3.4GB total in this case)
autoscaling:  enabled, min=2, max=5

On this cluster, I have the following Deployments running via Kubernetes:

  1. Load Balancer using NGINX
  2. Web App using Node.js (communicating with WordPress via REST calls)
    • example.com
  3. CMS using WordPress on Apache
    • wp.example.com

For clarity, every request goes through the NGINX Load Balancer first, then, depending on the subdomain, to either the CMS or the Web App.

I'd like to have more control over how many resources each Deployment consumes, so that resources are used more efficiently, by applying Kubernetes Limit Ranges to my Pods'/containers' resources.

Limits can be set on CPU and memory. These are well explained in the docs. So far, so good.
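For reference, a minimal LimitRange sketch that sets namespace-wide defaults for containers that don't declare their own requests/limits (the name and values here are illustrative, not recommendations):

```yaml
# Hypothetical LimitRange: applies default requests/limits to every
# container in the namespace that doesn't declare its own.
apiVersion: v1
kind: LimitRange
metadata:
  name: resource-defaults   # illustrative name
spec:
  limits:
  - type: Container
    defaultRequest:         # applied when a container sets no request
      cpu: 100m
      memory: 128Mi
    default:                # applied when a container sets no limit
      cpu: 250m
      memory: 256Mi
```

Apply it to a namespace with `kubectl apply -f limitrange.yaml --namespace=<your-namespace>`.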

The problem I'm having is to figure out what limits to apply per Deployment.

Example

The WordPress Deployment contains two containers in the deployment.yaml file. One for the WordPress image itself, one for the Cloud SQL Proxy Container that is needed for WordPress to connect to my Cloud SQL Database. How would I know what each container needs with respect to CPU/Memory resources?
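For concreteness, per-container requests/limits in that two-container case would be declared in the Deployment's Pod template, roughly like this (image tags and all numbers are starting-point guesses, not measurements):

```yaml
# Sketch of per-container resources inside the Deployment's Pod template;
# the values are illustrative assumptions, not tuned figures.
spec:
  template:
    spec:
      containers:
      - name: wordpress
        image: wordpress:4.8-apache
        resources:
          requests:
            cpu: 200m
            memory: 256Mi
          limits:
            cpu: 500m
            memory: 512Mi
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.09
        resources:
          requests:          # the proxy is typically much lighter
            cpu: 50m
            memory: 64Mi
          limits:
            cpu: 100m
            memory: 128Mi
```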

Furthermore, considering that all HTTP/HTTPS traffic hits my NGINX Load Balancer first, an educated guess would be to allocate relatively more resources to the NGINX Deployment than to my CMS and Web App Deployments, right?

So is there a way to better estimate how many resources each Deployment needs?

Any help is greatly appreciated!

-- Nicky
autoscaling
google-compute-engine
google-kubernetes-engine
kubernetes
nginx

1 Answer

7/17/2017

Kubernetes' default for Pods is a 100m CPU request with no CPU limit, and no memory request/limit. If you don't set limits, Pods/containers will consume as much as they need, which is pretty convenient since you usually don't want to specify limits one by one.

NGINX as a load balancer is pretty lightweight, so it's hard to say up front which Deployment needs more resources. I would follow the defaults at the beginning, then use kubectl top pod to check actual CPU/memory usage as a reference for tuning.
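A quick illustration of that workflow (requires cluster metrics to be available, e.g. Heapster on clusters of this era; the namespace name is an assumption):

```shell
# Show current CPU/memory usage per Pod in a namespace
kubectl top pod --namespace=default

# Break usage down per container within each Pod,
# e.g. wordpress vs. cloudsql-proxy
kubectl top pod --namespace=default --containers
```

Comparing these observed numbers against the requests/limits you set tells you which Deployment is actually under pressure.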

-- Ken Chen
Source: StackOverflow