Mule performance tuning with limited resources

11/21/2016

We are running our Mule 3.8.0 EE applications in a containerised environment, with one application per container. At the container level we restrict CPU/memory because the physical host's resource pool is limited; at peak, the API I'm referring to can use at most 1 CPU core and 512MB of memory (managed by Kubernetes).
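For reference, a cap like this is normally expressed in the pod spec. A minimal sketch, assuming hypothetical names and an assumed request level (only the limits are stated above):

```yaml
# Hypothetical pod spec fragment matching the limits described above
apiVersion: v1
kind: Pod
metadata:
  name: mule-api                  # hypothetical name
spec:
  containers:
  - name: mule-app
    image: registry/mule-api:1.0  # hypothetical image
    resources:
      requests:
        cpu: "500m"               # assumed; not stated in the question
        memory: "256Mi"
      limits:
        cpu: "1"                  # at peak: up to 1 core
        memory: "512Mi"           # and 512MB memory
```

Worth noting: when a container exceeds its memory limit, Kubernetes OOM-kills it and the pod restarts, which from the outside looks exactly like the JVM exiting and the app restarting.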

The Mule flow is rather simple: an HTTP endpoint with APIkit, then a backend Redis call through the Redis connector (a get-by-key operation), and a DataWeave transform that converts the data to JSON and returns it.
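In Mule 3 XML the flow described above would look roughly like this. This is a sketch, not the actual config: the config refs, the APIkit resource path, and the key expression are assumptions, and the Redis connector operation name should be checked against the connector docs:

```xml
<http:listener-config name="httpConfig" host="0.0.0.0" port="8081"/>

<flow name="api-main">
    <http:listener config-ref="httpConfig" path="/api/*"/>
    <apikit:router config-ref="apikitConfig"/>
</flow>

<!-- hypothetical APIkit-generated flow for one resource -->
<flow name="get:/resource:apikitConfig">
    <!-- get-by-key against the backend Redis -->
    <redis:get config-ref="redisConfig"
               key="#[message.inboundProperties['http.uri.params'].id]"/>
    <!-- transform the value to JSON (DataWeave 1.0) -->
    <dw:transform-message>
        <dw:set-payload><![CDATA[%dw 1.0
%output application/json
---
payload]]></dw:set-payload>
    </dw:transform-message>
</flow>
```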

The problem we detected is that under load (100+ concurrent requests), with all performance-related configuration at its defaults (processing strategy, threading, etc.), the application always triggers a JVM exit and then restarts itself inside the container.

The question is: given these limited resources, how should we tune it so that it at least survives and processes the load, even if slowly? I'm expecting suggestions such as: should we increase or decrease the thread pools? Should we use the synchronous processing strategy? Etc.
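To make the two options concrete, the knobs I have in mind would look something like this in Mule 3 XML. This is a sketch only; the pool sizes are guesses to be load-tested, and the exact attributes should be verified against the Mule 3.8 documentation:

```xml
<!-- Option 1: cap the HTTP listener worker pool so concurrency cannot
     outgrow 1 core / 512MB (maxThreadsActive value is a guess) -->
<http:listener-config name="httpConfig" host="0.0.0.0" port="8081">
    <http:worker-threading-profile maxThreadsActive="16" maxThreadsIdle="4"/>
</http:listener-config>

<!-- Option 2: run the flow synchronously so each request stays on its
     receiver thread, avoiding extra queued-asynchronous pools and their
     in-memory queues -->
<flow name="api-main" processingStrategy="synchronous">
    ...
</flow>
```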

Update: We are using the Docker json-file logging driver, so the Mule app writes all its logs to the container's console, where Docker picks them up. If we print one log line per request, I noticed that under load this is also a concern, because of the glibc/Tanuki wrapper issue (which is considered fixed after Mule 3.8).

Thanks in advance

-- James Jiang
docker
kubernetes
mule

1 Answer

11/24/2016
-- goner
Source: StackOverflow