Let's say I have two microservices that communicate with each other: MS1 and MS2.
For this example, MS1 has 150 requests to make to MS2, and I'm sending them with Promise.all.
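Roughly like this (just a sketch, not my exact code; in the simplified repro at the bottom I replace this with 150 simultaneous Postman requests instead):

    const axios = require('axios');

    // Sketch of the fan-out in MS1: fire all 150 requests at once and
    // wait for every one of them to finish.
    async function fanOut() {
        const requests = Array.from({ length: 150 }, () =>
            axios.get('http://localhost:8096/placeholder_services/placeholder2')
        );
        return Promise.all(requests);
    }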
For every request from MS1, MS2 makes an axios.get call to another server that's out of my control. That server's response time is extremely slow, sometimes up to 10 minutes per call.
So while waiting for that server, MS2 accepts the next request.
So with 150 requests, every time one of them reaches the point where we wait for the slow server, I'm guessing the API starts handling the next one.
MS2 also has a /health endpoint for Kubernetes. While it's handling all these requests, it stops responding (or responds very slowly) to /health, which causes Kubernetes to terminate and restart my pod.
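The /health handler itself isn't shown below, but it's trivial, something like this (a sketch):

    // Kubernetes probe endpoint: it does no work of its own, so if it
    // answers slowly the delay presumably comes from the process being busy.
    router.get('/health', (req, res) => {
        res.status(200).json({ status: 'ok' });
    });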
Is this because the excessive number of requests to MS2 blocks the event loop, or am I missing something else here?
Adding the code. MS1 API:
const express = require('express');
const router = express.Router();
const axios = require('axios');
const test = require('./test');

module.exports = router.get('/placeHolder', async (req, res, next) => {
    console.log("im working");
    // test();
    // console.log("test done");
    try {
        const x = await test();
        res.status(200);
        res.json({
            test: x
        });
    } catch (err) {
        res.status(500);
        res.json(err);
    }
});
MS1 test.js:
const axios = require('axios');

module.exports = async () => {
    try {
        const x = await axios.get("http://localhost:8096/placeholder_services/placeholder2");
        // Return only the response body; the full axios response object
        // contains circular references and can't be serialized by res.json
        return x.data;
    } catch (err) {
        console.log(err.message);
        return err;
    }
};
MS2 API:
const express = require('express');
const router = express.Router();
module.exports = router.get('/placeHolder2', async (req, res, next) => {
    console.log("im working");
    // Simulate the slow upstream server: answer after 60 seconds
    setTimeout(() => {
        console.log("im done");
        res.status(200);
        res.json({
            test: "ok"
        });
    }, 60000);
});
And let's say I send 150 requests through Postman simultaneously.
Thanks :)