I have two APIs that are publicly exposed, say xyz.com/apiA and xyz.com/apiB.
Both are Node services, dockerized and running as individual pods in the same namespace of a Kubernetes cluster.
Now, apiA calls apiB internally as part of its logic. apiA makes a POST call to apiB with a fairly large payload in the request body. This POST request times out whenever the body payload exceeds about 30 KB.
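For context, the failing call looked roughly like this (a minimal sketch; axios and the function name are illustrative, not the actual code):

```js
// apiA -- sketch of the internal call as originally written (names illustrative)
const axios = require('axios');

async function callApiB(payload) {
  // apiA targets the PUBLIC domain, so the request leaves the cluster
  // and re-enters through the load balancer and nginx
  const response = await axios.post('https://xyz.com/apiB', payload, {
    headers: { 'Content-Type': 'application/json' },
  });
  return response.data;
}

module.exports = { callApiB };
```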
We have checked the server logs, and that POST request never appears there.
The error message shows a connection timeout to 20.xx.xx.xx, which is the public IP address of xyz.com.
I'm new to Kubernetes and would appreciate your help.
So far I have tried this, but it didn't help.
Please let me know if more information is needed.
Edit: the kubectl client and server versions are both 1.22.0.
To update the kind folks who took the time to understand the problem and suggest solutions: the issue was bad routing. Internal APIs (apiB in the example above) should not be called via the full public domain xyz.com/apiB; instead they can be referenced directly through their Kubernetes Service name, as in

http://service-name.namespace.svc.cluster.local/apiB

This ensures internal calls are resolved by the cluster's DNS and never have to go through the load balancer and nginx, which improves response time dramatically.
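In code, the fix amounts to swapping the public URL for the cluster-internal address (again a sketch; apib-service and my-namespace are placeholders for the actual Service and namespace names):

```js
// apiA -- same call, routed over the cluster network (names are placeholders)
const axios = require('axios');

// Resolves via the cluster's DNS instead of the public internet
const APIB_URL = 'http://apib-service.my-namespace.svc.cluster.local/apiB';

async function callApiB(payload) {
  // The request now stays on the pod network, skipping the
  // load balancer and nginx entirely
  const response = await axios.post(APIB_URL, payload, {
    headers: { 'Content-Type': 'application/json' },
  });
  return response.data;
}

module.exports = { callApiB };
```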
Every call made to apiA was creating a domino effect, spawning hundreds of calls to apiB and overloading the server, which caused it to fail only after a few thousand requests.
Lesson learned: route all internal calls through the cluster's internal network.