I have a Kubernetes cluster deployed locally with kubeadm/Vagrant, with one master and two workers at the following IPs:

- master: 192.168.250.10
- worker1: 192.168.250.11
- worker2: 192.168.250.12
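For context, this is how the nodes show up with kubectl (a sketch: node names and the omitted columns are illustrative, but the INTERNAL-IP values match the addresses above):

```console
$ kubectl get nodes -o wide
NAME      STATUS   ROLES           INTERNAL-IP      ...
master    Ready    control-plane   192.168.250.10   ...
worker1   Ready    <none>          192.168.250.11   ...
worker2   Ready    <none>          192.168.250.12   ...
```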
I also have an application composed of a ReactJS frontend and a Spring Boot backend, running in two separate containers in the same Pod. When I submit a form on the frontend, the application calls an API on the backend, which internally calls the Kubernetes API. To authenticate to the cluster, the backend uses a correctly configured .kube/config file.
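In case it matters, here is a minimal sketch of how the backend builds its client from the kubeconfig. I'm using the official Kubernetes Java client (io.kubernetes:client-java) for illustration; the class name is a placeholder:

```java
import io.kubernetes.client.openapi.ApiClient;
import io.kubernetes.client.openapi.Configuration;
import io.kubernetes.client.util.Config;

public class KubeConfigCheck {
    public static void main(String[] args) throws Exception {
        // Build an ApiClient from the kubeconfig on disk. The "server" field of
        // that file is the address every request is sent to, so the backend can
        // only reach the cluster if that address is routable from where it runs.
        ApiClient client = Config.fromConfig(
                System.getProperty("user.home") + "/.kube/config");
        Configuration.setDefaultApiClient(client);
        System.out.println("API server: " + client.getBasePath());
    }
}
```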
When the application (frontend/backend) runs outside the cluster, everything works fine. I use docker-compose to start up the two containers just for unit tests, and the .kube/config file points to the API server at https://192.168.250.10:6443. The problem is that when I run the application inside the containers, the IP 192.168.250.10 is no longer reachable and the call fails with a timeout exception.
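This is roughly what the compose file looks like (image names are placeholders; the relevant part is the kubeconfig mounted into the backend container):

```yaml
version: "3"
services:
  frontend:
    image: my-reactjs-frontend    # placeholder image name
    ports:
      - "3000:3000"
  backend:
    image: my-springboot-backend  # placeholder image name
    ports:
      - "8080:8080"
    volumes:
      # The backend reads this file and tries to reach https://192.168.250.10:6443
      - ~/.kube/config:/root/.kube/config:ro
```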
I am sure the application itself is fine, because the same application works on IBM Cloud, where the .kube/config file contains an API server URL with a publicly reachable IP.
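For reference, this is how I read the API server address currently set in my local kubeconfig:

```console
# Server URL of the current context in the kubeconfig
$ kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
https://192.168.250.10:6443

# Same information, human-readable
$ kubectl cluster-info
```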
My question is: which IP should I put in .kube/config when I run the application inside my local cluster? And how can I retrieve this IP with kubectl? Thanks in advance for any help.