I am deploying a Backend and a Frontend in a local Docker Desktop Kubernetes cluster. The Frontend is reachable via app.local, the Backend via app.local/backend.
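In case it matters, the two paths are routed by an Ingress roughly like the following (resource names, Service names and ports are placeholders, not my exact manifest):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress            # placeholder name
spec:
  rules:
  - host: app.local
    http:
      paths:
      - path: /backend         # routed to the Backend
        pathType: Prefix
        backend:
          service:
            name: backend      # placeholder Service name
            port:
              number: 8080     # placeholder port
      - path: /                # everything else goes to the Frontend
        pathType: Prefix
        backend:
          service:
            name: frontend     # placeholder Service name
            port:
              number: 80       # placeholder port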
app.local is defined in my hosts file on my MacBook like so:
127.0.0.1 app.local
In my browser I can reach both addresses. However, the Frontend also makes an internal REST call from its container to the Backend via app.local/backend, which works fine on the actual remote Kubernetes cluster. (I know this is not ideal and the call should go to the Kubernetes Service instead, but let's keep this scenario for now.)
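By "the Kubernetes Service" I mean an in-cluster call along these lines, where the Service name, namespace and port are placeholders:

wget http://backend.default.svc.cluster.local:8080/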
A wget from inside the container produces the following:
wget http://app.local/backend
--2021-05-20 16:50:05-- http://app.local/backend
Resolving app.local (app.local)... 127.0.0.1
Connecting to app.local (app.local)|127.0.0.1|:80... failed: Connection refused.
It seems that the Frontend inherits the hosts entry from my MacBook, since app.local is correctly resolved to 127.0.0.1. However, inside the container 127.0.0.1 refers, I guess, to the Frontend container itself, which is why the Backend is not reached. Is this assumption correct?
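To double-check where that 127.0.0.1 comes from, I could run something like this inside the Frontend Pod (frontend-pod is a placeholder name):

kubectl exec -it frontend-pod -- cat /etc/hosts
kubectl exec -it frontend-pod -- nslookup app.local

If the entry shows up in the container's /etc/hosts it is baked into the Pod itself; if not, the cluster DNS must be the one answering with 127.0.0.1.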
Conclusion: On my actual cluster this works without a problem, probably because the DNS entry for my real domain resolves to the correct IP address of the public server.
Is there any way I can solve this elegantly, or does the problem lie elsewhere and I am making a wrong assumption?