There is a pod (e.g. postgres) running in Kubernetes, and kubectl port-forward pod 15432:5432
is being used to expose it to the host.
Normally it is accessible on the host by running the postgres client: psql -h 127.0.0.1 -p 15432
, OR by accessing http://127.0.0.1:15432
, OR by establishing a TCP connection directly: echo > /dev/tcp/127.0.0.1/15432 && echo yes
. When a connection is established successfully, kubectl prints the message Handling connection for 15432
as verification.
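For reference, the host-side connectivity check can be sketched as a small script. The local listener below merely stands in for the running port-forward (the Python stand-in server is an assumption for illustration; in the real setup, kubectl port-forward would be listening on the port instead):

```shell
# Stand-in TCP listener on the forwarded port (in the real setup,
# `kubectl port-forward pod 15432:5432` is what listens here).
python3 -m http.server 15432 >/dev/null 2>&1 &
server_pid=$!
sleep 1

# bash's /dev/tcp pseudo-device: the redirect succeeds only if the
# TCP connect succeeds, so "yes" is printed only on success.
if echo > /dev/tcp/127.0.0.1/15432; then
  echo yes
fi

kill "$server_pid"
```

Note that /dev/tcp is a bash feature, not a real device file, so this test requires bash rather than sh.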
However, accessing the port-forwarded pod via 127.0.0.1
or 172.17.0.1
from inside a container is not possible, whether or not the flag --network=host
is used. It is only reachable through host.docker.internal
This could be a problem specific to Docker for Mac; I haven't verified it on Linux yet.
Here are the logs from running the connection tests inside Docker. They clearly show that the TCP connection cannot be established:
$ docker run --network=host -it --rm postgres:12.4 /bin/bash
# inside container
# unsuccessful for establishing TCP connection to 127.0.0.1:15432
root@docker-desktop:/# echo > /dev/tcp/127.0.0.1/15432 && echo yes
bash: connect: Connection refused
bash: /dev/tcp/127.0.0.1/15432: Connection refused
# unsuccessful for establishing TCP connection to 172.17.0.1:15432
root@docker-desktop:/# echo > /dev/tcp/172.17.0.1/15432 && echo yes
bash: connect: Connection refused
bash: /dev/tcp/172.17.0.1/15432: Connection refused
# no surprise: unsuccessful for psql 127.0.0.1
root@docker-desktop:/# psql -h 127.0.0.1 -p 15432
psql: error: could not connect to server: could not connect to server: Connection refused
Is the server running on host "127.0.0.1" and accepting
TCP/IP connections on port 15432?
# successful for host.docker.internal
root@docker-desktop:/# psql -h host.docker.internal -p 15432
Password for user root:
Here are some nslookup / ifconfig logs that may be useful:
Server: 192.168.65.1
Address: 192.168.65.1#53
Non-authoritative answer:
Name: host.docker.internal
Address: 192.168.65.2
bash-5.0# ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:11:00:02
inet addr:172.17.0.2 Bcast:172.17.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:12 errors:0 dropped:0 overruns:0 frame:0
TX packets:3 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:984 (984.0 B) TX bytes:202 (202.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:1 errors:0 dropped:0 overruns:0 frame:0
TX packets:1 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:29 (29.0 B) TX bytes:29 (29.0 B)
Why is there this difference in connectivity between the Docker container and the host? How does host.docker.internal solve the problem under the hood? Is there another way to solve the problem via docker run flags?
When using Docker for Mac or Windows, a virtual machine is used to run your containers.
Even with --net=host
the container does not run on your desktop directly, but on the VM, so 127.0.0.1 refers to the VM, not the host. You can use, as you stated, host.docker.internal
on Mac / Windows to get the IP of the real machine.
On Linux, containers run without a VM and therefore directly on the real host.
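As for solving it with docker run flags: on Docker 20.10 and later, the special host-gateway value of --add-host maps a name to the host's gateway address, which recreates host.docker.internal on platforms where it is not provided by default. A sketch, reusing the image and port from the question:

```shell
# Map host.docker.internal to the host's gateway IP (Docker 20.10+),
# then connect through it to the port-forward listening on the host.
docker run --rm --add-host=host.docker.internal:host-gateway \
  postgres:12.4 psql -h host.docker.internal -p 15432
```

Be aware that kubectl port-forward binds to 127.0.0.1 by default, so for the forwarded port to be reachable from a container's bridge network on Linux you may also need to run it with --address 0.0.0.0.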
You might also want to investigate whether telepresence could solve your use case in a more general and robust fashion: https://www.telepresence.io/