I have a kind Kubernetes cluster running on an Ubuntu VM; I created this cluster following the kind documentation for enabling Kubernetes ingress functionality.
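For reference, that guide creates the cluster with host ports 80/443 mapped through to the ingress node, roughly like this (a sketch of the documented config; my exact file may differ):
cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
EOF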
This cluster runs several services, three of which I want to expose externally. One is a REST-based service; the other two use WebSocket connections.
On the host VM, after following the docs and some fiddling, I can access these services by curl'ing localhost.
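For example (exact paths elided; -k because the ingress certificate is self-signed):
curl -vk https://localhost/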
I now want to expose these services via a specific interface (ens160) so that I can hit these services with some client-side automation another team has been building out.
My first attempt was to use iptables to map traffic coming in on 80/443 to 127.0.0.1, and this works well for the REST service.
# Redirect HTTPS arriving on ens160 to the ingress listener on localhost
sudo iptables -t nat -A PREROUTING -p tcp -i ens160 --dport 443 -j DNAT --to-destination 127.0.0.1:443
sudo iptables -A FORWARD -p tcp -d 127.0.0.1 --dport 443 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
# Same again for plain HTTP
sudo iptables -t nat -A PREROUTING -p tcp -i ens160 --dport 80 -j DNAT --to-destination 127.0.0.1:80
sudo iptables -A FORWARD -p tcp -d 127.0.0.1 --dport 80 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
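One caveat if you are reproducing this: as far as I understand, DNAT'ing external traffic to 127.0.0.1 only works when route_localnet is enabled on the inbound interface (I believe this was part of the fiddling mentioned above); without it the kernel drops the rewritten packets as martians:
sudo sysctl -w net.ipv4.conf.ens160.route_localnet=1
With the rules in place I can hit the REST service from another machine on the corp network, e.g. (placeholder IP standing in for the ens160 address):
curl -vk https://10.20.30.40/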
However, the WebSocket connections are not establishing.
For me this method seems like a 'flimsy' approach, and I am wondering whether there is a better way to expose this Kubernetes cluster to my 'corp network' than performing DNAT on packets coming in.
Am I going about this the wrong way?
Thanks, Max Sargent
I have resolved the issue; it was not related to anything I had done above.
By inspecting the traffic, I could see the connections were failing with:
HTTP 15020: SOCKET ERROR: self signed certificate Error: self signed certificate
at TLSSocket.onConnectSecure (node:_tls_wrap:1530:34)
at TLSSocket.emit (node:events:394:28)
at TLSSocket._finishInit (node:_tls_wrap:944:8)
at TLSWrap.ssl.onhandshakedone (node:_tls_wrap:725:12)
I resolved this by setting:
rejectUnauthorized: false
in the socket.io client options.
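For anyone hitting the same error, the option goes in the client connection options; a minimal sketch (the URL is a placeholder for wherever your service is exposed):
const { io } = require("socket.io-client");

// rejectUnauthorized: false tells Node's TLS stack to accept the
// ingress's self-signed certificate. Fine for a lab, not for production.
const socket = io("https://10.20.30.40", {
  rejectUnauthorized: false
});

socket.on("connect", () => console.log("connected as", socket.id));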
Please close this, as I have resolved it myself.
Thanks, Max Sargent