I'm following the instructions here to spin up a single-node-master Kubernetes install, and then planning to make a website hosted within it available via an nginx ingress controller exposed directly on the internet (on a physical server, not GCE, AWS, or another cloud).
Setup works as expected: I can hit the load balancer, flow through the ingress to the target echoheaders instance, and get my output. Good stuff.
The trouble comes when I port-scan the server's public internet IP and see all these open ports besides the ingress port (80):
Open TCP Port: 80 http
Open TCP Port: 4194
Open TCP Port: 6443
Open TCP Port: 8081
Open TCP Port: 10250
Open TCP Port: 10251
Open TCP Port: 10252
Open TCP Port: 10255
Open TCP Port: 38654
Open TCP Port: 38700
Open TCP Port: 39055
Open TCP Port: 39056
Open TCP Port: 44667
All of the extra ports correspond to cAdvisor, SkyDNS, and the various echoheaders and nginx instances, which for security reasons should not be bound to the server's public IP address. All of them are being injected into the host's KUBE-PORTALS-HOST iptables chain by kube-proxy, with bindings to the server's public IP.
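For reference, this is roughly how I'm inspecting the bindings kube-proxy writes (the chain name is the one it manages in the nat table; adjust if your version names it differently):

    # dump the nat table and show only the entries kube-proxy added for host ports
    sudo iptables-save -t nat | grep KUBE-PORTALS-HOST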
How can I get hyperkube to tell kube-proxy to bind only to the Docker bridge IP (172.x) or the private cluster IP (10.x) addresses?
You should be able to set the bind address on kube-proxy (http://kubernetes.io/docs/admin/kube-proxy/):
--bind-address=0.0.0.0: The IP address for the proxy server to serve on (set to 0.0.0.0 for all interfaces)
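A minimal sketch of what that looks like, assuming the node's cluster-facing address is 10.0.0.1 (illustrative only; substitute your own private IP, and keep whatever other flags you already pass to kube-proxy):

    # 10.0.0.1 is a placeholder for your private/cluster-facing address
    kube-proxy --bind-address=10.0.0.1 <your existing flags>

After restarting kube-proxy, re-run the port scan against the public IP (or dump the KUBE-PORTALS-HOST chain again) to confirm the proxied ports are now only listening on the private address.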