I have set up a CoreOS single-node cluster and can access the API from my client without any issues. I followed the guide. All deployed pods are running fine, and I can access them on the node without problems.
Now I am trying to make the services reachable from the outside world using Nginx-Ingress. The service/endpoint shows up in nginx (verified via the flag --v=2), but unfortunately it is not bound to the node IP. kubectl get ing
shows the node IP, but the port(s) are not opened on the node, and therefore I cannot access the endpoint/service from the outside world.
I tried specifying type: NodePort
on the corresponding services, but that did not seem to work. I also set hostNetwork: true
on the ingress-controller RC, but that did not help either.
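For reference, here is roughly what I tried. This is only a sketch; the names, labels, ports, and image are placeholders, not my exact manifests:

```yaml
# Sketch only; names, labels, ports, and image are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: echoheaders
spec:
  type: NodePort            # tried on the corresponding services
  selector:
    app: echoheaders
  ports:
  - name: http
    port: 80
    targetPort: 8080
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-ingress-controller
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      hostNetwork: true     # also tried on the ingress-controller RC
      containers:
      - name: nginx-ingress-controller
        image: nginx-ingress-controller-image   # placeholder
        args:
        - --v=2
```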
-- Update Start --
Just tried putting a service of type NodePort in front of the ingress-controller RC. The problem persists: the ports are not open on the node, only within the cluster IP range and the service IP range.
Does this hint at a network problem? iptables? I am unsure.
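The fronting service looks roughly like this (a sketch; the name, labels, and nodePort value are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-svc   # placeholder name
spec:
  type: NodePort
  selector:
    app: nginx-ingress      # placeholder; must match the controller pods' labels
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080         # placeholder; must be in the node-port range (default 30000-32767)
```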
-- Update End --
So, how can I access my services from the outside world on this single-node cluster? Any help is appreciated.
-- Update --
Added --hostname-override=EXTERNAL-IP
to the kube-proxy yaml. Now the service (10.3.0.186) is reachable, the endpoint (10.2.9.8:8080) is reachable from the node, and the following iptables entries are present:
-A KUBE-SEP-VN7N5XLSTJBPYG7C -s 10.2.9.8/32 -m comment --comment "default/echoheaders:http" -j KUBE-MARK-MASQ
-A KUBE-SEP-VN7N5XLSTJBPYG7C -p tcp -m comment --comment "default/echoheaders:http" -m tcp -j DNAT --to-destination 10.2.9.8:8080
-A KUBE-SERVICES -d 10.3.0.186/32 -p tcp -m comment --comment "default/echoheaders:http cluster IP" -m tcp --dport 80 -j KUBE-SVC-P5R2U4YB6QJZHBET
-A KUBE-SVC-P5R2U4YB6QJZHBET -m comment --comment "default/echoheaders:http" -j KUBE-SEP-VN7N5XLSTJBPYG7C
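Notably, all of these rules cover only the cluster IP; there is no KUBE-NODEPORTS entry among them. If the NodePort were programmed correctly, I would expect kube-proxy (iptables mode) to also add something along these lines (a sketch; the port 30080 is a placeholder):

```
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/echoheaders:http" -m tcp --dport 30080 -j KUBE-SVC-P5R2U4YB6QJZHBET
```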
-- Update End --