AWS + kubeadm (k8s 1.4). I tried following the README at:
https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx
but that doesn't seem to work. I asked around in Slack, and it seems the YAMLs are outdated, so I had to modify them as follows.
First I deployed default-http-backend using the YAML found on git:
Next, I had to modify the ingress RC:
(note the change to the healthz path to reflect default-backend, as well as the port change to 10254, which is apparently needed according to Slack)
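For reference, the modified liveness probe in my ingress RC looked roughly like this (a sketch only; the surrounding container fields are whatever the README's RC already defines, and 10254 is the controller's health/status port):

```yaml
# Sketch: liveness probe pointed at the controller's own /healthz
# endpoint on port 10254 instead of the default backend's port.
livenessProbe:
  httpGet:
    path: /healthz
    port: 10254
  initialDelaySeconds: 10
  timeoutSeconds: 1
```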
Everything is running fine: kubectl get pods shows the ingress controller, and kubectl get rc shows 1 1 1 for the ingress RC.
I then deployed the simple echoheaders application (per the git README):
kubectl run echoheaders --image=gcr.io/google_containers/echoserver:1.4 --replicas=1 --port=8080
kubectl expose deployment echoheaders --port=80 --target-port=8080 --name=echoheaders-x
Next I created a simple ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: echoheaders-x
    servicePort: 80
Both kubectl get ing and kubectl describe ing give me a good sign:
Name: test-ingress
Namespace: default
Address: 172.30.2.86 <--- this is my private IP
Default backend: echoheaders-x:80 (10.38.0.2:8080)
Rules:
Host Path Backends
---- ---- --------
* * echoheaders-x:80 (10.38.0.2:8080)
But attempting to go to the node's public IP doesn't seem to work; I am getting "unable to reach server".
@nate's answer is right.
https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#over-a-nodeport-service
has a bit more detail.
They do not recommend setting the service's node port range, though.
Unfortunately, it seems that using ingress controllers with Kubernetes clusters set up using kubeadm is not supported at the moment.
The reason for this is that the ingress controllers specify a hostPort in order to become available on the public IP of the node, but the cluster created by kubeadm uses the CNI network plugin which does not support hostPort at the moment.
You may have better luck picking a different way to set up the cluster which does not use CNI.
Alternatively, you can edit your ingress-rc.yaml to declare "hostNetwork: true" under the "spec:" section. Specifying hostNetwork will cause the containers to run using the host's network namespace, giving them access to the network interfaces, routing tables and iptables rules of the host. Think of this as equivalent to "docker run" with the option --network="host".
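As a sketch, the change goes in the pod template of the RC; the container details are whatever your ingress-rc.yaml already uses (elided here):

```yaml
# Sketch: hostNetwork on the ingress controller's pod template.
# With this set, the pod shares the node's network namespace,
# so the controller can bind directly to the host's ports.
spec:
  template:
    spec:
      hostNetwork: true
      containers:
      - name: nginx-ingress-controller
        # ...image, args, and ports as in your existing ingress-rc.yaml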
OK, for all those that came here wondering the same thing, here is how I solved it.
PRECURSOR: the documentation is ambiguous. Reading the docs, I was under the impression that running through the README would allow me to visit http://{MY_MASTER_IP} and get to my services. This is not true.
In order to reach the ingress controller, I had to create a service for it, and then expose that service via a nodePort. This allowed me to access the services (in the case of the README, echoheaders) via http://{MASTER_IP}:{NODEPORT}.
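A minimal sketch of such a service (the selector label here is an assumption; it must match whatever labels your ingress controller pods actually carry):

```yaml
# Hypothetical NodePort service in front of the ingress controller.
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
spec:
  type: NodePort
  selector:
    k8s-app: nginx-ingress-lb   # assumption: the label used by the controller RC
  ports:
  - port: 80
    targetPort: 80
    # without an explicit nodePort, Kubernetes picks a random port
    # from the service-node-port-range (30000-32767 by default)
```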
There is an "issue" with nodePort: you get a random port number, which somewhat defeats the purpose of ingress. To solve that I did the following:
First: I needed to edit kube-api to allow a lower nodePort range.
vi /etc/kubernetes/manifests/kube-apiserver.json
Then, in the kube-api container's arguments section, add: "--service-node-port-range=80-32767",
This will allow nodePorts from 80 to 32767.
** NOTE: I would probably not recommend this for production... **
Next, I did kubectl edit svc nginx-ingress-controller
and manually edited the nodePort to port 80.
This way, I can go to {MY_MASTER_IP} and get to echoheaders.
Now what I am able to do is have different domains pointed at {MY_MASTER_IP} and route to different services based on host (similar to the README).
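A host-based Ingress for that setup might look like this sketch (the domain is a hypothetical example, not from my cluster; echoheaders-x is the service from the README):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: host-ingress
spec:
  rules:
  - host: echo.example.com        # hypothetical domain pointed at {MY_MASTER_IP}
    http:
      paths:
      - backend:
          serviceName: echoheaders-x
          servicePort: 80
```

Adding more rules with different host values routes each domain to its own service through the same controller.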
You can just use the image nginxdemos/nginx-ingress:0.3.1; you need not build it yourself.