Specifically, why do I end up with two external IP addresses when I follow the directions on Google's website for setting up nginx ingress on GKE?
The two IP addresses are for an Ingress resource and a Service resource of type LoadBalancer:
> kubectl get ingress
NAME            HOSTS         ADDRESS   PORTS     AGE
nginx-ingress   example.com   1.1.1.1   80, 443   1d

> kubectl get service
NAME                            TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
hello-app                       ClusterIP      10.31.251.77   <none>        8080/TCP                     1d
kubernetes                      ClusterIP      10.31.240.1    <none>        443/TCP                      1d
nginx-ingress-controller        LoadBalancer   10.31.246.62   2.2.2.2       80:32603/TCP,443:31763/TCP   1d
nginx-ingress-default-backend   ClusterIP      10.31.241.48   <none>        80/TCP                       1d
Here is how I thought it worked:
User
^
|
Service resource of type LoadBalancer <-- Ingress annotated as class nginx
^
|
Pod resource with Nginx acting as ingress controller
^
|
Service resource of type ClusterIP
^
|
Pod resource with server serving message at /hello
This is basically the diagram on the tutorial page I linked to. So I expect the load balancer to be of L4 type and have an external IP (and not cost any money to use!). And I expect the Ingress (despite its name) not to have an external IP, because I mark it with the annotation
annotations:
  kubernetes.io/ingress.class: nginx
which Google is supposed to recognize as saying I do not want the Ingress resource to use their paid L7 HTTP Load Balancer but my own Nginx controller.
I do notice that my /hello page is accessible via the load balancer's IP address, but accessing the Ingress's address gives a connection refused error. However, it is the Ingress resource that has the host: and tls: settings. So which resource do I associate my TLS certificate with? And why does the Ingress resource specify a domain name when it is the load balancer's IP at which my website is accessible?
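For reference, my ingress-resource.yaml looks roughly like this (the TLS secret name is a placeholder; the host, backend service, and port match the kubectl output above):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - hosts:
    - example.com
    secretName: example-com-tls   # placeholder secret holding the TLS cert and key
  rules:
  - host: example.com
    http:
      paths:
      - path: /hello
        backend:
          serviceName: hello-app
          servicePort: 8080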
I believe you are a bit confused regarding the Ingress resource, so let me explain. After you run the commands from the tutorial:
helm install --name nginx-ingress stable/nginx-ingress --set rbac.create=true
kubectl apply -f ingress-resource.yaml
You will have the following situation:
$ kubectl get services
NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP       PORT(S)                      AGE
nginx-ingress-controller   LoadBalancer   10.11.245.77   external-ip-ONE   80:32172/TCP,443:31908/TCP   12m

$ kubectl get ingress
NAME               HOSTS   ADDRESS           PORTS   AGE
ingress-resource   *       external-ip-TWO   80      1m
Checking the external IPs in use, you will notice that:
external-ip-ONE - corresponds to the forwarding rule, therefore it is the IP you will see in the Load Balancer page of the Cloud Console
external-ip-TWO - corresponds to the external IP of the virtual machine (node) where the ingress controller pod is running
Therefore no extra IP is "wasted". Basically, you connect to the ingress controller, which routes the traffic to the different backends according to the specification in your Ingress resources.
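You can verify this yourself: list the controller pod together with the node it runs on, then look up that node's external IP (the label selector below is the one used by the stable/nginx-ingress chart and may differ in your setup):

$ kubectl get pods -o wide -l app=nginx-ingress,component=controller
$ kubectl get nodes -o wide

The NODE column of the first command tells you which VM hosts the ingress controller pod, and that node's EXTERNAL-IP in the second command should match the external-ip-TWO shown by kubectl get ingress.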