We are running an API server on GKE (Google Kubernetes Engine). We handle authorization with Google Cloud Endpoints and API keys, and we whitelist certain IP addresses on every API key. To make this work we had to switch from a LoadBalancer service to an ingress for exposing our API server, because the IP whitelisting does not work with a LoadBalancer service. Our ingress setup now looks similar to this:
apiVersion: v1
kind: Service
metadata:
  name: echo-app-nodeport
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: esp-echo
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echo-app-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "RESERVED_IP"
    kubernetes.io/ingress.allow-http: "false"
spec:
  tls:
  - secretName: SECRET_NAME
  backend:
    serviceName: echo-app-nodeport
    servicePort: 80
This setup works fine, including the IP whitelisting. My concern lies primarily with the NodePort service that seems to be needed to make the ingress work. I have read multiple sources [1][2] that strongly advise against using NodePorts to expose your application, yet most examples I find use this NodePort + Ingress combination. Can we safely use this setup, or should we migrate to another ingress controller (NGINX, Traefik, ...)?
My suspicion is that the GKE ingress actually lives outside your GKE cluster and forwards traffic into the cluster over the NodePort. That is probably why the combination of the GKE ingress and services exposed over ClusterIP doesn't work.
If you deploy an NGINX Ingress Controller on your GKE cluster instead, the ingress gateway runs inside your cluster (rather than forwarding traffic into it), so it can reach services exposed over ClusterIP.
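For illustration, the workload side could then look roughly like the sketch below. It assumes an NGINX Ingress Controller is already running in the cluster; the service name echo-app-clusterip is just an illustrative rename of your existing one, and the kubernetes.io/ingress.class annotation is what tells NGINX (rather than the default GCE controller) to pick the Ingress up:

apiVersion: v1
kind: Service
metadata:
  name: echo-app-clusterip
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: esp-echo
  type: ClusterIP              # only reachable inside the cluster, no NodePort opened
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echo-app-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"   # handled by the in-cluster NGINX controller
spec:
  tls:
  - secretName: SECRET_NAME
  backend:
    serviceName: echo-app-clusterip
    servicePort: 80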
You can then use ClusterIP-type Services for all your workload pods and a single LoadBalancer Service to expose the ingress controller itself outside the cluster. That way you avoid NodePort Services entirely.
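As a rough sketch of that externally facing piece: the official ingress-nginx manifests (or Helm chart) normally create this LoadBalancer Service for you, so you would not write it by hand; the selector labels below are assumed to match the standard controller deployment:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
spec:
  type: LoadBalancer           # the single externally reachable entry point
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https
  selector:
    app.kubernetes.io/name: ingress-nginx       # assumed labels from the standard install
    app.kubernetes.io/component: controller

Since your whitelisting depends on seeing the real client IP, also check externalTrafficPolicy on that Service: Local preserves the client source IP, while the default Cluster setting does not.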