NGINX Ingress creates NodePort rather than LoadBalancer

8/1/2020

I am new to this containerization stuff. I am running minikube on Ubuntu 18.04 and following the installation guide at https://kubernetes.github.io/ingress-nginx/deploy/, so I simply executed minikube addons enable ingress. When I execute kubectl get services -n ingress-nginx

it shows 
NAME                                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.100.216.141   <none>        80:32205/TCP,443:31915/TCP   5d2h
ingress-nginx-controller-admission   ClusterIP   10.106.58.189    <none>        443/TCP                      5d2h

However, based on the course I am following, the ingress-nginx-controller type should be LoadBalancer.

My ingress config:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  rules:
    - host: we-creators.dev
      http:
        paths:
          - path: /api/users/?(.*)
            backend:
              serviceName: auth-srv
              servicePort: 3000
          - path: /?(.*)
            backend:
              serviceName: client-srv
              servicePort: 3000
-- Mateusz Gebroski
kubernetes
kubernetes-ingress
nginx
ubuntu

1 Answer

8/6/2020

I have tested this with the two latest releases and, from what I can see, at the moment there is no Service associated with the ingress-nginx controller (deployed as a minikube addon).

Please take a look at the ports within the ingress-nginx deployment.

Enabling the ingress addon created a deployment in the kube-system namespace with this spec:

...
spec:
  containers:
  - args: 
    - --report-node-internal-ip-address
...
    ports:
    - containerPort: 80
      hostPort: 80
      name: http
      protocol: TCP
    - containerPort: 443
      hostPort: 443
      name: https
      protocol: TCP
...
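
If you want to check this on your own cluster, here is a quick sketch (the controller deployment may live in kube-system or in ingress-nginx depending on your minikube version, so adjust the namespace):

# Find where the controller deployment lives
kubectl get deployments --all-namespaces | grep ingress-nginx-controller

# Dump its container ports; hostPort entries confirm the hostPort-based setup
kubectl get deployment ingress-nginx-controller -n kube-system -o yaml | grep -B1 -A3 containerPort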

It looks like in the current minikube release, ingress-nginx uses hostPort instead of an nginx ingress NodePort service. The most important thing to check is whether your current CNI networking plugin supports port mapping.

From the official docs, this requires that:

The CNI networking plugin supports hostPort.

You can examine these settings inside the minikube node:

minikube -p your_name ssh
cat /etc/cni/net.d/your_config

"type": "portmap",
        "capabilities": {"portMappings": true},

    

Limitations are the same as for hostNetwork: true configuration:

One major limitation of this deployment approach is that only a single NGINX Ingress controller Pod may be scheduled on each cluster node, because binding the same port multiple times on the same network interface is technically impossible


--report-node-internal-ip-address

Because bare-metal nodes usually don't have an ExternalIP, one has to enable the --report-node-internal-ip-address flag, which sets the status of all Ingress objects to the internal IP address of all nodes running the NGINX Ingress controller.
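
You can check which address ends up on your Ingress object with:

# The ADDRESS column should show the node's internal IP once the controller has synced
kubectl get ingress ingress-service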

Kubernetes ServiceTypes allow you to specify what kind of Service you want. The default is ClusterIP.

Type values and their behaviors are:

  • ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType.

  • NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.

  • LoadBalancer: Exposes the Service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
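
In your case the controller is already exposed as a NodePort service, so you can reach it through the minikube node IP. A minimal sketch, assuming the NodePorts from your kubectl output (32205 for HTTP) and that you map we-creators.dev to the minikube IP in /etc/hosts:

# Get the node IP of the minikube VM
minikube ip

# Map the Ingress host rule to that IP (uses whatever minikube ip printed)
echo "$(minikube ip) we-creators.dev" | sudo tee -a /etc/hosts

# Hit the controller on the HTTP NodePort; the path matches your /api/users/?(.*) rule
curl http://we-creators.dev:32205/api/users/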

Special note for service type LoadBalancer

When the Service type is set to LoadBalancer, Kubernetes provides functionality equivalent to type equals ClusterIP to pods within the cluster and extends it by programming the (external to Kubernetes) load balancer with entries for the Kubernetes pods. The Kubernetes service controller automates the creation of the external load balancer, health checks (if needed), firewall rules (if needed) and retrieves the external IP allocated by the cloud provider and populates it in the service object.
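
There is no cloud provider behind minikube, but if your course expects the LoadBalancer type you can still get it working. A sketch, assuming you patch the addon-managed service (the addon may revert this change when it is re-applied) and use minikube tunnel to hand out an external IP:

# Switch the controller service to LoadBalancer
kubectl patch svc ingress-nginx-controller -n ingress-nginx -p '{"spec": {"type": "LoadBalancer"}}'

# In a separate terminal; creates a route so LoadBalancer services get an EXTERNAL-IP
minikube tunnel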

As a workaround in your bare-metal environment, you can use MetalLB.

MetalLB provides a network load-balancer implementation for Kubernetes clusters that do not run on a supported cloud provider, effectively allowing the usage of LoadBalancer Services within any cluster.
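
A minimal layer2 configuration sketch for MetalLB (this is the ConfigMap format used by MetalLB v0.9; newer releases configure this through CRDs). The address range here is an assumption, so pick a free range in the same subnet as your minikube node:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.49.240-192.168.49.250   # assumption: adjust to your node's network

Minikube also ships a metallb addon (minikube addons enable metallb) that can be configured with such an address range.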

-- Mark
Source: StackOverflow