How to test if the NGINX ingress controller of a K8S cluster is working correctly?

8/18/2020
  • Type of cluster: bare-metal cluster based on OpenNebula
  • Specs: 4 worker nodes, 8 CPUs per worker node, 32GB Memory/RAM per worker node

I am trying to set up an NGINX ingress controller for my cluster using the command below:

[root@onekube-ip-193-144-35-177 ~]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.34.1/deploy/static/provider/baremetal/deploy.yaml

This gives me the following output:

namespace/ingress-nginx unchanged
serviceaccount/ingress-nginx unchanged
configmap/ingress-nginx-controller configured
clusterrole.rbac.authorization.k8s.io/ingress-nginx unchanged
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx unchanged
role.rbac.authorization.k8s.io/ingress-nginx unchanged
rolebinding.rbac.authorization.k8s.io/ingress-nginx unchanged
service/ingress-nginx-controller-admission unchanged
service/ingress-nginx-controller unchanged
deployment.apps/ingress-nginx-controller created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission configured
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
job.batch/ingress-nginx-admission-create unchanged
job.batch/ingress-nginx-admission-patch unchanged
role.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
serviceaccount/ingress-nginx-admission unchanged
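
To confirm the controller pod actually came up before testing anything else, a generic check (not specific to this cluster) is:

kubectl get pods -n ingress-nginx
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=120s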

Then I edit the ingress-nginx-controller service with the command:

kubectl edit svc -n ingress-nginx ingress-nginx-controller

And I add the external IP of the K8S cluster to externalIPs under spec:

[...]
spec:
  clusterIP: 10.99.1.223
  externalIPs:
  - 193.144.35.177
  externalTrafficPolicy: Cluster
  ports:
[...]
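
For reference, the same edit can be made non-interactively with kubectl patch (a sketch using this cluster's external IP; substitute your own):

kubectl patch svc -n ingress-nginx ingress-nginx-controller \
  --type=merge -p '{"spec":{"externalIPs":["193.144.35.177"]}}'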

To test that the NGINX ingress controller is working, I should now be able to browse a subdomain (e.g. prometheus.grapevine-project.eu) that points to the IP address of the K8S cluster (as confirmed by a DNS lookup). If the controller has been set up correctly, the browser should show a "404 Not Found" page returned by the NGINX ingress controller. However, I am currently getting a "This site can't be reached. prometheus.grapevine-project.eu took too long to respond." error page in my browser.
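
For reference, an equivalent test from the command line that bypasses DNS entirely would be something like (the Host header only matters once an Ingress rule matches on it; for the controller's default 404 it can be omitted):

curl -v http://193.144.35.177/ -H "Host: prometheus.grapevine-project.eu"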

Is this the right/recommended way to test that the NGINX ingress controller is working correctly? Could there be any issues with my set-up of the controller?

PS

[root@onekube-ip-193-144-35-177 ~]# kubectl get svc -n ingress-nginx ingress-nginx-controller -o wide
NAME                       TYPE       CLUSTER-IP       EXTERNAL-IP      PORT(S)                      AGE   SELECTOR
ingress-nginx-controller   NodePort   10.105.197.205   193.144.35.177   80:30498/TCP,443:30781/TCP   14d   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
-- Paolo Marangio
bare-metal-server
kubernetes
kubernetes-ingress
nginx
nginx-ingress

2 Answers

8/19/2020

I'm not able to troubleshoot this via comments, so let's do it via answers. I'll be editing this post as we make progress.

From https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.34.1/deploy/static/provider/baremetal/deploy.yaml

we can see that the manifest creates a service named ingress-nginx-controller of type: NodePort.

That means that you'll have something like:

kubectl get svc ingress-nginx-controller -n ingress-nginx
NAME                       TYPE       CLUSTER-IP    EXTERNAL-IP      PORT(S)                      AGE
ingress-nginx-controller   NodePort   10.99.1.223   193.144.35.177   80:30498/TCP,443:30781/TCP   8m40s

I can see that exactly two ports show up in the NodePort range on your host (nmap reports them as filtered, which usually means a firewall is dropping the probes rather than the ports being closed):

PORT      STATE    SERVICE
30498/tcp filtered unknown
30781/tcp filtered unknown
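
That scan can be reproduced with something like this (a hypothetical reconstruction of the command behind the output above, covering the default NodePort range):

nmap -Pn -p 30000-32767 193.144.35.177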

Please check if you are able to access the app from inside the cluster via CLUSTER-IP:80 and CLUSTER-IP:443.
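
A quick way to run that check is a throwaway pod; a minimal sketch, using the cluster IP from your output (busybox's wget exits with an error on a 404, but seeing "HTTP/1.1 404 Not Found" in its output still proves the controller answered):

kubectl run tmp --rm -it --image=busybox --restart=Never -- wget -qO- http://10.99.1.223:80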

UPDATE:

I have just reproduced your setup and in my case it works perfectly.

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.34.1/deploy/static/provider/baremetal/deploy.yaml


$ kubectl get all -n ingress-nginx

NAME                                            READY   STATUS      RESTARTS   AGE
pod/ingress-nginx-admission-create-rh2b4        0/1     Completed   0          82m
pod/ingress-nginx-admission-patch-l7ttw         0/1     Completed   0          82m
pod/ingress-nginx-controller-547b58f6cb-whrck   1/1     Running     0          82m

NAME                                         TYPE        CLUSTER-IP    EXTERNAL-IP     PORT(S)                      AGE
service/ingress-nginx-controller             NodePort    10.0.12.124   <none>          80:31691/TCP,443:30114/TCP   82m
service/ingress-nginx-controller-admission   ClusterIP   10.0.1.61     <none>          443/TCP                      82m

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-nginx-controller   1/1     1            1           82m

NAME                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-nginx-controller-547b58f6cb   1         1         1       82m

NAME                                       COMPLETIONS   DURATION   AGE
job.batch/ingress-nginx-admission-create   1/1           6s         82m
job.batch/ingress-nginx-admission-patch    1/1           7s         82m

$ kubectl -n ingress-nginx get ep
NAME                                 ENDPOINTS                      AGE
ingress-nginx-controller             10.52.0.49:80,10.52.0.49:443   82m
ingress-nginx-controller-admission   10.52.0.49:8443                82m

Even without editing the service, I was able to send requests from my local PC to the K8s cluster (my firewall permits me to do that).

$ curl K8S_node_IP:31691
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.19.1</center>
</body>
</html>

$ curl K8S_node_IP:30114
<html>
<head><title>400 The plain HTTP request was sent to HTTPS port</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
<center>The plain HTTP request was sent to HTTPS port</center>
<hr><center>nginx/1.19.1</center>
</body>
</html>
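
The 400 above is expected when speaking plain HTTP to the HTTPS NodePort. Talking TLS to it instead should return the same 404 (the -k flag skips certificate validation, since the controller serves a self-signed default certificate out of the box):

$ curl -k https://K8S_node_IP:30114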

The ingress-nginx-controller service is of NodePort type, so in my case there is no need to edit it, because I already know my K8s_node_IP.

In order to troubleshoot the issue you have described (a timeout from the server), it is possible to do the following:

  • Check whether the issue is related to nginx-ingress or not. For that, I have a minimal container that can easily be deployed via the kubectl CLI.
$ kubectl create deployment server-gog -n ingress-nginx --image=nkolchenko/enea:server_go_latest
deployment.apps/server-gog created

$ kubectl expose -n ingress-nginx deployment server-gog --type=NodePort --port=8180 --selector=app=server-gog

$ kubectl get svc -o wide -n ingress-nginx server-gog
NAME         TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE   SELECTOR
server-gog   NodePort   10.0.10.254   <none>        8180:32068/TCP   76s   app=server-gog

### our app is available at K8S_node_IP:32068 and 10.0.10.254:8180

$ curl K8S_node_IP:32068/some_string
Hello from ServerGo. You requested: /some_string

If the above works, then the issue is within your ingress-nginx setup, and you need to check it component by component.
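
A rough per-component checklist might look like this (generic commands, not tailored to this cluster; 30498 is the HTTP NodePort from the question):

# 1. Is the controller pod running and are its logs clean?
kubectl -n ingress-nginx get pods -o wide
kubectl -n ingress-nginx logs deployment/ingress-nginx-controller

# 2. Does the service have endpoints (i.e. does it actually target the pod)?
kubectl -n ingress-nginx get endpoints ingress-nginx-controller

# 3. Is the NodePort reachable from the node itself, and only then from outside?
curl -v http://localhost:30498/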

-- Nick
Source: StackOverflow

8/18/2020

I'll try to answer both your questions, but that might leave you with more work to be done before your ingress controller is fully operational.

  1. Are you testing your Nginx ingress controller the right way? I would say no. The best way to test the controller is to create an Ingress object that routes traffic to a simple service like echoserver, and to verify that traffic actually reaches that service (see the sketch after this list). You are correct that the HTTP request you performed should have returned a 404, but even if you achieve that, there may still be issues you'd miss before completing the entire loop (things like SSL termination are the most obvious pitfalls, but there are more).
  2. It does indeed seem like there are some issues with your set-up. What is the IP you are using as an external IP? Why do you expect it to route traffic into your cluster?
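
A minimal sketch of such an end-to-end test, assuming the networking.k8s.io/v1beta1 Ingress API that matches controller-v0.34.1 and reusing the hostname from the question (save as echo-test.yaml, then kubectl apply -f echo-test.yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: echoserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echoserver
  template:
    metadata:
      labels:
        app: echoserver
    spec:
      containers:
      - name: echoserver
        image: k8s.gcr.io/echoserver:1.4   # tiny HTTP server that echoes request details
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: echoserver
spec:
  selector:
    app: echoserver
  ports:
  - port: 8080
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: echo-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"   # ensure this controller claims the Ingress
spec:
  rules:
  - host: prometheus.grapevine-project.eu   # hostname from the question
    http:
      paths:
      - path: /
        backend:
          serviceName: echoserver
          servicePort: 8080

A request to http://<external-or-node-IP>/ with the Host header prometheus.grapevine-project.eu should then return the echo output instead of the controller's default 404.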
-- Yaron Idan
Source: StackOverflow