Stuck nginx ingress

6/20/2021

I deployed the nginx ingress controller with kubespray. I have 3 masters, 2 workers, and 5 ingress-nginx-controller pods. I shut down one worker, but I still see 5 nginx ingress pods listed, one on every host.

[root@node1 ~]# kubectl get pod -n ingress-nginx -o wide
NAME                             READY   STATUS    RESTARTS   AGE     IP              NODE    NOMINATED NODE   READINESS GATES
ingress-nginx-controller-5828c   1/1     Running   0          7m4s    10.233.96.9     node2   <none>           <none>
ingress-nginx-controller-h5zzl   1/1     Running   0          7m42s   10.233.92.7     node3   <none>           <none>
ingress-nginx-controller-wrvv6   1/1     Running   0          6m11s   10.233.90.17    node1   <none>           <none>
ingress-nginx-controller-xdkrx   1/1     Running   0          5m44s   10.233.105.25   node4   <none>           <none>
ingress-nginx-controller-xgpn2   1/1     Running   0          6m38s   10.233.70.32    node5   <none>           <none>
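
Not from the original post, but a useful first check in this situation is whether the control plane has even marked the powered-off worker as NotReady; assuming node5 is the node that was shut down (the question does not say which one):

[root@node1 ~]# kubectl get nodes
[root@node1 ~]# kubectl get pod -n ingress-nginx -o wide --field-selector spec.nodeName=node5

A powered-off node normally flips to NotReady after roughly 40 seconds, while its pods can keep showing Running for several minutes. If the controller runs as a DaemonSet (as kubespray's addon does by default), its pods are never evicted from an unreachable node at all, which would explain why all five controllers still appear in the listing.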

The problem is that I am getting a 503 error from my app after one node was powered off. Is there an option to disconnect a non-working ingress-nginx-controller, or some way to use round robin? Or could I detect a non-working ingress-nginx-controller and redirect traffic to a working one?
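
A 503 from ingress-nginx usually means it has no healthy endpoint for the backend Service rather than a broken controller, so it is worth checking where the application pods themselves ended up; my-app-svc and my-namespace below are placeholders, not names from the question:

[root@node1 ~]# kubectl get endpoints my-app-svc -n my-namespace
[root@node1 ~]# kubectl get pod -n my-namespace -o wide

If the endpoints list is empty, the controller has nothing to forward to, no matter how many healthy controller pods remain.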

-- Martin Smola
kubernetes
nginx-ingress

1 Answer

6/20/2021

It turned out I had shut down the node where the app itself was running. Now everything is working.
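
As a quick follow-up check, listing everything that was scheduled on the powered-off node makes this kind of mistake obvious; node5 is again just an assumed name for the worker that was shut down:

[root@node1 ~]# kubectl get pod -A -o wide --field-selector spec.nodeName=node5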

-- Martin Smola
Source: StackOverflow