I have a Kubernetes cluster which I am running tests on, and I have set up an NGINX Ingress controller using this image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.11.0
As far as I understand Ingress controllers so far, the setup seems to work as expected, except that the ingress controller's default backend path is / by default. My current issue with this is that I have an Ingress to a service (Harbor) whose default path is also /. Hence, I cannot get to the service and always receive the default backend's 404 response. I have tried changing the Harbor service's ingress path to something other than /, but upon calling the changed path, Harbor returns a 200 page with just a "Loading..." string on it (I don't know if there is something hardcoded in Harbor that stops it working with any path other than /?).
My question then is: is it possible to change the default backend's default path to something other than /? Or remove the default backend altogether? (I read online that it is not possible to remove the default backend.) What options do I then have?
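One option I have seen mentioned is overriding the default backend per Ingress via an annotation, so unmatched requests for that host go to a service of your choosing instead of the controller's global default backend. This is just a sketch: the nginx.ingress.kubernetes.io/default-backend annotation may require a newer controller version than 0.11.0, and my-custom-backend is a placeholder service name, not something from my cluster:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: harbor
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # Placeholder: send requests that match no rule in this Ingress to a
    # custom service instead of the controller-wide default backend.
    # Check that your controller version supports this annotation.
    nginx.ingress.kubernetes.io/default-backend: my-custom-backend
spec:
  rules:
    - host: k8s-dp-2
      http:
        paths:
          - path: /
            backend:
              serviceName: ui
              servicePort: 80
```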
--- EDIT: Configurations Used ---
Ingress Controller YAML:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.11.0
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --default-ssl-certificate=$(POD_NAMESPACE)/default-tls-secret
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --annotations-prefix=nginx.ingress.kubernetes.io
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
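Note that the --default-backend-service flag above points at a Service named default-http-backend in the controller's namespace, so that service and its backing pods must exist. If yours is missing, a minimal sketch of what it typically looks like (the upstream defaultbackend image and its tag are assumptions; it serves 404 on / and 200 on /healthz, listening on 8080):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-nginx
spec:
  selector:
    app: default-http-backend
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: default-http-backend
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      containers:
        - name: default-http-backend
          # Assumed image; substitute whatever default backend you deploy.
          image: gcr.io/google_containers/defaultbackend:1.4
          ports:
            - containerPort: 8080
```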
Ingress YAML:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: harbor
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  tls:
    - hosts:
        - k8s-dp-2
  rules:
    - host: k8s-dp-2
      http:
        paths:
          - path: /
            backend:
              serviceName: ui
              servicePort: 80
          - path: /v2
            backend:
              serviceName: registry
              servicePort: repo
          - path: /service
            backend:
              serviceName: ui
              servicePort: 80
Service YAML:
apiVersion: v1
kind: Service
metadata:
  name: ui
spec:
  ports:
    - port: 80
  selector:
    name: ui-apps
The setup was actually correct and the ingress routes to Harbor via nginx ingress controller are now working as expected.
To clarify, the k8s test cluster is running on VirtualBox VMs (CentOS 7) hosted on a Windows 10 machine, and when I tried the Harbor URL again this morning (after rebooting the host Windows 10 machine and the VirtualBox VMs) the Harbor page started loading okay.
So my guess is that the answer was that a restart was needed (I am not sure why, however; I am not clear whether there are cases in which a k8s node needs to be restarted after a k8s resource change is applied).
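For anyone hitting something similar: a full node or host reboot is usually not required. Restarting just the ingress controller pod (so the Deployment recreates it and it rebuilds its nginx configuration) is often enough. A sketch, assuming the namespace and labels from the Deployment above:

```shell
# Delete the controller pod; the Deployment recreates it immediately.
kubectl -n ingress-nginx delete pod -l app=ingress-nginx

# Watch the replacement pod come up.
kubectl -n ingress-nginx get pods -w
```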