I'm testing a simple app on a local Kubernetes cluster composed of a React front end (FE) and a Spring Boot back end (BE). The cluster runs inside Docker Desktop for Windows, Kubernetes version 1.14.8 (Docker Desktop 2.1.0.5).
My problem is that the configured ingress seems unable to route traffic to the BE deployment, while the FE one works fine (I can see the React app in the browser, but its REST calls to the BE fail). I've tried different solutions but I cannot figure out what's wrong with my configuration.
The FE image exposes port 3000 and the BE image exposes port 8080 (with root path /apptest). Running the images with docker run, both work as expected, answering requests on those ports.
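A rough sketch of that standalone check (the exact flags and curl paths here are illustrative, not copied from my actual commands):

# Run each image on its own and hit it directly; host ports mirror the exposed ones.
docker run --rm -p 3000:3000 registryipaddress:5000/apptestgroup/apptest-fe:latest
docker run --rm -p 8080:8080 registryipaddress:5000/apptestgroup/apptest:latest

# The FE answers on 3000, the BE on 8080 under its /apptest root path.
curl http://localhost:3000/
curl http://localhost:8080/apptest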
For the k8s configuration I've defined a deployment for both images, with containerPort 3000 for the FE and containerPort 8080 for the BE. I then created two ClusterIP services, one for the FE with port 3000 and targetPort 3000, and one for the BE with port 8080 and targetPort 8080.
The ingress is configured to route any request with path / to servicePort 3000 (the FE) and any request starting with /api to servicePort 8080 (the BE, stripping the /api prefix). The FE is configured to prefix its backend calls with /api.
When applying the files to the k8s cluster everything starts up correctly with no errors inside the pods, and I can visit the React app at http://localhost. But calls to the backend at http://localhost/api/apptest fail with a 502 Bad Gateway error.
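Before digging further, a couple of sanity checks can confirm that both services have endpoints and that the ingress references them (a sketch using the resource names from the manifests below):

# Do the services select any pods? An empty ENDPOINTS column usually means a
# label/selector mismatch or pods that never became ready.
kubectl get endpoints apptest-fe-cluster-ip apptest-cluster-ip

# Does the ingress reference the expected services and ports?
kubectl describe ingress apptest-ingress-service

# Are both pods Running and Ready?
kubectl get pods -o wide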
FE Dockerfile:
FROM node:12-alpine as builder
WORKDIR /app
COPY ./package*.json ./
RUN npm install
COPY . .
RUN npm run build
FROM nginx
EXPOSE 3000
COPY ./nginx/default.conf /etc/nginx/conf.d/default.conf
COPY --from=builder /app/build /usr/share/nginx/html
FE Nginx config:
server {
    listen 3000;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
    }
}
BE Dockerfile:
FROM java:8
VOLUME /tmp
ARG JAR_FILE
ADD ${JAR_FILE} app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
FE Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apptest-fe-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: apptest-fe # same as in the template specified below
  template:
    metadata:
      labels:
        component: apptest-fe
    spec:
      containers:
        - name: apptest-fe
          imagePullPolicy: Always
          image: registryipaddress:5000/apptestgroup/apptest-fe:latest
          resources:
            limits:
              memory: "128Mi"
              cpu: "10m"
          ports:
            - containerPort: 3000
FE ClusterIP:
apiVersion: v1
kind: Service
metadata:
  name: apptest-fe-cluster-ip
spec:
  type: ClusterIP
  selector:
    component: apptest-fe
  ports:
    - port: 3000
      targetPort: 3000
BE Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apptest-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: apptest # same as in the template specified below
  template:
    metadata:
      labels:
        component: apptest
    spec:
      containers:
        - name: apptest
          imagePullPolicy: Always
          # By default Kubernetes pulls the image from Docker Hub.
          # If the image tag starts with an IP address, it is interpreted as
          # the registry to pull the image from.
          image: registryipaddress:5000/apptestgroup/apptest:latest
          resources:
            limits:
              memory: "128Mi"
              cpu: "10m"
          ports:
            - containerPort: 8080
BE ClusterIP:
apiVersion: v1
kind: Service
metadata:
  name: apptest-cluster-ip
spec:
  type: ClusterIP
  selector:
    component: apptest
  ports:
    - port: 8080
      targetPort: 8080
Ingress service:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: apptest-ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - http:
        paths:
          - path: /?(.*)
            backend:
              serviceName: apptest-fe-cluster-ip
              servicePort: 3000
          - path: /api/?(.*)
            backend:
              serviceName: apptest-cluster-ip
              servicePort: 8080
The 502 Error from Chrome:
xhr.js:172 POST http://localhost/api/apptest/documents/base64/aaa333 502 (Bad Gateway)...
createError.js:16 Uncaught (in promise) Error: Request failed with status code 502
at e.exports (createError.js:16)
at e.exports (settle.js:17)
at XMLHttpRequest.f.onreadystatechange (xhr.js:59)
I'm kinda convinced the problem is the nginx inside the FE container that serves the React app, which somehow bypasses the ingress and tries to route traffic to a path it doesn't know, but I'm not sure how to work around it.
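One way to take the FE nginx (and the ingress) out of the picture is to hit the BE service directly; a sketch:

# Forward a local port straight to the BE ClusterIP service.
kubectl port-forward svc/apptest-cluster-ip 8080:8080

# In another shell: if this also fails, the problem is inside the BE pod,
# not in the ingress routing or in the FE nginx.
curl http://localhost:8080/apptest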
UPDATE
I've tried mapping the FE to /app in the ingress, to check whether the problem was the nginx inside the container. Navigating to http://localhost/app the React app works (even if not fully), but trying to contact http://localhost/api/apptest with Postman still gives the 502 error.
It turns out the problem was the resource allocation for the backend app. I had set it very low because otherwise the pod wouldn't start up on my PC. I added just a little more resources to the deployment config and now it all works as expected.
What made me think the problem was something else is that even with low resources the pod would still start up normally (and stay up) and wouldn't report any problems inside the container, even though the Tomcat inside the Spring Boot app clearly wasn't functioning correctly.
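For reference, the bump can be done in the deployment YAML or imperatively; a sketch with placeholder values (not the exact numbers I ended up using):

# Give the BE container more headroom; the values below are only examples.
kubectl set resources deployment apptest-deployment -c apptest \
  --requests=cpu=250m,memory=256Mi --limits=cpu=500m,memory=512Mi

# Watch the rollout and check actual consumption afterwards.
kubectl rollout status deployment/apptest-deployment
kubectl top pod    # requires metrics-server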
It's not an ingress issue; I've reproduced your scenario and your syntax is correct.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress-service
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - host: hello-world.info
      http:
        paths:
          - path: /?(.*)
            backend:
              serviceName: web
              servicePort: 8080
          - path: /api/?(.*)
            backend:
              serviceName: webv2
              servicePort: 8080
Test output:
user@minikube:~$ curl http://hello-world.info/aaa.any
Hello, world! Version: 1.0.0
Hostname: web-9bbd7b488-hlxd4
user@minikube:~$ curl http://hello-world.info/api/bbb.any
Hello, world! Version: 2.0.0
Hostname: web2-74cf4946cc-8c586
user@minikube:~$ curl http://hello-world.info/
Hello, world! Version: 1.0.0
Hostname: web-9bbd7b488-hlxd4
user@minikube:~$ curl http://hello-world.info/api/
Hello, world! Version: 2.0.0
Hostname: web2-74cf4946cc-8c586
Github Ingress-nginx Docs - Ingress Path Matching:
In order to enable more accurate path matching, ingress-nginx first orders the paths by descending length before writing them to the NGINX template as location blocks
Your last update made it clear that the problem is in your backend app, since it's returning the same 502 Bad Gateway. Please review it thoroughly.
The first ingress rule will match every time, so requests won't reach the BE. Check how the location blocks are generated: https://kubernetes.github.io/ingress-nginx/user-guide/ingress-path-matching/
You will end up with the following inside nginx:
location ~* ^/?(.*) {
    ...
}

location ~* "^/api/?(.*)" {
    ...
}
A piece of advice for when you have ingress issues: always check the logs to see which service receives the requests. Also, while debugging, access http://localhost/api/apptest/documents/base64/aaa333 directly in the browser; it's less error prone than going through the frontend.
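A sketch of what that usually looks like (the controller's namespace and labels depend on how ingress-nginx was installed):

# Ingress controller access log: shows which upstream each request went to
# and why a 502 was returned.
kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx -f

# Backend pod log: shows whether the request ever reached Spring Boot.
kubectl logs -l component=apptest -f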