app on path instead of root not working for Kubernetes Ingress

6/21/2020

I have an issue at work with K8s Ingress and I will use fake examples here to illustrate my point. Assume I have an app called Tweeta and my company is called ABC. My app currently sits on tweeta.abc.com. But we want to migrate our app to app.abc.com/tweeta.

My current ingress in K8s is as below:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: tweeta-ingress
spec:
  rules:
  - host: tweeta.abc.com
    http:
      paths:
      - path: /
        backend:
          serviceName: tweeta-frontend
          servicePort: 80
      - path: /api
        backend:
          serviceName: tweeta-backend
          servicePort: 80

For migration, I added a second ingress:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: tweeta-ingress-v2
spec:
  rules:
  - host: app.abc.com
    http:
      paths:
      - path: /tweeta
        backend:
          serviceName: tweeta-frontend
          servicePort: 80
      - path: /tweeta/api
        backend:
          serviceName: tweeta-backend
          servicePort: 80

For the sake of continuity, I would like to have 2 ingresses pointing to my services at the same time. When the new domain is ready and working, I would just need to tear down the old ingress.

However, I am not having any luck with the new domain using this ingress. Is it because the app is hosted on a path and the K8s ingress needs to host it on the root? Or is there some configuration I would need to do on the nginx side?

-- aijnij
kubernetes
kubernetes-ingress
nginx

2 Answers

7/10/2020

I assume your frontend Pod expects the path / and your backend Pod expects the path /api.

The first ingress config doesn't transform the request, so it reaches the frontend (Fpod) and backend (Bpod) Pods as is:

http://tweeta.abc.com/     -> ingress -> svc -> Fpod: [ http://tweeta.abc.com/    ] 
http://tweeta.abc.com/api  -> ingress -> svc -> Bpod: [ http://tweeta.abc.com/api ] 

but with the second ingress it doesn't work as expected:

http://app.abc.com/tweeta      -> ingress -> svc -> Fpod: [ http://app.abc.com/tweeta    ] 
http://app.abc.com/tweeta/api  -> ingress -> svc -> Bpod: [ http://app.abc.com/tweeta/api    ] 

The request path seen by the Pods changes from / to /tweeta and from /api to /tweeta/api, which is probably not the expected behavior. Usually the application in a Pod doesn't care about the Host header, but the path must be correct. If your Pods aren't designed to respond under the additional /tweeta prefix, they will likely respond with 404 (Not Found) when the second ingress is used.
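If you want to confirm this before touching the ingress, a quick check (just a sketch, assuming the frontend Service really is named tweeta-frontend and serves port 80) is to port-forward to the Service and request both paths directly:

$ kubectl port-forward svc/tweeta-frontend 8080:80
# in another terminal:
$ curl -i http://localhost:8080/         # path the app expects, should return 200
$ curl -i http://localhost:8080/tweeta   # extra prefix, likely 404 if the app only serves /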

To fix it, you have to add a rewrite annotation so that the /tweeta prefix is stripped from the request before it reaches the Pods:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: tweeta-ingress-v2
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: app.abc.com
    http:
      paths:
      - path: /tweeta(/|$)(.*)
        backend:
          serviceName: tweeta-frontend
          servicePort: 80
      - path: /tweeta(/)(api$|api/.*)
        backend:
          serviceName: tweeta-backend
          servicePort: 80

The result will be as follows, which is exactly how it is supposed to work:

http://app.abc.com/tweeta            -> ingress -> svc -> Fpod: [ http://app.abc.com/    ] 
http://app.abc.com/tweeta/blabla     -> ingress -> svc -> Fpod: [ http://app.abc.com/blabla    ] 

http://app.abc.com/tweeta/api        -> ingress -> svc -> Bpod: [ http://app.abc.com/api    ] 
http://app.abc.com/tweeta/api/blabla -> ingress -> svc -> Bpod: [ http://app.abc.com/api/blabla    ] 
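If DNS for app.abc.com is not set up yet, you can still verify the rewrite by talking to the ingress controller directly and supplying the Host header yourself (the controller address below is just a placeholder):

$ curl -i http://<ingress-controller-address>/tweeta --header 'Host: app.abc.com'
$ curl -i http://<ingress-controller-address>/tweeta/api --header 'Host: app.abc.com'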

To check the ingress controller's logs and its generated configuration, use the following commands respectively:

$ kubectl logs -n ingress-controller-namespace ingress-controller-pods-name

$ kubectl exec -it -n ingress-controller-namespace ingress-controller-pods-name -- cat /etc/nginx/nginx.conf > local-file-name.txt && less local-file-name.txt
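For example, to confirm the rewrite made it into the generated configuration, you can search the dumped file (the pattern below is just a rough filter):

$ grep -nE 'tweeta|rewrite' local-file-name.txt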
-- VAS
Source: StackOverflow

6/22/2020

I couldn't reproduce your problem, so I'll describe exactly how I tried to reproduce it; you can follow the same steps, and depending on where (or whether) they fail for you, we can find what is causing the issue.

First of all, make sure you are using an NGINX Ingress controller, as it's more powerful.

I installed my NGINX Ingress using Helm following these steps:

$ curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
$ helm repo add stable https://kubernetes-charts.storage.googleapis.com
$ helm repo update
$ helm install nginx-ingress stable/nginx-ingress
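Before creating any ingresses, it is worth checking that the controller actually came up (these resource names are what the stable chart creates for a release called nginx-ingress):

$ kubectl get pods,svc | grep nginx-ingress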

For the deployment, we are going to use the hello-app example below.

Deploy a hello, world app

  1. Create a Deployment using the following command:

    kubectl create deployment web --image=gcr.io/google-samples/hello-app:1.0

    Output:

    deployment.apps/web created
  2. Expose the Deployment:

    ```shell
    kubectl expose deployment web --type=NodePort --port=8080
    ```
    
    Output:
    
    ```shell
    service/web exposed
    ```
    

    Create Second Deployment

  3. Create a v2 Deployment using the following command:

    kubectl create deployment web2 --image=gcr.io/google-samples/hello-app:2.0

    Output:

    deployment.apps/web2 created
  4. Expose the Deployment:

    ```shell
    kubectl expose deployment web2 --port=8080 --type=NodePort
    ```
    
    Output:
    
    ```shell
    service/web2 exposed
    ```
    

    At this point we have the Deployments and Services running:

$ kubectl get deployments.apps 
NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
web                             1/1     1            1           24m
web2                            1/1     1            1           22m
$ kubectl get service
NAME                            TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
kubernetes                      ClusterIP      10.96.0.1        <none>        443/TCP                      5d5h
nginx-ingress-controller        LoadBalancer   10.111.183.151   <pending>     80:31974/TCP,443:32396/TCP   54m
nginx-ingress-default-backend   ClusterIP      10.104.30.84     <none>        80/TCP                       54m
web                             NodePort       10.102.38.233    <none>        8080:31887/TCP               24m
web2                            NodePort       10.108.203.191   <none>        8080:32405/TCP               23m

For the ingress, we are going to use the one provided in the question but we have to change the backends:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: tweeta-ingress
spec:
  rules:
  - host: tweeta.abc.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web
          servicePort: 8080
      - path: /api
        backend:
          serviceName: web2
          servicePort: 8080          
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: tweeta-ingress-v2
spec:
  rules:
  - host: app.abc.com
    http:
      paths:
      - path: /tweeta
        backend:
          serviceName: web
          servicePort: 8080
      - path: /tweeta/api
        backend:
          serviceName: web2
          servicePort: 8080     
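Since tweeta.abc.com and app.abc.com are made-up domains, the curl calls below assume they resolve to the ingress controller. Locally you can fake that with an /etc/hosts entry or with curl --resolve (the IP below is just an example address for the controller):

$ echo "192.168.49.2 tweeta.abc.com app.abc.com" | sudo tee -a /etc/hosts
# or, without touching /etc/hosts:
$ curl --resolve tweeta.abc.com:80:192.168.49.2 http://tweeta.abc.com/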

Now let's test our ingresses:

$ curl tweeta.abc.com
Hello, world!
Version: 1.0.0
Hostname: web-6785d44d5-j8bgk

$ curl tweeta.abc.com/api
Hello, world!
Version: 2.0.0
Hostname: web2-8474c56fd-lx55n

$ curl app.abc.com/tweeta
Hello, world!
Version: 1.0.0
Hostname: web-6785d44d5-j8bgk

$ curl app.abc.com/tweeta/api
Hello, world!
Version: 2.0.0
Hostname: web2-8474c56fd-lx55n

As can be seen, everything is working fine with no modifications to your ingresses.
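If the same manifests don't behave this way in your cluster, comparing how your controller rendered the rules is a quick first check:

$ kubectl describe ingress tweeta-ingress tweeta-ingress-v2
$ kubectl get ingress -o wide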

-- Mark Watney
Source: StackOverflow