Google Kubernetes Engine: How to define one Ingress for multiple namespaces?

11/6/2019

On GKE, each Kubernetes Ingress is backed by a Compute Engine load balancer, which has a cost. For example, over 2 months I paid €16.97.

In my cluster I have 3 namespaces (default, dev and prod), so to reduce cost I would like to avoid spawning 3 load balancers. The question is: how do I configure the current one to point to the right namespace?

GKE requires the Ingress's target Service to be of type NodePort, and I am stuck because of that constraint.

I would like to do something like:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: dev
  annotations: # enable SSL certificate
    kubernetes.io/ingress.global-static-ip-name: lb-ip-adress
spec:
  rules:
    - host: dev.domain.com
      http:
        paths:
          - path: /*
            backend:
              serviceName: dev-service # This is the current case, 'dev-service' is a NodePort
              servicePort: http

    - host: domain.com
      http:
        paths:
          - path: /*
            backend:
              serviceName: prod-service # This service lives in the 'dev' namespace and is of type ExternalName. Its purpose is to point to the real target service living in the 'prod' namespace.
              servicePort: http

    - host: www.domain.com
      http:
        paths:
          - path: /*
            backend:
              serviceName: prod-service
              servicePort: http

As GKE requires the backend Service to be a NodePort, I am stuck with prod-service.
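For reference, that prod-service would be roughly the following ExternalName Service ('real-prod-service' stands in for the actual Service name in the prod namespace):

apiVersion: v1
kind: Service
metadata:
  name: prod-service
  namespace: dev
spec:
  type: ExternalName
  # 'real-prod-service' is a placeholder for the actual Service name in 'prod'
  externalName: real-prod-service.prod.svc.cluster.local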

Any help will be appreciated.

Thanks a lot

-- akuma8
google-kubernetes-engine
kubernetes
kubernetes-ingress

2 Answers

11/6/2019

You can use the nginx-ingress controller. It is far more flexible and will only use one GCP load balancer for all ingress objects. Paired with cert-manager, you can have free SSL that is essentially fully managed.
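As a rough sketch (assuming the nginx-ingress controller is already installed, and reusing the hostnames from the question), each namespace keeps its own Ingress object annotated with the nginx class, and they all share the controller's single load balancer:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: dev-ingress
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: nginx   # route through the shared nginx-ingress controller
spec:
  rules:
    - host: dev.domain.com
      http:
        paths:
          - path: /
            backend:
              serviceName: dev-service
              servicePort: http
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: prod-ingress
  namespace: prod
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: domain.com
      http:
        paths:
          - path: /
            backend:
              serviceName: prod-service   # the real Service in 'prod', no ExternalName indirection needed
              servicePort: http

With cert-manager also installed, adding its cert-manager.io/cluster-issuer annotation to these Ingress objects is roughly what gets you the free, managed certificates; the details depend on how the issuer is set up.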

-- verdverm
Source: StackOverflow

2/21/2020

OK, here is what I have been doing. I have only one Ingress with one backend Service pointing to nginx.

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
spec:
  backend:
    serviceName: nginx-svc
    servicePort: 80

And in your nginx deployment/controller you can define a ConfigMap with typical nginx configuration. This way you use one Ingress and target multiple namespaces.

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  default.conf: |
    server {
      listen 80;
      listen [::]:80;
      server_name  _;

      location / {
        add_header Content-Type text/plain;
        return 200 "OK.";
      }

      location /segmentation {
        proxy_pass http://myservice.mynamespace.svc.cluster.local:80;
      }
    }
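The proxy_pass upstream is just an ordinary ClusterIP Service in its own namespace, reached through cluster DNS. As a sketch (myservice, mynamespace and myapp are placeholders matching the config above):

apiVersion: v1
kind: Service
metadata:
  name: myservice
  namespace: mynamespace
spec:
  type: ClusterIP
  selector:
    app: myapp        # placeholder label for the backend pods
  ports:
    - protocol: TCP
      port: 80        # targetPort defaults to the same value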

The Deployment then loads the above nginx configuration from the ConfigMap:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      # podAntiAffinity prevents two nginx pods from running on the same node
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - nginx
              topologyKey: kubernetes.io/hostname
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nginx-configs
              mountPath: /etc/nginx/conf.d
          livenessProbe:
            httpGet:
              path: /
              port: 80
      # Load the nginx configuration files from the ConfigMap
      volumes:
        - name: nginx-configs
          configMap:
            name: nginx-config

---

apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  selector:
    app: nginx
  type: NodePort
  ports:
    - protocol: "TCP"
      nodePort: 32111
      port: 80

This way you can still take advantage of Ingress features such as TLS/SSL termination (with Google-managed certificates or cert-manager), and you can also keep your more complex configuration inside nginx if you want.
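For example, Google-managed TLS on the single Ingress could look roughly like this (a sketch, assuming the GKE ManagedCertificate CRD is available in the cluster and using domain.com as a stand-in for your real domain):

apiVersion: networking.gke.io/v1   # older GKE versions serve v1beta1/v1beta2 instead
kind: ManagedCertificate
metadata:
  name: nginx-cert
spec:
  domains:
    - domain.com
    - www.domain.com
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    networking.gke.io/managed-certificates: nginx-cert   # attach the managed certificate
spec:
  backend:
    serviceName: nginx-svc
    servicePort: 80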

-- Prata
Source: StackOverflow