Different ingresses in same cluster being bound to different addresses

7/17/2019

I am trying to build a Kubernetes environment from scratch using Google's Deployment Manager and Kubernetes Engine. So far, the cluster is configured to host two apps. Each app is served by an exclusive service, which in turn receives traffic from an exclusive ingress. Both ingresses are created with the same Deployment Manager Jinja template:

- name: {{ NAME_PREFIX }}-ingress
  type: {{ CLUSTER_TYPE_BETA }}:{{ INGRESS_COLLECTION }}
  metadata:
    dependsOn:
    - {{ properties['cluster-type-v1beta1-extensions'] }}
  properties:
    apiVersion: extensions/v1beta1
    kind: Ingress
    namespace: {{ properties['namespace'] | default('default') }}
    metadata:
      name: {{ NAME_PREFIX }}
      labels:
        app: {{ env['name'] }}
        deployment: {{ env['deployment'] }}
    spec:
      rules:
      - host: {{ properties['host'] }}
        http:
          paths:
          - backend:
              serviceName: {{ NAME_PREFIX }}-svc
              servicePort: {{ properties['node-port'] }}

The environment deployment works fine. However, I was hoping that both ingresses would be bound to the same external address, which is not happening. How could I set up the template so that both ingresses share one address? More generally, is it considered bad practice in Kubernetes to spawn one ingress for each of the environment's host-based rules?

-- bsam
google-deployment-manager
google-kubernetes-engine
kubernetes

1 Answer

7/17/2019

Each ingress will create its own HTTP(S) load balancer. If you want a single IP, define a single ingress with multiple host rules, one for each service.
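As a sketch of that approach (hostnames, service names, and ports below are placeholders, not taken from the question), the two per-app ingresses could be merged into one Ingress that routes by host. On GKE, the optional `kubernetes.io/ingress.global-static-ip-name` annotation additionally pins the load balancer to a reserved static IP:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: shared-ingress
  annotations:
    # Optional: bind the HTTP(S) load balancer to a reserved
    # global static IP instead of an ephemeral one.
    kubernetes.io/ingress.global-static-ip-name: my-static-ip
spec:
  rules:
  # One host-based rule per app; all rules share a single
  # load balancer and therefore a single external address.
  - host: app-one.example.com
    http:
      paths:
      - backend:
          serviceName: app-one-svc
          servicePort: 80
  - host: app-two.example.com
    http:
      paths:
      - backend:
          serviceName: app-two-svc
          servicePort: 80
```

In the Deployment Manager template, this would mean iterating over the apps to emit one rule each inside a single Ingress resource, rather than instantiating the template once per app.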

-- Patrick W
Source: StackOverflow