Weird way of accessing Cloud Run on GKE services

6/18/2019

I am following this tutorial to go through a so-called quickstart on GCP's Cloud Run and experiment a bit with it.

Some delays and inconsistencies between the announced and the actual service availability aside, the scripted steps went well.

What I want to ask (I couldn't find any documentation or explanation about it) is why, in order to access the service, I need to pass curl a specific Host header, as indicated by the relevant tutorial:

curl -v -H "Host: hello.default.example.com" YOUR-IP

where YOUR-IP is the public IP of the Load Balancer created by the Istio-managed ingress gateway.
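
For reference, I look up that IP with something along these lines (the istio-ingressgateway service name and the istio-system namespace come from my install and may differ on yours):

kubectl get svc istio-ingressgateway --namespace istio-system \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'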

-- pkaramol
google-cloud-platform
google-cloud-run
istio
knative
kubernetes

3 Answers

6/18/2019

As mentioned in Jose Armesto's answer, it's simply because Cloud Run on GKE uses Knative, which uses Istio. The Istio ingress gateway receives all the traffic to all your Cloud Run services and proxies each request to the right place based on the registered hostnames of the services.

If you Map custom domains using Cloud Run and actually set up your domain's DNS records to point to the ingress gateway of your Cloud Run on GKE setup, you won't need the explicit header: you will have a real domain name that ends up in the Host header and is recognized by the gateway, so the traffic will flow to the right place.
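
To illustrate (hello.mydomain.com is just a placeholder), once DNS for a mapped domain resolves to the gateway's IP, a plain request works because curl derives the Host header from the URL itself:

curl -v http://hello.mydomain.com/
# equivalent to: curl -v -H "Host: hello.mydomain.com" YOUR-IP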

-- AhmetB - Google
Source: StackOverflow

6/18/2019

Most proxies that handle external traffic match requests based on the Host header. They use what's inside the Host header to decide which service to send the request to. Without the Host header, they wouldn't know where to send it.

Host-based routing is what enables virtual hosts on web servers. It's also used by application-layer services like load balancers and ingress controllers to achieve the same thing. One IP address, many hosts.

Host-based routing allows you to send requests for api.example.com and for web.example.com to the same endpoint with the certainty that each will be delivered to the correct back-end application.

That's typical in proxies/load balancers that are multi-tenant, meaning they handle traffic for totally different tenants/applications sitting behind the proxy.
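
As a quick illustration (the hostnames and the 203.0.113.10 address are placeholders), both requests below hit the same IP, and only the Host header decides which back-end application handles each of them:

curl -H "Host: api.example.com" http://203.0.113.10/
curl -H "Host: web.example.com" http://203.0.113.10/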

-- Jose Armesto
Source: StackOverflow

6/19/2019

All the answers given are more or less correct, but I would like to post a more concrete description of the situation I came across after some digging.

As pointed out by the other posters, in GKE-based Cloud Run, Istio manages routing. Therefore, by default (and unless there is a way to override that behavior), Istio will create:

  • an Istio ingress gateway handling your incoming traffic (see the command right after this list)

  • a virtual service with the routing rules for the specific container you spin up via gcloud run deploy ...
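
For the first item, the gateway object can be listed directly; on my cluster it shows up as knative-ingress-gateway (the same name referenced in the VirtualService further down):

➣ $ kubectl get gateway --all-namespaces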

Digging into the second item, I discovered this resource:

➣ $ kubectl get virtualservice --all-namespaces
NAMESPACE         NAME                                         AGE
knative-serving   route-eaee65aa-91c8-11e9-be08-42010a8000e2   17h

whose description, and the corresponding host-based routing rules, explain the need for passing the specific Host header:

➣ $ kubectl describe virtualservice route-eaee65aa-91c8-11e9-be08-42010a8000e2 --namespace knative-serving
Name:         route-eaee65aa-91c8-11e9-be08-42010a8000e2
Namespace:    knative-serving
Labels:       networking.internal.knative.dev/clusteringress=route-eaee65aa-91c8-11e9-be08-42010a8000e2
              serving.knative.dev/route=hello
              serving.knative.dev/routeNamespace=default
Annotations:  networking.knative.dev/ingress.class=istio.ingress.networking.knative.dev
API Version:  networking.istio.io/v1alpha3
Kind:         VirtualService
Metadata:
  Creation Timestamp:  2019-06-18T12:59:42Z
  Generation:          1
  Owner References:
    API Version:           networking.internal.knative.dev/v1alpha1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  ClusterIngress
    Name:                  route-eaee65aa-91c8-11e9-be08-42010a8000e2
    UID:                   f0a40244-91c8-11e9-be08-42010a8000e2
  Resource Version:        5416
  Self Link:               /apis/networking.istio.io/v1alpha3/namespaces/knative-serving/virtualservices/route-eaee65aa-91c8-11e9-be08-42010a8000e2
  UID:                     f0a51032-91c8-11e9-be08-42010a8000e2
Spec:
  Gateways:
    knative-ingress-gateway
    mesh
  Hosts:
    hello.default.example.com
    hello.default.svc.cluster.local
  Http:
    Append Headers:
      Knative - Serving - Namespace:  default
      Knative - Serving - Revision:   hello-8zgvn
    Match:
      Authority:
        Regex:  ^hello\.default(?::\d{1,5})?$
      Authority:
        Regex:  ^hello\.default\.example\.com(?::\d{1,5})?$
      Authority:
        Regex:  ^hello\.default\.svc(?::\d{1,5})?$
      Authority:
        Regex:  ^hello\.default\.svc\.cluster\.local(?::\d{1,5})?$
    Retries:
      Attempts:         3
      Per Try Timeout:  10m0s
    Route:
      Destination:
        Host:  activator-service.knative-serving.svc.cluster.local
        Port:
          Number:       80
      Weight:           100
    Timeout:            10m0s
    Websocket Upgrade:  true
Events:                 <none>
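
If I read the Match / Authority regexes above correctly, any Host value matching one of them should be accepted at the gateway, so a shorter form like the following ought to work just as well as the one from the tutorial (YOUR-IP being the same ingress IP as before):

curl -v -H "Host: hello.default" YOUR-IP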

What is more, in case you add a custom domain mapping, it turns out GCP takes care of the routing by creating an additional virtual service, this time in the default namespace:

➣ $  kubectl get virtualservice --all-namespaces
NAMESPACE         NAME                                         AGE
default           cloudrun.mydomain.com                        13m
knative-serving   route-23ad36f5-9326-11e9-b945-42010a800057   31m
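
For the record, the mapping above was created with something along these lines (cloudrun.mydomain.com and the hello service are obviously specific to my setup; at the time of writing the command sits under the beta group, and you may also need to point it at your cluster and its location):

➣ $ gcloud beta run domain-mappings create --service hello --domain cloudrun.mydomain.com
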
-- pkaramol
Source: StackOverflow