Unable to open Istio ingress-gateway for gRPC

9/23/2020

This question is about my inability to connect a gRPC client to a gRPC service hosted in Kubernetes (AWS EKS), with an Istio ingress gateway.

On the Kubernetes side: I have a container with a Go process listening on port 8081 for gRPC. The port is exposed at the container level. I define a Kubernetes service and expose 8081. I define an Istio gateway which selects istio: ingressgateway and opens port 8081 for gRPC. Finally, I define an Istio virtual service with a route for anything on port 8081.

On the client side: I have a Go client which can send gRPC requests to the service (a minimal sketch follows the list below).

  • It works fine when I kubectl port-forward -n mynamespace service/myservice 8081:8081 and call my client via client -url localhost:8081.
  • When I close the port forward and call with client -url [redacted]-[redacted].us-west-2.elb.amazonaws.com:8081, my client fails to connect. (That URL is the output of kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].hostname}' with :8081 appended.)
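
For context, here is a minimal sketch of what the client does. It dials in plaintext (no TLS), which matches the grpc-svc-plaintext port name and the GRPC gateway protocol below; the pb stub names are placeholders for my generated code.

package main

import (
    "context"
    "flag"
    "log"
    "time"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
)

func main() {
    url := flag.String("url", "localhost:8081", "host:port of the gRPC server")
    flag.Parse()

    // Plaintext dial: no TLS on the wire, matching the grpc-svc-plaintext
    // port name and the GRPC (not GRPCS) protocol on the Istio gateway.
    conn, err := grpc.Dial(*url, grpc.WithTransportCredentials(insecure.NewCredentials()))
    if err != nil {
        log.Fatalf("dial %s: %v", *url, err)
    }
    defer conn.Close()

    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()

    // The real client calls a generated stub here, e.g.
    //   client := pb.NewMyServiceClient(conn)
    //   _, err = client.DoSomething(ctx, &pb.Request{})
    _ = ctx
}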

Logs:

  • I looked at the istio-system/istio-ingressgateway service logs. I do not see an attempted connection.
  • I do see the Bookinfo connections I made earlier when going through the Istio Bookinfo tutorial. That tutorial worked: I was able to open a browser and see the Bookinfo product page, and the ingressgateway logs show "GET /productpage HTTP/1.1" 200. So the Istio ingress gateway works; I just don't know how to configure it for a new gRPC endpoint.

Istio's Ingress-Gateway

kubectl describe service -n istio-system istio-ingressgateway

outputs the following, which I suspect is the problem: port 8081 is not listed despite my efforts to open it. I'm puzzled by how many ports are opened by default; I didn't open them. (Comments on how to close the ports I don't use would be welcome, but they aren't the reason for this SO question.)

Name:                     istio-ingressgateway
Namespace:                istio-system
Labels:                   [redacted]
Annotations:              [redacted]
Selector:                 app=istio-ingressgateway,istio=ingressgateway
Type:                     LoadBalancer
IP:                       [redacted]
LoadBalancer Ingress:     [redacted]
Port:                     status-port  15021/TCP
TargetPort:               15021/TCP
NodePort:                 status-port  31125/TCP
Endpoints:                192.168.101.136:15021
Port:                     http2  80/TCP
TargetPort:               8080/TCP
NodePort:                 http2  30717/TCP
Endpoints:                192.168.101.136:8080
Port:                     https  443/TCP
TargetPort:               8443/TCP
NodePort:                 https  31317/TCP
Endpoints:                192.168.101.136:8443
Port:                     tcp  31400/TCP
TargetPort:               31400/TCP
NodePort:                 tcp  31102/TCP
Endpoints:                192.168.101.136:31400
Port:                     tls  15443/TCP
TargetPort:               15443/TCP
NodePort:                 tls  30206/TCP
Endpoints:                192.168.101.136:15443
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

So I think I did not properly open port 8081 for gRPC. What other logs or tests can I run to help identify where this is coming from?

Here is the relevant YAML:

Istio VirtualService, whose intent is to route anything on port 8081 to myservice:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myservice
  namespace: mynamespace
spec:
  hosts:
  - "*" 
  gateways:
  - myservice
  http:
  - match:
    - port: 8081
    route:
    - destination:
        host: myservice
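
Side note: since myservice exposes only one port, the destination doesn't need an explicit port. My understanding is that if the service ever exposed more than one, the route would have to pin it, along these lines:

    route:
    - destination:
        host: myservice
        port:
          number: 8081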

Istio Gateway, whose intent is to open port 8081 for gRPC:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: myservice
  namespace: mynamespace
spec:
  selector:
    istio: ingressgateway 
  servers:
    - name: myservice-plaintext
      port:
        number: 8081
        name: grpc-svc-plaintext
        protocol: GRPC
      hosts:
      - "*"

Kubernetes Service, showing port 8081 is exposed at the service level (confirmed with the port-forward test mentioned earlier):

apiVersion: v1
kind: Service
metadata:
  name: myservice
  namespace: mynamespace
  labels:
    app: myservice
spec:
  selector:
    app: myservice
  ports:
    - protocol: TCP
      port: 8081
      targetPort: 8081
      name: grpc-svc-plaintext

Kubernetes Deployment, showing port 8081 is exposed at the container level (confirmed with the port-forward test mentioned earlier):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myservice
  namespace: mynamespace
  labels:
    app: myservice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myservice
  template:
    metadata:
      labels:
        app: myservice
    spec:
      containers:
      - name: myservice
        image: [redacted]
        ports:
        - containerPort: 8081

Checking that DNS works on the client:

getent hosts [redacted]-[redacted].us-west-2.elb.amazonaws.com

outputs three IPs, which I assume is good:

[IP_1 redacted]  [redacted]-[redacted].us-west-2.elb.amazonaws.com
[IP_2 redacted]  [redacted]-[redacted].us-west-2.elb.amazonaws.com
[IP_3 redacted]  [redacted]-[redacted].us-west-2.elb.amazonaws.com
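
To probe past DNS, a couple of generic checks can separate "nothing is listening on that port" from a gRPC-level failure (grpcurl is a third-party tool, and its list call assumes the server enables reflection):

# Does anything accept TCP on that port at all?
nc -vz [redacted]-[redacted].us-west-2.elb.amazonaws.com 8081

# Can a generic gRPC client reach it? (plaintext; requires server reflection)
grpcurl -plaintext [redacted]-[redacted].us-west-2.elb.amazonaws.com:8081 list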

Checking Istio Ingressgateway's routes:

istioctl proxy-status istio-ingressgateway-[pod name]
istioctl proxy-config routes istio-ingressgateway-[pod name]

returns

Clusters Match
Listeners Match
Routes Match (RDS last loaded at Wed, 23 Sep 2020 13:59:41)

NOTE: This output only contains routes loaded via RDS.
NAME          DOMAINS     MATCH                  VIRTUAL SERVICE
http.8081     *           /*                     myservice.mynamespace
              *           /healthz/ready*        
              *           /stats/prometheus*

Port 8081 is routed to myservice.mynamespace, which seems good to me.
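
One more cross-check along the same lines: the route exists in Envoy, but the istio-ingressgateway Service also has to expose the port, and comparing the two surfaces the mismatch (same pod-name convention as above):

# Which ports has Envoy actually opened listeners on?
istioctl proxy-config listeners istio-ingressgateway-[pod name]

# Which ports does the LoadBalancer Service expose?
kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.spec.ports[*].port}'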

UPDATE 1: I am starting to understand that I can't open port 8081 using the default Istio ingress gateway. That service does not expose the port, and I was assuming creating a Gateway would update the service "under the hood", but that's not the case. The external ports I can pick from are 80, 443, 31400, 15443, and 15021, and I think my Gateway needs to rely on those alone. I've updated my Gateway and VirtualService to use port 80, and the client then connects to the server just fine.
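
For the record, here is a sketch of adding the port to the existing Service directly, as a quick fix; my understanding is that a manual patch like this can be reverted by the next istioctl install/upgrade:

kubectl -n istio-system patch svc istio-ingressgateway --type=json \
  -p='[{"op":"add","path":"/spec/ports/-","value":{"name":"grpc-svc-plaintext","port":8081,"protocol":"TCP","targetPort":8081}}]'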

That means I have to differentiate between multiple services not by port (I obviously can't route from the same port to two services), but by SNI, and I'm unclear how to do that in gRPC. I'm guessing I can add a Host:[hostname] header to the gRPC request. Unfortunately, if that's how routing works, it means headers need to be read on the gateway, and that mandates terminating TLS at the gateway when I was hoping to terminate at the pod.
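
For what it's worth, my current reading of the Istio docs is that SNI-based routing without terminating TLS at the gateway is possible with a TLS passthrough server, so TLS could still terminate at the pod. A sketch, where myservice.example.com is a placeholder hostname:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: myservice-passthrough
  namespace: mynamespace
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: tls-passthrough
      protocol: TLS
    tls:
      mode: PASSTHROUGH   # route on SNI; do not terminate TLS at the gateway
    hosts:
    - "myservice.example.com"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myservice-passthrough
  namespace: mynamespace
spec:
  hosts:
  - "myservice.example.com"
  gateways:
  - myservice-passthrough
  tls:
  - match:
    - port: 443
      sniHosts:
      - "myservice.example.com"
    route:
    - destination:
        host: myservice
        port:
          number: 8081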

-- mipnw
grpc
istio
kubernetes
kubernetes-ingress

2 Answers

6/14/2021

I added the port to the ingress gateway successfully, but I still couldn't get the client to connect to the server. For me too, port-forwarding works, but when I try to connect through the ingress I get the error below. Here the Istio ingressgateway is on GKE, so it's using the global HTTPS load balancer.

Jun 14, 2021 8:28:08 PM com.manning.mss.ch12.grpc.sample01.InventoryClient updateInventory
WARNING: RPC failed: Status{code=INTERNAL, description=http2 exception, cause=io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2Exception: First received frame was not SETTINGS. Hex dump for first 5 bytes: 485454502f
at io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2Exception.connectionError(Http2Exception.java:85)
at io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2ConnectionHandler$PrefaceDecoder.verifyFirstFrameIsSettings(Http2ConnectionHandler.java:350)
at io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2ConnectionHandler$PrefaceDecoder.decode(Http2ConnectionHandler.java:251)
at io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2ConnectionHandler.decode(Http2ConnectionHandler.java:450)
at io.grpc.netty.shaded.io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:502)
at io.grpc.netty.shaded.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:441)
at io.grpc.netty.shaded.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:278)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:337)
at io.grpc.netty.shaded.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1408)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
at io.grpc.netty.shaded.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:930)
at io.grpc.netty.shaded.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:677)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:612)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:529)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:491)
at io.grpc.netty.shaded.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:905)
at io.grpc.netty.shaded.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
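
Decoding the first five bytes of the hex dump hints at what's happening: 48 54 54 50 2f is ASCII for "HTTP/", so the client received what looks like an HTTP/1.x response where it expected an HTTP/2 SETTINGS frame. gRPC needs HTTP/2 end to end, and my understanding is the GCP HTTPS load balancer talks HTTP/1.1 to backends unless HTTP/2 is explicitly enabled on the backend service.

echo 485454502f | xxd -r -p   # prints: HTTP/
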
-- Krishna ps
Source: StackOverflow

9/24/2020

I am starting to understand that I can't open port 8081 using the default Istio ingress gateway. That service does not expose the port, and I was assuming creating a Gateway would update the service "under the hood", but that's not the case. The external ports I can pick from are 80, 443, 31400, 15443, and 15021, and I think my Gateway needs to rely on those alone. I've updated my Gateway and VirtualService to use port 80, and the client then connects to the server just fine.

I'm not sure how exactly you tried to add a custom port to the ingress gateway, but it's possible.

As far as I checked, it's possible to do in three ways; see the examples provided by @A_Suh, @Ryota and @peppered.
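
As one sketch of the overlay approach (assuming the default istio-ingressgateway installed via istioctl; note that depending on the Istio version you may need to re-list the default ports here, since this list can replace rather than merge them):

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        service:
          ports:
          # the new plaintext gRPC port, matching the Gateway resource
          - name: grpc-svc-plaintext
            port: 8081
            targetPort: 8081

applied with istioctl install -f and the file above.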

That means I have to differentiate between multiple services not by port (I obviously can't route from the same port to two services), but by SNI, and I'm unclear how to do that in gRPC. I'm guessing I can add a Host:hostname header to the gRPC request. Unfortunately, if that's how routing works, it means headers need to be read on the gateway, and that mandates terminating TLS at the gateway when I was hoping to terminate at the pod.

I see you have already created a new question here, so let's just move there.

-- Jakub
Source: StackOverflow