Unable to expose gRPC server with istio

1/21/2021

I am not good at English, so I apologize if I say something strange.

I am developing a gRPC server on GKE with Istio. The server works correctly when I call it from another pod inside the cluster using its DNS name. However, calls from outside the cluster always fail with "context deadline exceeded".

I created a Deployment named ms-user whose pods run my gRPC server on port 5000, along with the following resources in the "default" namespace (a simplified sketch of the Deployment itself follows these manifests):

apiVersion: v1
kind: Service
metadata:
  name: ms-user
spec:
  selector:
    app: ms-user
  ports:
  - name: grpc
    protocol: TCP
    port: 5000
    targetPort: 5000
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway-dev
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - '*'
    port:
      name: grpc
      number: 5000
      protocol: GRPC
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: ms-user-rule-dev
spec:
  host: ms-user
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: vs-dev
spec:
  hosts:
  - "*"
  gateways:
  - gateway-dev
  http: # gRPC traffic is matched and routed with HTTP route rules
  - match:
    - port: 5000
    route:
    - destination:
        host: ms-user
        port:
          number: 5000
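
For reference, the ms-user Deployment itself looks roughly like this (a simplified sketch; the image name and replica count are placeholders, not my real values):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ms-user
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ms-user
  template:
    metadata:
      labels:
        app: ms-user
    spec:
      containers:
      - name: ms-user
        image: gcr.io/MY_PROJECT/ms-user:latest  # placeholder image
        ports:
        - containerPort: 5000  # the gRPC server listens on this port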

And I deployed the following manifests to the "istio-system" namespace:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gke-ingress
  namespace: istio-system
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "istio-endpoint-dev"
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: istio-ingressgateway
          servicePort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
  ...

spec:
  ...
  ports:
  ...
  - name: grpc
    nodePort: 30001
    port: 5000
    protocol: TCP
    targetPort: 5000
  
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
    release: istio
  sessionAffinity: None
  type: LoadBalancer
...

Then I tested it from outside the cluster with grpcurl:

$ grpcurl -plaintext -proto=PATH_TO_PROTO MY_gke_ingress_STATIC_IP:5000 foo.FooService.Foo
Failed to dial target host "IPADRESS:5000": context deadline exceeded
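
For reference, the istio-ingressgateway Service also has its own LoadBalancer IP (34.84.8.68 in the describe output at the end), so I assume it could be called directly like this, bypassing the GKE Ingress (shown only to illustrate the two external entry points):

$ grpcurl -plaintext -proto=PATH_TO_PROTO 34.84.8.68:5000 foo.FooService.Foo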

However, the same call works from a pod inside the cluster:

$ kubectl exec -it foo-pod -- bash
> grpcurl -plaintext -proto=PATH_TO_PROTO ms-user:5000 foo.FooService.Foo
{"result": "OK"}

How can I fix this? Thank you.


P.S.

$ kubectl describe -n istio-system svc istio-ingressgateway
Name:                     istio-ingressgateway
Namespace:                istio-system
Labels:                   addonmanager.kubernetes.io/mode=Reconcile
                          app=istio-ingressgateway
                          chart=gateways
                          heritage=Tiller
                          istio=ingressgateway
                          k8s-app=istio
                          kubernetes.io/cluster-service=true
                          release=istio
Annotations:              <none>
Selector:                 app=istio-ingressgateway,istio=ingressgateway,release=istio
Type:                     LoadBalancer
IP Families:              <none>
IP:                       10.99.253.103
IPs:                      <none>
IP:                       34.84.8.68
LoadBalancer Ingress:     34.84.8.68
Port:                     grpc  5000/TCP
TargetPort:               5000/TCP
NodePort:                 grpc  30002/TCP
Endpoints:                10.96.2.46:5000
Port:                     status-port  15020/TCP
TargetPort:               15020/TCP
NodePort:                 status-port  31198/TCP
Endpoints:                10.96.2.46:15020
Port:                     http2  80/TCP
TargetPort:               80/TCP
NodePort:                 http2  31829/TCP
Endpoints:                10.96.2.46:80
Port:                     https  443/TCP
TargetPort:               443/TCP
NodePort:                 https  31765/TCP
Endpoints:                10.96.2.46:443
Port:                     tcp  31400/TCP
TargetPort:               31400/TCP
NodePort:                 tcp  31457/TCP
Endpoints:                10.96.2.46:31400
Port:                     https-kiali  15029/TCP
TargetPort:               15029/TCP
NodePort:                 https-kiali  32025/TCP
Endpoints:                10.96.2.46:15029
Port:                     https-prometheus  15030/TCP
TargetPort:               15030/TCP
NodePort:                 https-prometheus  30814/TCP
Endpoints:                10.96.2.46:15030
Port:                     https-grafana  15031/TCP
TargetPort:               15031/TCP
NodePort:                 https-grafana  31953/TCP
Endpoints:                10.96.2.46:15031
Port:                     https-tracing  15032/TCP
TargetPort:               15032/TCP
NodePort:                 https-tracing  31170/TCP
Endpoints:                10.96.2.46:15032
Port:                     tls  15443/TCP
TargetPort:               15443/TCP
NodePort:                 tls  31927/TCP
Endpoints:                10.96.2.46:15443
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
-- Okada Yuya
gke-networking
google-kubernetes-engine
grpc
istio
kubernetes

0 Answers