Istio Ingress Controller Virtual Service returning 503

5/7/2019

I have created an AKS cluster with the below versions.

Kubernetes version: 1.12.6
Istio version: 1.1.4
Cloud Provider: Azure

I have successfully installed Istio as my ingress gateway with an external IP address, and I have enabled istio-injection for the namespace where my service is deployed. I can see that sidecar injection is happening successfully:

NAME                                      READY   STATUS    RESTARTS   AGE
club-finder-deployment-7dcf4479f7-8jlpc   2/2     Running   0          11h
club-finder-deployment-7dcf4479f7-jzfv7   2/2     Running   0          11h
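
For reference, injection is enabled by labeling the namespace; a minimal sketch, assuming the namespace name club-finder-service-dev taken from the service FQDN used below:

kubectl label namespace club-finder-service-dev istio-injection=enabled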

My tls-gateway:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: tls-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - "*"

Note: I am using self-signed certs for testing.
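
In Istio 1.1, those serverCertificate/privateKey paths are backed by a secret named istio-ingressgateway-certs mounted into the ingress gateway pod. A minimal sketch of creating such a self-signed pair (the CN value is just a placeholder):

# Generate a self-signed certificate and key
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout tls.key -out tls.crt -subj "/CN=example.com"

# Create the secret that is mounted at /etc/istio/ingressgateway-certs
kubectl create -n istio-system secret tls istio-ingressgateway-certs \
  --key tls.key --cert tls.crt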

I have applied the below VirtualService:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: club-finder-service-rules
  namespace: istio-system
spec:
  # https://istio.io/docs/reference/config/istio.networking.v1alpha3/#VirtualService
  gateways: # The default `mesh` value used when this is left blank doesn't seem to propagate the rule properly. For now, always use an explicit list of fully-qualified gateway names
    - tls-gateway
  hosts:
    - "*" # APIM Manager URL
  http:
  - match:
    - uri:
        prefix: /dev/clubfinder/service/clubs
    rewrite:
      uri: /v1/clubfinder/clubs/
    route:
    - destination:
        host: club-finder.club-finder-service-dev.svc.cluster.local
        port:
          number: 8080
  - match:
    - uri:
        prefix: /dev/clubfinder/service/status
    rewrite:
      uri: /status
    route:
    - destination:
        host: club-finder.club-finder-service-dev.svc.cluster.local
        port:
          number: 8080

Now, when I try to test my service using the ingress external IP, like so:

curl -kv https://<external-ip-of-ingress>/dev/clubfinder/service/status

I get the below error:

* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x7fe5e800d600)
> GET /dev/clubfinder/service/status HTTP/2
> Host: x.x.x.x --> IP redacted intentionally
> User-Agent: curl/7.54.0
> Accept: */*
> 
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
< HTTP/2 503 
< date: Tue, 07 May 2019 05:15:01 GMT
< server: istio-envoy
< 
* Connection #0 to host x.x.x.x left intact

Can someone please point out what is wrong here?

-- Shantanoo K
azure
istio
kubernetes
kubernetes-ingress

2 Answers

5/8/2019

I was incorrectly defining my VirtualService YAML. Instead of using the Kubernetes Service port (80), I was specifying 8080, which is the port my application listens on inside the container. The below YAML worked for me:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: club-finder-service-rules
  namespace: istio-system
spec:
  # https://istio.io/docs/reference/config/istio.networking.v1alpha3/#VirtualService
  gateways: # The default `mesh` value used when this is left blank doesn't seem to propagate the rule properly. For now, always use an explicit list of fully-qualified gateway names
    - tls-gateway
  hosts:
    - "*" # APIM Manager URL
  http:
  - match:
    - uri:
        prefix: /dev/clubfinder/service/clubs
    rewrite:
      uri: /v1/clubfinder/clubs/
    route:
    - destination:
        host: club-finder.club-finder-service-dev.svc.cluster.local
        port:
          number: 80
  - match:
    - uri:
        prefix: /dev/clubfinder/service/status
    rewrite:
      uri: /status
    route:
    - destination:
        host: club-finder.club-finder-service-dev.svc.cluster.local
        port:
          number: 80
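
For context, the destination port in a VirtualService must match the Kubernetes Service port, not the container port the application listens on. A minimal sketch of what the backing Service presumably looks like (the selector label is an assumption):

apiVersion: v1
kind: Service
metadata:
  name: club-finder
  namespace: club-finder-service-dev
spec:
  selector:
    app: club-finder    # assumed pod label
  ports:
  - name: http
    port: 80            # the port the VirtualService destination references
    targetPort: 8080    # the application's actual listening port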
-- Shantanoo K
Source: StackOverflow

5/15/2019

For future reference, if you have an issue like this, there are basically two main troubleshooting steps:

1) Check that the Envoy proxies are up and that their configs are synchronized with Pilot:

istioctl proxy-status
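
You can also pass a specific pod name to diff that proxy's applied config against what Pilot has, e.g.:

istioctl proxy-status club-finder-deployment-7dcf4479f7-8jlpc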

2) Get Envoy's listeners for your pod and check whether anything is listening on the port your service is being routed to:

istioctl proxy-config listener club-finder-deployment-7dcf4479f7-8jlpc

So, in your case, at step #2 you would see that there was no listener for port 8080 (the Service exposes the application on port 80), which points to the root cause.
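
To confirm the mismatch from the CLI, you could also dump the clusters this pod's Envoy knows about and grep for the service, along these lines:

istioctl proxy-config cluster club-finder-deployment-7dcf4479f7-8jlpc | grep club-finder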

Also, if you take a look at the Envoy access logs, you will probably see errors with the UF (upstream connection failure) or UH (no healthy upstream) response flags. The Envoy access-log documentation has the full list of response flags.
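
As a sketch, the ingress gateway's access logs can be pulled via the pod label used in the Gateway selector above:

kubectl logs -n istio-system -l istio=ingressgateway --tail=50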

For deeper Envoy debugging, refer to this handbook.

-- A_Suh
Source: StackOverflow