I used to expose my service through a reverse proxy with Type: LoadBalancer, and everything worked fine without Istio. However, errors started happening after I applied Istio to my cluster. I am now trying to expose my services in Kubernetes with an Istio ingress, but I think I'm misunderstanding something about routing services with Istio.
I have 2 deployments in the same namespace (see picture below):
1: Application (bus-id)
2: Reverse proxy of the application (bus-proxy), which translates HTTP to gRPC
https://drive.google.com/file/d/1tby9_taJb9WMHi0ssO9Os7MQAWRMga6k/view?usp=sharing
Version:
Kubernetes version (AKS with RBAC enabled):
Client Version: v1.15.0
Server Version: v1.12.8
Istio version: 1.1.3 (AKS said that they tested on 1.1.3)
Helm:
Client: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.1", GitCommit:"5270352a09c7e8b6e8c9593002a73535276507c0", GitTreeState:"clean"}
I tried the BookInfo example from Istio (https://istio.io/docs/examples/bookinfo/), and it worked.
But when I tried the Voting example for AKS (https://docs.microsoft.com/en-us/azure/aks/istio-scenario-routing), I couldn't access the example through the external load balancer's IP; it returned a timeout.
Deployment files:
1. bus-id.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: bus-id
  namespace: smart-id
  labels:
    k8s-app: bus-id
spec:
  selector:
    matchLabels:
      k8s-app: bus-id
  template:
    metadata:
      name: bus-id
      labels:
        k8s-app: bus-id
    spec:
      containers:
      - name: bus-id
        image: mydockerhub/mydockerhub:bus-id
        ports:
        - containerPort: 50001
        env:
        - name: APP_NAME
          value: bus-id
---
apiVersion: v1
kind: Service
metadata:
  name: bus-id
  namespace: smart-id
  labels:
    service: bus-id
spec:
  ports:
  - name: http
    port: 50001
    targetPort: 50001
    protocol: TCP
  selector:
    k8s-app: bus-id
2. bus-proxy.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    k8s-app: bus-proxy
  name: bus-proxy
  namespace: smart-id
spec:
  selector:
    matchLabels:
      k8s-app: bus-proxy
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: bus-proxy
    spec:
      imagePullSecrets:
      - name: duynd
      containers:
      - image: mydockerhub/mydockerhub:bus-proxy
        name: bus-proxy
        ports:
        - containerPort: 40001
          name: http
        env:
        - name: APP_NAME
          value: bus-proxy
---
apiVersion: v1
kind: Service
metadata:
  name: bus-proxy
  namespace: smart-id
  labels:
    service: bus-proxy
spec:
  ports:
  - port: 8080
    targetPort: 40001
    protocol: TCP
  selector:
    k8s-app: bus-proxy
3. ingress.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: smartid-gateway
  namespace: smart-id
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: smartid
  namespace: smart-id
spec:
  hosts:
  - "*"
  gateways:
  - smart-id/smartid-gateway
  http:
  - match:
    - uri:
        prefix: /api
    route:
    - destination:
        host: bus-proxy.smart-id.svc.cluster.local
        port:
          number: 8080
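To rule out the gateway itself, it helps to hit the route directly through the ingress gateway's external address. Assuming Istio was installed into the default istio-system namespace (which the AKS guide does), something like this should reach the VirtualService above:

```shell
# Find the external IP and HTTP port of the Istio ingress gateway
# (assumes the default istio-system installation).
INGRESS_IP=$(kubectl -n istio-system get service istio-ingressgateway \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway \
  -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')

# Test the /api prefix route defined in the VirtualService.
curl -v -X POST "http://$INGRESS_IP:$INGRESS_PORT/api/my-function"
```

If this returns a 500 rather than a timeout, the gateway and VirtualService are working and the failure is further downstream, between bus-proxy and bus-id.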
I expected it to work with ingress-ip:ingress-port/api/my-function (method POST). However, it returns error 500. bus-proxy's pod also prints a log, so I think the request reached bus-proxy successfully but could not go through to bus-id.
Update: my problem wasn't in the deployments. The problem was the connection between the two services inside the mesh: they got stuck passing metadata. Check the metadata whitelist if you're using gRPC.
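For reference, if the HTTP-to-gRPC proxy is built on grpc-gateway (an assumption, since the question doesn't name the proxy framework), incoming HTTP headers are only forwarded to the backend as gRPC metadata when the gateway's header matcher allows them; a common convention is the Grpc-Metadata- prefix:

```shell
# grpc-gateway forwards HTTP headers prefixed with "Grpc-Metadata-" to the
# backend as gRPC metadata; other custom headers are dropped unless the
# gateway's header matcher (whitelist) is customized.
# "ingress-ip" and "my-function" are placeholders from the question.
curl -X POST "http://ingress-ip/api/my-function" \
  -H "Grpc-Metadata-trace-id: abc123"
```

If the backend expects metadata that never passes the whitelist, the call between bus-proxy and bus-id fails even though routing is correct.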
First of all, if you run all your applications in AKS with Istio, I would suggest installing Istio by following the steps that AKS provides in Install and use Istio in Azure Kubernetes Service (AKS).
Now, take a look at the example AKS provides here; there are a couple of things you need to know:
Istio has its own proxy (the sidecar), so you need to decide whether to use it, your own reverse proxy, or both; if both, make sure your setup actually supports running the two proxies together.
And if you use Istio's proxy, you also need to enable istio-injection
for your application's namespace, just like in the example:
kubectl label namespace voting istio-injection=enabled
This label instructs Istio to automatically inject the istio-proxy sidecar into all pods in this namespace. You should also make sure the VirtualService in your ingress.yaml references the right gateway.
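A quick way to confirm the label took effect and sidecars are actually being injected (namespace taken from the question):

```shell
# Show which namespaces have sidecar injection enabled.
kubectl get namespace -L istio-injection

# Each pod in the namespace should report 2/2 containers:
# the application container plus the istio-proxy sidecar.
kubectl -n smart-id get pods
```

Note that pods created before the label was applied are not retroactively injected; you need to delete them (or roll the deployments) so they are recreated with the sidecar.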