How does Istio support IP-based routing between pods in the same Service (or ReplicaSet, to be more specific)?
We would like to deploy a Tomcat application with replicas > 1 within an Istio mesh. The app runs Infinispan, which uses JGroups to sort out communication and clustering. JGroups needs to identify its cluster members, and for that purpose there is KUBE_PING (the Kubernetes discovery protocol for JGroups). It consults the Kubernetes API with a lookup comparable to kubectl get pods. The cluster members can be both pods in other services and pods within the same Service/Deployment.
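As an aside, for that lookup to succeed, the pod's service account needs permission to list pods. A minimal RBAC sketch (the role name is hypothetical; a matching RoleBinding for the pod's service account is also needed):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: my-istio-namespace
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]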
Despite our issue being driven by rather specific needs, the topic is generic: how do we enable pods to communicate with each other within a ReplicaSet?
Example: as a showcase we deploy the demo application https://github.com/jgroups-extras/jgroups-kubernetes. The relevant stuff is:
apiVersion: v1
items:
- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    name: ispn-perf-test
    namespace: my-non-istio-namespace
  spec:
    replicas: 3
    < -- edited for brevity -- >
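For context, KUBE_PING typically learns which namespace to query via the downward API, along these lines (an illustrative snippet; the demo's actual manifest may differ):

        env:
        - name: KUBERNETES_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace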
Running without Istio, the three pods will find each other and form the cluster. We then deploy the same with Istio in my-istio-namespace, adding a basic Service definition:
kind: Service
apiVersion: v1
metadata:
  name: ispn-perf-test-service
  namespace: my-istio-namespace
spec:
  selector:
    run: ispn-perf-test
  ports:
  - protocol: TCP
    port: 7800
    targetPort: 7800
    name: "one"
  - protocol: TCP
    port: 7900
    targetPort: 7900
    name: "two"
  - protocol: TCP
    port: 9000
    targetPort: 9000
    name: "three"
Note that the output below is wide - you might need to scroll right to see the IPs.
kubectl get pods -n my-istio-namespace -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP           NODE
ispn-perf-test-558666c5c6-g9jb5   2/2     Running   0          1d    10.44.4.63   gke-main-pool-4cpu-15gb-98b104f4-v9bl
ispn-perf-test-558666c5c6-lbvqf   2/2     Running   0          1d    10.44.4.64   gke-main-pool-4cpu-15gb-98b104f4-v9bl
ispn-perf-test-558666c5c6-lhrpb   2/2     Running   0          1d    10.44.3.22   gke-main-pool-4cpu-15gb-98b104f4-x8ln
kubectl get service ispn-perf-test-service -n my-istio-namespace
NAME                     TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
ispn-perf-test-service   ClusterIP   10.41.13.74   <none>        7800/TCP,7900/TCP,9000/TCP   1d
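One can cross-check that the Service actually selects all three pods by listing its endpoints; the pod IPs above should show up there:

kubectl get endpoints ispn-perf-test-service -n my-istio-namespace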
Guided by https://istio.io/help/ops/traffic-management/proxy-cmd/#deep-dive-into-envoy-configuration, let's peek into the resulting Envoy conf of one of the pods:
istioctl proxy-config listeners ispn-perf-test-558666c5c6-g9jb5 -n my-istio-namespace
ADDRESS       PORT   TYPE
10.44.4.63    7900   TCP
10.44.4.63    7800   TCP
10.44.4.63    9000   TCP
10.41.13.74   7900   TCP
10.41.13.74   9000   TCP
10.41.13.74   7800   TCP
< -- edited for brevity -- >
The Istio doc describes the listeners above as
Receives outbound non-HTTP traffic for relevant IP:PORT pair from listener 0.0.0.0_15001

and this all makes sense. The pod ispn-perf-test-558666c5c6-g9jb5 can reach itself on 10.44.4.63 and the service via 10.41.13.74. But... what if the pod sends packets to 10.44.4.64 or 10.44.3.22? Those IPs are not present among the listeners, so as far as I can tell the two "sibling" pods are unreachable for ispn-perf-test-558666c5c6-g9jb5.
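One way to test this directly would be to exec into the application container and probe a sibling pod's IP on a JGroups port (the container name and the presence of nc in the image are assumptions on my part):

kubectl exec ispn-perf-test-558666c5c6-g9jb5 -n my-istio-namespace -c ispn-perf-test -- nc -zv -w 2 10.44.4.64 7800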
Can Istio support this today, and if so, how?
You are right that HTTP routing only supports local access or remote access by service name or service VIP.
That said, for your particular example above, where the service ports are named "one", "two", and "three", the routing will be plain TCP as described here. Therefore, your example should work: the pod ispn-perf-test-558666c5c6-g9jb5 can reach itself on 10.44.4.63 and the other pods at 10.44.4.64 and 10.44.3.22.
If you rename the ports to "http-one", "http-two", and "http-three" then HTTP routing will kick in and the RDS config will restrict the remote calls to ones using recognized service domains.
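Concretely, the rename only touches the name field of each port in the Service spec, e.g.:

  ports:
  - protocol: TCP
    port: 7800
    targetPort: 7800
    name: "http-one"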
To see the difference in the RDS config, look at the output from the following command when the port is named "one", and again when it is changed to "http-one".
istioctl proxy-config routes ispn-perf-test-558666c5c6-g9jb5 -n my-istio-namespace --name 7800 -o json
With the port named "one" it will return no routes, so TCP routing will apply, but in the "http-one" case, the routes will be restricted.
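For illustration, in the "http-one" case the returned route config contains virtual hosts whose domains are limited to recognized service names and VIPs, roughly of this shape (abbreviated; the concrete values are my assumption based on your service):

{
    "name": "7800",
    "virtualHosts": [
        {
            "name": "ispn-perf-test-service.my-istio-namespace.svc.cluster.local:7800",
            "domains": [
                "ispn-perf-test-service.my-istio-namespace.svc.cluster.local:7800",
                "10.41.13.74:7800"
            ]
        }
    ]
}

Pod IPs such as 10.44.4.64 do not appear in any domains list, which is why direct pod-to-pod HTTP calls get rejected.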
I don't know if there is a way to add additional remote pod IP addresses to the RDS domains in the HTTP case. I would suggest opening an Istio issue, to see if it's possible.