Istio not interpreting a regular Service correctly, thus throwing a port error

3/6/2020

I have an application with a StatefulSet and its Service objects. Once I introduce Istio, it doesn't interpret the Service normally. Here is what I mean:

regular service.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
  name: svc-example
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 8443
  type: ClusterIP

But what Istio expects is something like

apiVersion: v1
kind: Gateway 
metadata:
  labels:
  name: svc-example
spec:
  ports:
  - name: https
    number: 443  # <-- notice the difference here
    protocol: TCP
    targetPort: 8443
  type: ClusterIP

The actual Service that is not working is this:

➜  gluu git:(istio-int) ✗ kubectl get svc opendj -o yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: opendj
    app.kubernetes.io/instance: test
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/version: 4.1.0_01
    helm.sh/chart: opendj-1.0.1
  name: opendj
  namespace: default
spec:
  clusterIP: None
  ports:
  - name: tcp-admin
    port: 4444
    protocol: TCP
    targetPort: 4444
  - name: tcp-ldap
    port: 1389
    protocol: TCP
    targetPort: 1389
  - name: tcp-ldaps
    port: 1636
    protocol: TCP
    targetPort: 1636
  - name: tcp-repl
    port: 8989
    protocol: TCP
    targetPort: 8989
  selector:
    app: opendj
  type: ClusterIP

The error I am getting from the container:

INFO - entrypoint - 2020-03-08 17:53:53,234 - Installing OpenDJ.
WARNING - entrypoint - 2020-03-08 17:53:57,640 - Exception in thread "main" java.lang.IllegalArgumentException: Invalid network port provided: 0 is not included in the [1, 65535] range.

And the logs from istio-proxy:

[Envoy (Epoch 0)] [2020-03-08 17:53:38.075][13][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:54] Unable to establish new stream
2020-03-08T17:53:39.036239Z     info    Envoy proxy is NOT ready: server is not live, current state is: INITIALIZING
2020-03-08T17:53:40.933659Z     info    Envoy proxy is ready
[2020-03-08T17:53:48.665Z] "- - -" 0 - "-" "-" 2921 1119187 40 - "-" "-" "-" "-" "192.168.64.17:8443" outbound|443||kubernetes.default.svc.cluster.local 172.17.0.18:47956 10.96.0.1:443 172.17.0.18:47244 - -
[2020-03-08T17:53:48.613Z] "- - -" 0 - "-" "-" 2927 9047 93 - "-" "-" "-" "-" "192.168.64.17:8443" outbound|443||kubernetes.default.svc.cluster.local 172.17.0.18:47952 10.96.0.1:443 172.17.0.18:47240 - -
[2020-03-08T17:54:11.469Z] "- - -" 0 UF,URX "-" "-" 0 0 0 - "-" "-" "-" "-" "127.0.0.1:4444" inbound|4444|tcp-admin|opendj.default.svc.cluster.local - 172.17.0.18:4444 172.17.0.18:54068 - -
[2020-03-08T17:54:15.997Z] "- - -" 0 UF,URX "-" "-" 0 0 0 - "-" "-" "-" "-" "127.0.0.1:4444" inbound|4444|tcp-admin|opendj.default.svc.cluster.local - 172.17.0.18:4444 172.17.0.18:54146 - -
[2020-03-08T17:54:19.762Z] "- - -" 0 UF,URX "-" "-" 0 0 0 - "-" "-" "-" "-" "127.0.0.1:4444" inbound|4444|tcp-admin|opendj.default.svc.cluster.local - 172.17.0.18:4444 172.17.0.18:54198 - -
[2020-03-08T17:54:23.983Z] "- - -" 0 UF,URX "-" "-" 0 0 0 - "-" "-" "-" "-" "127.0.0.1:4444" inbound|4444|tcp-admin|opendj.default.svc.cluster.local - 172.17.0.18:4444 172.17.0.18:54262 - -
[2020-03-08T17:54:28.039Z] "- - -" 0 UF,URX "-" "-" 0 0 0 - "-" "-" "-" "-" "127.0.0.1:4444" inbound|4444|tcp-admin|opendj.default.svc.cluster.local - 172.17.0.18:4444 172.17.0.18:54336 - -
[2020-03-08T17:54:32.005Z] "- - -" 0 UF,URX "-" "-" 0 0 0 - "-" "-" "-" "-" "127.0.0.1:4444" inbound|4444|tcp-admin|opendj.default.svc.cluster.local - 172.17.0.18:4444 172.17.0.18:54396 - -

While debugging, I noticed the port number error comes up because the config uses

port:
   port: 1234

Instead of

port:
   number: 1234
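
For reference, the `number` field belongs to the Gateway resource's schema, which nests ports under `servers`. A rough sketch of what a Gateway for this setup might look like (the resource name, selector, and hosts below are assumptions, not taken from the question):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: opendj-gateway        # hypothetical name
spec:
  selector:
    istio: ingressgateway     # assumes Istio's default ingress gateway
  servers:
  - port:
      number: 443             # Gateway uses `number`, not `port`
      name: https
      protocol: TLS
    hosts:
    - "*"
```

So `port.port` is the k8s Service schema and `port.number` is the Istio Gateway schema; the two are not interchangeable.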

I don't understand why this is happening to only one Service while all the others are processed okay. Even if I create a Gateway and a VirtualService and leave that Service as is, it still doesn't work.

A follow-up question: if I use a Gateway and a VirtualService, would I still need the regular k8s Service?

Any leads please.

-- Shammir
istio
kubernetes

1 Answer

3/10/2020

This is because Istio looks for additional information in K8s Service objects.

For example, the port name is used by Istio:

spec:
  clusterIP: None
  ports:
  - name: tcp-admin

The port `number`, on the other hand, is used by the Gateway. When Istio requires that information, it looks for it in the endpoint definition, which in this case happens to be a headless K8s Service object that points at the pods matching its labels. When this information is not needed, a default value is assumed, which may happen to match in other cases.
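
Note also that Istio's protocol selection relies on the Service port's `name` following the `<protocol>[-<suffix>]` convention. A sketch of the difference (the `ldap-admin` name below is a made-up counter-example):

```yaml
ports:
- name: tcp-admin     # "tcp-" prefix: Istio treats this as raw TCP
  port: 4444
  targetPort: 4444
- name: ldap-admin    # no recognized protocol prefix: Istio cannot
  port: 4444          # infer the protocol from the name
  targetPort: 4444
```

The `tcp-*` names in the opendj Service above already follow this convention, so the naming itself is not the problem here, but it is part of what Istio reads from the Service.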

Looking at the source code on the Istio GitHub page, the combination of port number (s.Port.Number) and port name is used to distinguish individual servers for Gateway RDS in different cases.


The K8s Service is still needed to provide a working endpoint for the VirtualService. The Service should be designed with the attributes Istio expects in mind.
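
To answer the follow-up question concretely: a VirtualService routes *to* the regular Service, so the Service cannot be removed. As a rough sketch (the resource name and the choice of port are assumptions):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: opendj-vs             # hypothetical name
spec:
  hosts:
  - opendj.default.svc.cluster.local
  tcp:
  - match:
    - port: 1636
    route:
    - destination:
        host: opendj.default.svc.cluster.local  # the regular K8s Service
        port:
          number: 1636
```

Without the Service, the `destination.host` has nothing to resolve to.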

Hope it helps.

-- Piotr Malec
Source: StackOverflow