I have a container with IBM MQ (Docker image ibmcom/mq:9.2.2.0-r1) exposing two ports (9443 - admin, 1414 - application).
All the required setup in OpenShift is done (Pod, Service, Routes). There are two Routes, one for each port, pointing to the corresponding ports (external ports are the defaults: http=80, https=443).
The admin console is accessible through the first Route, so MQ is up and running.
I tried to connect as a client (JMS 2.0, com.ibm.mq.allclient:9.2.2.0) using the standard approach:
import com.ibm.msg.client.jms.JmsFactoryFactory;
import com.ibm.msg.client.wmq.WMQConstants;

var fctFactory = JmsFactoryFactory.getInstance(WMQConstants.WMQ_PROVIDER);
var conFactory = fctFactory.createConnectionFactory();
// ... other props
conFactory.setObjectProperty(WMQConstants.WMQ_HOST_NAME, "route-app.my.domain");
conFactory.setObjectProperty(WMQConstants.WMQ_PORT, 443);
and failed to connect. I also tried redefining the Route as plain HTTP and using port 80, again without success.
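For reference, the factory setup with the elided properties spelled out looks roughly like this (the channel and queue manager names are placeholders, not my real values):

import com.ibm.msg.client.jms.JmsConnectionFactory;
import com.ibm.msg.client.jms.JmsFactoryFactory;
import com.ibm.msg.client.wmq.WMQConstants;

JmsFactoryFactory ff = JmsFactoryFactory.getInstance(WMQConstants.WMQ_PROVIDER);
JmsConnectionFactory cf = ff.createConnectionFactory();
cf.setStringProperty(WMQConstants.WMQ_HOST_NAME, "route-app.my.domain");
cf.setIntProperty(WMQConstants.WMQ_PORT, 443);
cf.setIntProperty(WMQConstants.WMQ_CONNECTION_MODE, WMQConstants.WMQ_CM_CLIENT);
cf.setStringProperty(WMQConstants.WMQ_CHANNEL, "DEV.APP.SVRCONN");   // placeholder channel
cf.setStringProperty(WMQConstants.WMQ_QUEUE_MANAGER, "QM1");         // placeholder queue manager
cf.setStringProperty(WMQConstants.WMQ_SSL_CIPHER_SUITE, "TLS_RSA_WITH_AES_256_CBC_SHA256"); // TLS, since the route is passthrough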
If it helps, let's assume we use the latest version of MQ Explorer as a client.
Each time the same connection error appears:
...
Caused by: com.ibm.mq.MQException: JMSCMQ0001:
IBM MQ call failed with compcode '2' ('MQCC_FAILED') reason '2009' ('MQRC_CONNECTION_BROKEN').
...
Caused by: com.ibm.mq.jmqi.JmqiException:
CC=2;RC=2009;AMQ9204: Connection to host 'route-app.my.domain(443)' rejected.
[1=com.ibm.mq.jmqi.JmqiException[CC=2;RC=2009;AMQ9208:
Error on receive from host 'route-app.my.domain/10.227.248.2:443 (route-app.my.domain)'.
[1=-1,2=ffffffff,3=route-app.my.domain/10.227.248.2:443 (route-app.my.domain),4=TCP]],
3=route-app.my.domain(443),5=RemoteConnection.receiveTSH]
...
Caused by: com.ibm.mq.jmqi.JmqiException: CC=2;RC=2009;AMQ9208:
Error on receive from host 'route-app.my.domain/10.227.248.2:443
Maybe this article could give some hints about reason code 2009, but I'm still not sure what exactly breaks the connection on the OpenShift side.
Previously, I always connected to IBM MQ by specifying a port explicitly, but the situation here is a bit different. How do I connect to IBM MQ in an OpenShift cluster through TCP?
Configurations in OpenShift are as follows:
kind: Pod
apiVersion: v1
metadata:
  name: ibm-mq
  labels:
    app: ibm-mq
spec:
  containers:
    - resources:
        limits:
          cpu: '1'
          memory: 600Mi
        requests:
          cpu: '1'
          memory: 600Mi
      name: ibm-mq
      ports:
        - containerPort: 1414
          protocol: TCP
        - containerPort: 9443
          protocol: TCP
      image: 'nexus-ci/docker-lib/ibm_mq:latest'
---
kind: Service
apiVersion: v1
metadata:
  name: ibm-mq
spec:
  ports:
    - name: admin
      protocol: TCP
      port: 9443
      targetPort: 9443
    - name: application
      protocol: TCP
      port: 1414
      targetPort: 1414
  selector:
    app: ibm-mq
---
kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: ibm-mq-admin
spec:
  host: ibm-mq-admin.my-domain.com
  to:
    kind: Service
    name: ibm-mq
    weight: 100
  port:
    targetPort: admin
  tls:
    termination: passthrough
    insecureEdgeTerminationPolicy: None
  wildcardPolicy: None
---
kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: ibm-mq-app
spec:
  host: ibm-mq-app.my-domain.com
  to:
    kind: Service
    name: ibm-mq
    weight: 100
  port:
    targetPort: application
  tls:
    termination: passthrough
    insecureEdgeTerminationPolicy: None
  wildcardPolicy: None
---
UPDATE: I ended up creating and deploying to OpenShift a small web application that receives HTTP requests and interacts with MQ via JMS (put/get text messages), e.g.:

POST /queue/{queueName}/send + <body>
GET /queue/{queueName}/receive

It interacts with MQ inside the OpenShift cluster over TCP, and accepts external HTTP connections as a regular web application. The other solutions seemed to take too much effort, but I accepted one of them as it is theoretically correct and straightforward.
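For illustration, a minimal sketch of such a bridge, assuming Spring Boot 3 (spring-boot-starter-web plus spring-jms) and an MQ ConnectionFactory bean already configured to point at the in-cluster Service (ibm-mq:1414); all names here are illustrative, not my actual implementation:

import jakarta.jms.ConnectionFactory;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/queue")
public class QueueBridgeController {

    private final JmsTemplate jms;

    public QueueBridgeController(ConnectionFactory mqConnectionFactory) {
        this.jms = new JmsTemplate(mqConnectionFactory);
        this.jms.setReceiveTimeout(1000); // don't block forever on an empty queue
    }

    @PostMapping("/{queueName}/send")
    public void send(@PathVariable String queueName, @RequestBody String body) {
        jms.convertAndSend(queueName, body); // put a text message
    }

    @GetMapping("/{queueName}/receive")
    public String receive(@PathVariable String queueName) {
        Object message = jms.receiveAndConvert(queueName); // get the next message, null if none
        return message == null ? "" : message.toString();
    }
}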
The following Java system property will be read by the IBM MQ classes for JMS at 9.2.1 and higher to tell them to set the SNI header to the hostname of the remote system when initiating a TLS connection (by default the MQ client derives SNI from the channel name, which the OpenShift router cannot match to a passthrough route's host):

com.ibm.mq.cfg.SSL.OutboundSNI=HOSTNAME
To set this programmatically, just use the System.setProperty method, for example:

System.setProperty("com.ibm.mq.cfg.SSL.OutboundSNI", "HOSTNAME");

NOTE: the string HOSTNAME is literal and not meant to be replaced by an actual hostname.
If you cannot move to a com.ibm.mq.allclient.jar from 9.2.1 or later, then on 9.2.0.0 and later you could instead use com.ibm.mq.cfg.SSL.AllowOutboundSNI=NO, but this is deprecated in 9.2.1 and later.
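That older property can be set the same way, e.g. (a sketch of the equivalent call):

System.setProperty("com.ibm.mq.cfg.SSL.AllowOutboundSNI", "NO");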
I'm not sure I fully understand your setup, but "Routes" only route HTTP traffic (on ports 80 or 443 only), not TCP traffic.
If you want to access your MQ server from outside the cluster, there are a few solutions; one is to create a Service of type "NodePort". Your Service is not a NodePort Service. In your case, it should be something like:
kind: Service
apiVersion: v1
metadata:
  name: ibm-mq
spec:
  type: NodePort
  ports:
    - port: 1414
      targetPort: 1414
      nodePort: 30001
  selector:
    app: ibm-mq
Then access it from outside with anyname.<cluster domain>:30001
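On the client side that would be, reusing the snippet from the question (the hostname is a placeholder for any name that resolves to a cluster node):

conFactory.setObjectProperty(WMQConstants.WMQ_HOST_NAME, "anyname.my-cluster-domain.com"); // any cluster node
conFactory.setObjectProperty(WMQConstants.WMQ_PORT, 30001); // the nodePort from the Service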
And delete the now-useless corresponding Route. As said before, I assumed you read in the doc I pointed you to that Routes only route HTTP traffic on ports 80 or 443.
Doc: https://kubernetes.io/docs/concepts/services-networking/service/#nodeport