Simple way to monitor Camel's localhost broker

As stated in the title, is there a simple way to monitor Camel's vm:// incoming messages to the embedded broker?
<bean id="jms" class="org.apache.activemq.ActiveMQConnectionFactory">
<property name="brokerURL" value="vm://localhost?broker.persistent=false" />
</bean>
Thanks!

Your question isn't quite clear on what you would like to monitor.
- You could log what you send to the broker and read from it.
- You could connect via JMX to view a bunch of processing information.
- If you are on the FUSE platform, there is a management console that exposes the JMX endpoints at a web URL.
- You could also set up JBoss Operations Network to poll all of your JMX info and do trending on the data.
If I didn't cover the use case you had in mind, please update your question with some details and ping me in a comment so I can get back to you.
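For the logging option, a minimal sketch (the endpoint and queue names here are made up, not from your setup) is to wire-tap whatever you send to the broker through a log endpoint:

```xml
<route>
    <from uri="direct:in"/>
    <!-- Copy every message to the log before it reaches the broker -->
    <wireTap uri="log:com.example.broker?showBody=true"/>
    <to uri="jms:queue:incoming"/>
</route>
```

For the JMX option, the embedded broker's MBeans can be enabled on the vm URL (in the Spring XML that would be vm://localhost?broker.persistent=false&amp;broker.useJmx=true) and then browsed with jconsole.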

Related

Messages are written to the queue after complete execution and the producer has stopped

I ran into broker behavior (ActiveMQ Artemis version 2.17.0) that was unusual to me.
When a large number of messages is sent quickly by the producer, some of the messages reach the queue only after the route has finished executing and the producer has stopped. This is especially evident on a regular hard drive rather than an SSD.
As an example, I use the following Apache Camel 2.25.3 route to send messages:
<route autoStartup="false" factor:name="TCP SJMS2"
factor:trace="false" id="route-e098a2c8-efd4-41dd-9c1d-57937663cfbe">
<from id="endpoint-cef2b9db-e359-4fb0-aa4d-4afda4f79c10" uri="timer://init?delay=-1&amp;repeatCount=200000">
</from>
<setBody factor:component="SetBodyEndpoint"
factor:custom-name="Set message body"
factor:guid="endpoint-361ea09a-9e8a-4f44-a428-05e27dbdf3b5" id="endpoint-361ea09a-9e8a-4f44-a428-05e27dbdf3b5">
<simple><![CDATA[<?xml version="1.0" encoding="utf-8"?>
<env:Envelope xmlns:env="http://www.w3.org/2003/05/soap-envelope">
2 kB message body
</env:Envelope>]]></simple>
</setBody>
<to id="endpoint-546af4a0-ebe5-4479-91f0-f6b6609264cc" uri="local2amq://TCP.IN?connectionFactory=%23tcpArtemisCF">
</to>
</route>
<bean class="org.apache.camel.component.sjms2.Sjms2Component" id="local2amq">
<property name="connectionFactory" ref="artemisConnectionFactory"/>
<property name="connectionCount" value="5"/>
</bean>
<bean
class="org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory"
factor:bean-type="ARTEMIS_CONNECTION_FACTORY" id="tcpArtemisCF" name="tcpArtemisCF">
<property name="brokerURL" value="(tcp://localhost:61717)?blockOnDurableSend=false"/>
</bean>
This route completes quickly, sending 200,000 messages at roughly 6,000 msgs/s.
But if I look at the broker's queues after the route has finished, there are only about 80,000 messages in the queue; the rest keep arriving gradually at 200 to 2,000 msgs/s.
I have not seen such behavior in regular ActiveMQ: after the route completes, all messages are in the queue.
Main questions:
Is this behavior common and expected? Which parameters control it?
How can I see the number of messages that have been sent but are not yet in the queue?
How can I ensure that by the time the route terminates, all messages have been written to the queue?
Broker config
<?xml version='1.0'?>
<configuration xmlns="urn:activemq"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xi="http://www.w3.org/2001/XInclude"
xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
<core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:activemq:core ">
<name>0.0.0.0</name>
<persistence-enabled>true</persistence-enabled>
<journal-type>NIO</journal-type>
<paging-directory>data/paging</paging-directory>
<bindings-directory>data/bindings</bindings-directory>
<journal-directory>data/journal</journal-directory>
<large-messages-directory>data/large-messages</large-messages-directory>
<journal-datasync>true</journal-datasync>
<journal-min-files>2</journal-min-files>
<journal-pool-files>10</journal-pool-files>
<journal-device-block-size>4096</journal-device-block-size>
<journal-file-size>10M</journal-file-size>
<journal-buffer-timeout>836000</journal-buffer-timeout>
<journal-max-io>1</journal-max-io>
<disk-scan-period>5000</disk-scan-period>
<max-disk-usage>100</max-disk-usage>
<critical-analyzer>true</critical-analyzer>
<critical-analyzer-timeout>120000</critical-analyzer-timeout>
<critical-analyzer-check-period>60000</critical-analyzer-check-period>
<critical-analyzer-policy>HALT</critical-analyzer-policy>
<page-sync-timeout>836000</page-sync-timeout>
<acceptors>
<acceptor name="artemis">tcp://0.0.0.0:61717</acceptor>
</acceptors>
<security-settings>
<security-setting match="#">
<permission type="createNonDurableQueue" roles="amq"/>
<permission type="deleteNonDurableQueue" roles="amq"/>
<permission type="createDurableQueue" roles="amq"/>
<permission type="deleteDurableQueue" roles="amq"/>
<permission type="createAddress" roles="amq"/>
<permission type="deleteAddress" roles="amq"/>
<permission type="consume" roles="amq"/>
<permission type="browse" roles="amq"/>
<permission type="send" roles="amq"/>
<!-- we need this otherwise ./artemis data imp wouldn't work -->
<permission type="manage" roles="amq"/>
</security-setting>
</security-settings>
<address-settings>
<!-- if you define auto-create on certain queues, management has to be auto-create -->
<address-setting match="activemq.management#">
<dead-letter-address>DLQ</dead-letter-address>
<expiry-address>ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<!-- with -1 only the global-max-size is in use for limiting -->
<max-size-bytes>-1</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>PAGE</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>true</auto-create-addresses>
<auto-create-jms-queues>true</auto-create-jms-queues>
<auto-create-jms-topics>true</auto-create-jms-topics>
</address-setting>
<!--default for catch all-->
<address-setting match="#">
<dead-letter-address>DLQ</dead-letter-address>
<expiry-address>ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<!-- with -1 only the global-max-size is in use for limiting -->
<max-size-bytes>-1</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>PAGE</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>true</auto-create-addresses>
<auto-create-jms-queues>true</auto-create-jms-queues>
<auto-create-jms-topics>true</auto-create-jms-topics>
</address-setting>
</address-settings>
<addresses>
<address name="DLQ">
<anycast>
<queue name="DLQ" />
</anycast>
</address>
<address name="ExpiryQueue">
<anycast>
<queue name="ExpiryQueue" />
</anycast>
</address>
</addresses>
</core>
</configuration>
Broker logs
17:58:19,887 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server
2021-04-05 17:58:19,926 INFO [org.apache.activemq.artemis.core.server] AMQ221000: live Message Broker is starting with configuration Broker Configuration (clustered=false,journalDirectory=data/journal,bindingsDirectory=data/bindings,largeMessagesDirectory=data/large-messages,pagingDirectory=data/paging)
2021-04-05 17:58:19,958 INFO [org.apache.activemq.artemis.core.server] AMQ221013: Using NIO Journal
2021-04-05 17:58:20,038 INFO [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-server]. Adding protocol support for: CORE
2021-04-05 17:58:20,039 INFO [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-amqp-protocol]. Adding protocol support for: AMQP
2021-04-05 17:58:20,041 INFO [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-hornetq-protocol]. Adding protocol support for: HORNETQ
2021-04-05 17:58:20,047 INFO [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-mqtt-protocol]. Adding protocol support for: MQTT
2021-04-05 17:58:20,047 INFO [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-openwire-protocol]. Adding protocol support for: OPENWIRE
2021-04-05 17:58:20,048 INFO [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-stomp-protocol]. Adding protocol support for: STOMP
2021-04-05 17:58:20,163 INFO [org.apache.activemq.artemis.core.server] AMQ221034: Waiting indefinitely to obtain live lock
2021-04-05 17:58:20,163 INFO [org.apache.activemq.artemis.core.server] AMQ221035: Live Server Obtained live lock
2021-04-05 17:58:21,867 INFO [org.apache.activemq.artemis.core.server] AMQ221080: Deploying address DLQ supporting [ANYCAST]
2021-04-05 17:58:21,869 INFO [org.apache.activemq.artemis.core.server] AMQ221003: Deploying ANYCAST queue DLQ on address DLQ
2021-04-05 17:58:21,876 INFO [org.apache.activemq.artemis.core.server] AMQ221080: Deploying address ExpiryQueue supporting [ANYCAST]
2021-04-05 17:58:21,877 INFO [org.apache.activemq.artemis.core.server] AMQ221003: Deploying ANYCAST queue ExpiryQueue on address ExpiryQueue
2021-04-05 17:58:22,686 INFO [org.apache.activemq.artemis.core.server] AMQ221020: Started NIO Acceptor at 0.0.0.0:61717 for protocols [CORE,MQTT,AMQP,HORNETQ,STOMP,OPENWIRE]
2021-04-05 17:58:22,797 INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live
2021-04-05 17:58:22,798 INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.17.0 [0.0.0.0, nodeID=024cff0e-8ff2-11eb-8968-c0b6f9f8ba29]
2021-04-05 17:58:23,113 INFO [org.apache.activemq.hawtio.branding.PluginContextListener] Initialized activemq-branding plugin
2021-04-05 17:58:23,250 INFO [org.apache.activemq.hawtio.plugin.PluginContextListener] Initialized artemis-plugin plugin
2021-04-05 17:58:24,336 INFO [io.hawt.HawtioContextListener] Initialising hawtio services
2021-04-05 17:58:24,349 INFO [io.hawt.system.ConfigManager] Configuration will be discovered via system properties
2021-04-05 17:58:24,352 INFO [io.hawt.jmx.JmxTreeWatcher] Welcome to Hawtio 2.11.0
2021-04-05 17:58:24,359 INFO [io.hawt.web.auth.AuthenticationConfiguration] Starting hawtio authentication filter, JAAS realm: "activemq" authorized role(s): "amq" role principal classes: "org.apache.activemq.artemis.spi.core.security.jaas.RolePrincipal"
2021-04-05 17:58:24,378 INFO [io.hawt.web.proxy.ProxyServlet] Proxy servlet is disabled
2021-04-05 17:58:24,385 INFO [io.hawt.web.servlets.JolokiaConfiguredAgentServlet] Jolokia overridden property: [key=policyLocation, value=file:/D:/Documents/apache-artemis-2.17.0/bin/emptyNew/etc/\jolokia-access.xml]
2021-04-05 17:58:24,712 INFO [org.apache.activemq.artemis] AMQ241001: HTTP Server started at http://localhost:8161
2021-04-05 17:58:24,713 INFO [org.apache.activemq.artemis] AMQ241002: Artemis Jolokia REST API available at http://localhost:8161/console/jolokia
2021-04-05 17:58:24,714 INFO [org.apache.activemq.artemis] AMQ241004: Artemis Console available at http://localhost:8161/console
2021-04-05 17:59:08,763 INFO [io.hawt.web.auth.LoginServlet] Hawtio login is using 1800 sec. HttpSession timeout
2021-04-05 17:59:10,512 INFO [io.hawt.web.auth.LoginServlet] Logging in user: root
2021-04-05 17:59:11,206 INFO [io.hawt.web.auth.keycloak.KeycloakServlet] Keycloak integration is disabled
UPDATE
Data for regular ActiveMQ version 5.15.11:
Camel route
<route autoStartup="false" factor:name="TCP SJMS2"
factor:trace="false" id="route-e098a2c8-efd4-41dd-9c1d-57937663cfbe">
<from id="endpoint-cef2b9db-e359-4fb0-aa4d-4afda4f79c10" uri="timer://init?delay=-1&amp;repeatCount=200000">
</from>
<setBody factor:component="SetBodyEndpoint"
factor:custom-name="Set message body"
factor:guid="endpoint-361ea09a-9e8a-4f44-a428-05e27dbdf3b5" id="endpoint-361ea09a-9e8a-4f44-a428-05e27dbdf3b5">
<simple><![CDATA[<?xml version="1.0" encoding="utf-8"?>
<env:Envelope xmlns:env="http://www.w3.org/2003/05/soap-envelope">
2 kB message body
</env:Envelope>]]></simple>
</setBody>
<to id="endpoint-f697cad9-90db-47b9-877f-4189febdd010" uri="localmq://PERF.IN?connectionFactory=%23tcpActiveMQCF">
</to>
</route>
<bean class="org.apache.activemq.camel.component.ActiveMQComponent" id="localmq">
<property name="configuration" ref="jmsConfig"/>
</bean>
<bean class="org.apache.camel.component.jms.JmsConfiguration" id="jmsConfig">
<property name="asyncStartListener" value="true"/>
<property name="cacheLevelName" value="CACHE_CONSUMER"/>
<property name="preserveMessageQos" value="true"/>
</bean>
<bean class="org.apache.activemq.pool.PooledConnectionFactory"
destroy-method="stop" factor:bean-type="AMQ_CONNECTION_FACTORY"
id="tcpActiveMQCF" init-method="start" name="tcpActiveMQCF">
<property name="maxConnections" value="1"/>
<property name="maximumActiveSessionPerConnection" value="15"/>
<property name="connectionFactory">
<bean class="org.apache.activemq.ActiveMQConnectionFactory"
id="tcpActiveMQCF_connection" name="tcpActiveMQCF_connection">
<property name="brokerURL" value="tcp://localhost:61616?jms.useAsyncSend=false"/>
</bean>
</property>
</bean>
For this route I get about 2,500 msgs/s, and the messages are immediately written to the queue; changing the jms.useAsyncSend parameter has practically no effect on performance in my case.
This kind of behavior is expected when sending non-durable messages because non-durable messages are sent in a non-blocking manner. It's not clear whether or not you're sending non-durable messages, but you've also set blockOnDurableSend=false on your client's URL so even durable messages will be sent non-blocking.
From the broker's perspective the messages haven't actually arrived so there's no way to see the number of messages that have been sent but are not yet in the queue.
If you want to ensure that when the Camel route terminates all messages are written to the queue then you should send durable messages and set blockOnDurableSend=true (which is the default value).
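Applied to the connection factory bean from the route above, that means changing the URL parameter (or simply removing it, since true is the default); a trimmed sketch:

```xml
<bean
    class="org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory"
    id="tcpArtemisCF" name="tcpArtemisCF">
    <!-- blockOnDurableSend=true is the default; shown explicitly here -->
    <property name="brokerURL" value="(tcp://localhost:61717)?blockOnDurableSend=true"/>
</bean>
```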
Keep in mind that blocking will reduce performance (potentially substantially) based on the speed of your hard disk. This is because the client will have to wait for a response from the broker for every message it sends, and for every message the broker receives it will have to persist that message to disk and wait for the disk to sync before sending a response back to the client. Therefore, if your hard disk can't sync quickly, the client will have to wait a relatively long time.
One of the configuration parameters that influences this behavior is journal-buffer-timeout. This value is calculated automatically and set when the broker instance is first created. You'll see evidence of this logged, e.g.:
Auto tuning journal ...
done! Your system can make 250 writes per millisecond, your journal-buffer-timeout will be 4000
In your case the journal-buffer-timeout has been set to 836000, which is quite slow (the higher the timeout, the slower the disk). It means that your disk can only make around 1.2 writes per millisecond. If you think this value is in error, you can run the artemis perf-journal command to calculate it again and update the configuration accordingly.
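The log line implies that the timeout (in nanoseconds) is roughly the duration of one disk write, so writes per millisecond ≈ 1,000,000 / journal-buffer-timeout. A quick sanity check of both figures quoted above:

```java
public class JournalTimeout {
    // journal-buffer-timeout is in nanoseconds; 1 ms = 1,000,000 ns
    static double writesPerMs(long journalBufferTimeoutNanos) {
        return 1_000_000.0 / journalBufferTimeoutNanos;
    }

    public static void main(String[] args) {
        System.out.println(writesPerMs(4_000));   // fast disk from the log line: 250.0
        System.out.println(writesPerMs(836_000)); // the asker's setting: roughly 1.2
    }
}
```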
To give you a comparison, my journal-buffer-timeout is 4000, and I can run the artemis producer --protocol amqp command with ActiveMQ Artemis which will send 1,000 durable messages in less than 700 milliseconds after a few runs. If I use the --non-persistent flag that duration drops down to around 200 milliseconds.
If I perform the same test on a default installation of ActiveMQ 5.16.0, it takes around 900 and 200 milliseconds respectively, which is not terribly surprising given the nature of the test.
It's worth noting that ActiveMQ 5.x has the jms.useAsyncSend parameter that is functionally equivalent to blockOnDurableSend and blockOnNonDurableSend in Artemis. However, you're unlikely to see as much of a difference if you use it because the 5.x broker has a lot of inherent internal blocking whereas Artemis was written from the ground up to be completely non-blocking. The potential performance ceiling of Artemis is therefore much higher than 5.x, and that's one of the main reasons that Artemis exists in the first place.
Remember that by blocking or not you're really just trading reliability for speed, respectively. By not blocking you're telling the client to "fire and forget." The client, by definition, will have no knowledge of whether or not the message was actually received by the broker. In other words, sending messages in a non-blocking way is inherently unreliable from the client's perspective. That's the fundamental trade-off you make for speed. JMS 2 added javax.jms.CompletionListener to help mitigate this a bit, but it's unlikely that any Camel JMS component makes intelligent use of it.
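For reference, a sketch of what such an asynchronous-but-acknowledged send looks like with the plain JMS 2 API, outside Camel. The factory method and queue name are placeholders, and a JMS 2 client library is assumed on the classpath:

```java
import javax.jms.*;

public class AsyncSendSketch {
    // Hypothetical supplier, e.g. an ActiveMQConnectionFactory pointed at the broker.
    static ConnectionFactory createConnectionFactory() { /* ... */ return null; }

    public static void main(String[] args) {
        ConnectionFactory factory = createConnectionFactory();
        try (JMSContext context = factory.createContext()) {
            Queue queue = context.createQueue("TCP.IN");
            context.createProducer()
                    .setAsync(new CompletionListener() {
                        @Override
                        public void onCompletion(Message message) {
                            // Broker acknowledged the send: the message really is in the queue.
                        }

                        @Override
                        public void onException(Message message, Exception exception) {
                            // Send failed: log, retry, or alert.
                        }
                    })
                    .send(queue, "2 kB message body");
        }
    }
}
```

This keeps the throughput benefit of not blocking per message while still letting the client learn, eventually, whether each message arrived.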

ActiveMQ embedded bridge to Camel JMS bridge

I have an old application which handles JMS messages with ActiveMQ 5.8.0 and some remote JNDI topics connected to this ActiveMQ.
I have a connector like this:
<bean class="org.apache.activemq.network.jms.JmsConnector">
<property name="outboundTopicConnectionFactory" ref="jmsConnectionFactoryTo" />
<property name="outboundClientId" value="${remote.clientId}" />
<property name="jndiOutboundTemplate" ref="jndiTemplateTo" />
<property name="preferJndiDestinationLookup" value="true" />
<property name="inboundTopicBridges">
<list>
<bean class="org.apache.activemq.network.jms.InboundTopicBridge">
<property name="inboundTopicName" value="${remote.topic.to}"/>
<property name="localTopicName" value="${local.topic.to}"/>
<property name="consumerName" value="${remote.consumer.name}"/>
<property name="selector" value="${remote.selector}"/>
</bean>
</list>
</property>
</bean>
It works great, but now, for technical reasons (strict JMS 1.1), I need to use ConnectionFactory instead of TopicConnectionFactory.
With the current configuration I'm stuck, because ActiveMQ seems to require a TopicConnectionFactory, and my new class MyConnectionFactoryImpl now implements ConnectionFactory:
nested exception is org.springframework.beans.ConversionNotSupportedException:
Failed to convert property value of type 'com.webmethods.jms.impl.MyConnectionFactoryImpl'
to required type 'javax.jms.TopicConnectionFactory'
for property 'outboundTopicConnectionFactory';
nested exception is java.lang.IllegalStateException:
Cannot convert value of type [com.webmethods.jms.impl.MyConnectionFactoryImpl]
to required type [javax.jms.TopicConnectionFactory] for property 'outboundTopicConnectionFactory':
no matching editors or conversion strategy found
The org.apache.activemq.network.jms.JmsConnector class uses TopicConnectionFactory everywhere, which is no longer recommended as of JMS 1.1.
EDIT:
According to @Justin Bertram, I need to use Camel instead of the ActiveMQ embedded bridge. But I can't find any example XML configuration I could use to replace my current two JmsConnector beans. What is the simplest way to do this while keeping my XML config files?
As the documentation for the JMS to JMS Bridge (i.e. org.apache.activemq.network.jms.JmsConnector) states:
ActiveMQ provides bridging functionality to other JMS providers that implement the JMS 1.0.2 and above specification.
In other words, the whole goal of the JMS to JMS Bridge is to use the JMS 1.0.2 interface(s). Changing it so that it only used JMS 1.1 would defeat the purpose.
The documentation also states that you should use Camel instead of the JMS to JMS Bridge:
Warning, try Camel first!
Note that we recommend you look at using Apache Camel for bridging ActiveMQ to or from any message broker (or indeed any other technology, protocol or middleware) as its much easier to:
keep things flexible; its very easy to map different queue/topic to one or more queues or topics on the other provider
perform content based routing, filtering and other Enterprise Integration Patterns
allows you to work with any technology, protocol or middleware, not just JMS providers
Therefore I recommend you use Camel instead of org.apache.activemq.network.jms.JmsConnector.
I would think that having your code return a TopicConnectionFactory would be the simplest solution. Even the JMS 2.0 specification provides the TopicConnectionFactory. No matter what version of ActiveMQ you are using, you certainly have the option of using the TopicConnectionFactory in your code and providing that to your bridge.
Note that the Camel route:
<camelContext xmlns="http://camel.apache.org/schema/spring">
<route>
<from uri="mqseries:Foo.Bar"/>
<to uri="activemq:Cheese"/>
</route>
</camelContext>
has no error handling. For example, if the 'to' endpoint is down, this route will still read from the 'from' endpoint and just throw the messages on the floor. Furthermore, if the 'to' component is not configured to use a caching/pooling connection factory, then a new JMS connection will be created for each message sent. This performs poorly and can result in many sockets in the TIME_WAIT state. Bottom line: beware trivial Camel routes.
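A sketch of the pooling half of that advice for the ActiveMQ side (bean ids and the pool size are made up; the classes are from ActiveMQ 5.x):

```xml
<bean id="pooledConnectionFactory"
      class="org.apache.activemq.pool.PooledConnectionFactory"
      init-method="start" destroy-method="stop">
    <property name="maxConnections" value="8"/>
    <property name="connectionFactory">
        <bean class="org.apache.activemq.ActiveMQConnectionFactory">
            <property name="brokerURL" value="tcp://localhost:61616"/>
        </bean>
    </property>
</bean>

<bean id="activemq" class="org.apache.activemq.camel.component.ActiveMQComponent">
    <property name="connectionFactory" ref="pooledConnectionFactory"/>
</bean>
```

With this in place, the activemq endpoint of the route reuses pooled connections instead of opening a new one per message.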

How to connect to IBM MQ from a Camel route with SSL connection?

I could successfully connect to IBM MQ from a Camel route and initialize the connection factory bean, but now I want to connect with SSL.
I created the keystore on the server side for the queue manager, created the certificate, and added it to the keystore.
I created a truststore on the client side and added the certificate to it.
Now I want the MQ connection factory to use the truststore when connecting to the server.
Here is what I tried:
<bean id="MyConnectionFactory" class="com.ibm.mq.jms.MQQueueConnectionFactory">
<property name="transportType" value="${queue.transportType}" />
<property name="channel" value="${queue.channel}" />
<property name="hostName" value="${queue.hostName}" />
<property name="port" value="${queue.port}" />
<property name="queueManager" value="${queue.manager}" />
<property name="sSLCipherSuite" value="SSL_RSA_WITH_NULL_MD5" />
<property name="sSLCertStores" value="file:C:/Servers/TrustStore/truststore.jks" />
</bean>
But this doesn't work. The following exception was returned:
JMSWMQ0018: Failed to connect to queue manager 'QM_TEST_SSL'
with connection mode 'Client' and host name '10.3.13.161(1415)'.;
nested exception is com.ibm.mq.MQException: JMSCMQ0001:
WebSphere MQ call failed with compcode '2' ('MQCC_FAILED')
reason '2397' ('MQRC_JSSE_ERROR').
Can anyone please direct me on how to do that?
From a security standpoint, the client should only receive generic error messages, which could relate to a number of problems. The best place to find out exactly why your client was rejected is the queue manager's logs. I would suggest looking there to see if there are any errors that help you further determine the problem.
From the info given, I can think of three possible problems:
- The queue manager channel is set with the attribute SSLCAUTH(REQUIRED), but from the description you've given here the client doesn't appear to be using its own certificate to connect. SSLCAUTH(REQUIRED) means that the queue manager will only accept connections on that channel from a client connecting with a certificate it trusts. Check the channel definition and set SSLCAUTH(OPTIONAL).
- Depending on your version of IBM MQ, the CipherSpec you have used (SSL_RSA_WITH_NULL_MD5) is considered weak and will not be accepted by default. You can re-enable these deprecated CipherSpecs; instructions on how to do so can be found on the corresponding Knowledge Center page.
- The truststore "C:/Servers/TrustStore/truststore.jks" is not being picked up by the client, so the client cannot trust the queue manager's certificate. Double-check the path you have supplied and remove the "file:" prefix you have added (unless you were specifically instructed to include it).
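If the path turns out to be the problem, one way (assuming a standard Oracle/OpenJDK JRE) to rule it out is to bypass sSLCertStores and hand the truststore to JSSE directly via the standard system properties when launching the client. A hypothetical launch line, with the jar name and password as placeholders:

```
java -Djavax.net.ssl.trustStore=C:/Servers/TrustStore/truststore.jks \
     -Djavax.net.ssl.trustStorePassword=... \
     -Dcom.ibm.mq.cfg.useIBMCipherMappings=false \
     -jar my-camel-app.jar
```

The useIBMCipherMappings=false flag may also be needed on non-IBM JREs so that the CipherSuite names resolve against the Oracle naming scheme.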
You do not state which version of IBM MQ or which JRE you are using; if it is not the most current version of IBM MQ and it is being used with an Oracle JRE, then APAR IT10837 may help here.
There is a good write-up of the above APAR at the end of the IBM developerWorks blog "MQ Java, TLS Ciphers, Non-IBM JREs & APARs IT06775, IV66840, IT09423, IT10837 -- HELP ME PLEASE!" posted by Tom Leend. He includes a workaround for Java clients that do not have this fix.
APAR IT10837
I've got one final APAR to mention and that is IT10837 (targeted for V7.1.0.8 and V7.5.0.7 and shipped in V8.0.0.5). This APAR affects applications running within Oracle JREs that use TLS CipherSuites to connect to a queue manager where the server-connection channel being used has the SSLCAUTH attribute set to "REQUIRED" (the default value). This means that the client should pass a certificate to the queue manager such that the connecting client can be authenticated by the MQ server.
When the application was running in an Oracle JRE, the SunJSSE provider was not creating a default internal Key Manager object for TLS socket connections, meaning that the client's signed personal certificates were not available for client authentication during the handshake. The IBM JSSE provider does do this based off the information passed via the Java System Properties:
javax.net.ssl.keyStore
and
javax.net.ssl.keyStorePassword
Because a KeyManager object was not created by default, the client certificate was not passed to the queue manager (GSKit) for authentication. As such, the connection from the application would fail. In this scenario, the queue manager would write the following error message into its error log file:
AMQ9637 (Channel is lacking a certificate)
The fix for this APAR is for the MQ classes for JMS and classes for Java to read a certificate keystore, based on the information in the two Java System Properties noted above, and create a KeyManager from that information when com.ibm.mq.cfg.useIBMCipherMappings is set to false in the JVM. This can then be used when the SSLContext is created (which is subsequently used to create an SSLSocketFactory and eventually a secure socket object).
There is a local workaround, which is for the application itself to create
TrustManagerFactory and KeyManagerFactory objects for the appropriate certificate stores and to initialise an SSLContext object with these objects. From this SSLContext an SSLSocketFactory can be created and passed to the MQ classes for JMS (by setting it on the JMS connection factory) or to the classes for Java (by setting it on the MQEnvironment or in a Hashtable passed to the MQQueueManager constructor). For example:
---- Code Snippet Start ----
KeyStore keyStore = KeyStore.getInstance("JKS");
java.io.FileInputStream keyStoreInputStream = new java.io.FileInputStream("/home/tom/myKeyStore.jks");
keyStore.load (keyStoreInputStream, password_char_array);
KeyStore trustStore = KeyStore.getInstance("JKS");
java.io.FileInputStream trustStoreInputStream = new java.io.FileInputStream("/home/tom/myTrustStore.jks");
trustStore.load (trustStoreInputStream, password_char_array);
keyStoreInputStream.close();
trustStoreInputStream.close();
KeyManagerFactory keyManagerFactory =
KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
TrustManagerFactory trustManagerFactory =
TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
keyManagerFactory.init(keyStore, password_char_array);
trustManagerFactory.init(trustStore);
SSLContext sslContext = SSLContext.getInstance("TLSv1");
sslContext.init(keyManagerFactory.getKeyManagers(),
trustManagerFactory.getTrustManagers(),
null);
SSLSocketFactory sslSocketFactory = sslContext.getSocketFactory();
// classes for JMS
myJmsConnectionFactory.setObjectProperty(
WMQConstants.WMQ_SSL_SOCKET_FACTORY, sslSocketFactory);
// classes for Java
MQEnvironment.sslSocketFactory = sslSocketFactory;
---- Code Snippet End ----

Protect CXF Service in Fuse with Basic Authentication on LDAP Users

I have a SOAP/REST service implemented in CXF inside Red Hat JBoss Fuse (in a Fabric).
I need to protect it with Basic Authentication, and credentials must be checked against an LDAP server.
Can this be done without a custom interceptor?
Can I maybe use the container JAAS security (configured with LDAP) to protect the service the same way I can protect the console?
Yes, the container JAAS security realm can be used to protect a web service.
An example is here.
The example page doesn't explain the implementation, but a quick look at the blueprint.xml file reveals the following configuration:
<jaxrs:server id="customerService" address="/securecrm">
<jaxrs:serviceBeans>
<ref component-id="customerSvc"/>
</jaxrs:serviceBeans>
<jaxrs:providers>
<ref component-id="authenticationFilter"/>
</jaxrs:providers>
</jaxrs:server>
<bean id="authenticationFilter" class="org.apache.cxf.jaxrs.security.JAASAuthenticationFilter">
<!-- Name of the JAAS Context -->
<property name="contextName" value="karaf"/>
</bean>
So it's just a matter of configuring a JAAS authentication filter.
"karaf" is the default JAAS realm for the container: users are defined in etc/users.properties
To define more realms, info is here.
To have users on LDAP, see here.
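A hypothetical sketch of such an LDAP realm (host name, DNs and the module options shown are placeholders to adapt, not a tested config):

```xml
<jaas:config name="karaf" rank="200"
        xmlns:jaas="http://karaf.apache.org/xmlns/jaas/v1.0.0">
    <jaas:module className="org.apache.karaf.jaas.modules.ldap.LDAPLoginModule"
            flags="required">
        connection.url=ldap://ldap.example.com:389
        user.base.dn=ou=users,dc=example,dc=com
        user.filter=(uid=%u)
        user.search.subtree=true
        role.base.dn=ou=groups,dc=example,dc=com
        role.filter=(member=%fqdn)
        role.name.attribute=cn
        role.search.subtree=true
        authentication=simple
    </jaas:module>
</jaas:config>
```

Giving the realm the same name as the default (karaf) with a higher rank makes it take precedence, so the JAASAuthenticationFilter above picks it up without changes.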
The answer above is correct, but please note that for more recent versions of Fuse (past 6.1), the "rank" in the LDAP configuration must be greater than 100 in order to override the default karaf realm.
Also, with current patches applied, in Fuse 6.2.X, connection pooling for the LDAP connections can be enabled:
<!-- LDAP connection pooling -->
<!-- http://docs.oracle.com/javase/jndi/tutorial/ldap/connect/pool.html -->
<!-- http://docs.oracle.com/javase/jndi/tutorial/ldap/connect/config.html -->
context.com.sun.jndi.ldap.connect.pool=true
</jaas:module>
</jaas:config>
This is very important for high-volume web services: a connection pool is maintained to the LDAP server, which both avoids connection-creation overhead and avoids closed sockets lingering in the TIME_WAIT state.

Camel + Java DSL Fluent builder with real ActiveMQ Broker

I'm trying to implement a WireTap with the Java DSL fluent builders, for which the documentation gives the following example code snippet.
from("direct:start")
.to("log:foo")
.wireTap("direct:tap")
.to("mock:result");
This works if I run a mock example (e.g. camel-example-jms-file). However, if I take the sample code and try to substitute a real broker instance and queue for the mock objects, it fails with the error below.
from("tcp://localhost:61616")
.to("activemq:atsUpdateQueue")
.wireTap("activemq:fdmCaptureQueue");
It then fails with:
org.apache.camel.FailedToCreateRouteException: Failed to create route route2: Route(route2)[[From[tcp://localhost:61616?queue=atsUpdateQue... because of Failed to resolve endpoint: tcp://localhost:61616?queue=atsUpdateQueue due to: No component found with scheme: tcp
I've googled extensively, and all the examples I've found use virtual mock queues; none seems to illustrate working with a real broker, and I cannot find any documentation on the URI specification for Camel.
The important part of the error message describes the problem: No component found with scheme: tcp. This is because there is no "tcp" component for Camel; however, you can use the netty component if you want to interact with a TCP endpoint:
from("netty:tcp://localhost:61616")
more info here - http://camel.apache.org/netty.html
"tcp://localhost:61616" looks like the ActiveMQ broker address.
You need to set up the broker address on the activemq component, either in the Java DSL:
camelContext.addComponent("activemq", activeMQComponent("tcp://localhost:61616"));
or in a Spring configuration file:
<bean id="activemq"
class="org.apache.activemq.camel.component.ActiveMQComponent">
<property name="brokerURL" value="tcp://somehost:61616"/>
</bean>
You can find more information about camel-activemq here
Thank you for the suggestions; while useful in increasing my understanding, neither actually resolved my problem. My code was wrong, and for the benefit of others, I should have been using the following names.
final String sourceQueue = "activemq:queue:atsUpdateQueue";
final String destinationQueue = "activemq:queue:atsEndPoint";
final String wiretapQueue = "activemq:queue:fdmCaptureQueue";
from(sourceQueue).wireTap(wiretapQueue).copy().to(destinationQueue);
