ActiveMQ stops 'talking' - apache-camel

We have an OSGi (Karaf 4.0.5) server that uses Camel 2.15.4 & ActiveMQ 5.12.0 for remote calls to services within the server.
After the server has been up for several weeks, ActiveMQ sometimes appears to stop accepting any new connections from outside the server. Remote clients attempt to connect, but they eventually just time out.
The trouble is we don't even know where to start looking. There are no specific errors in the server log that seem to pertain to ActiveMQ or Camel.
We can log in to the Karaf console and run activemq:bstat, but we have hundreds of queues for all the services, so it's like looking for a needle in a haystack, and we don't know what the needle looks like.
Can someone please tell us what things we should look out for?
ActiveMQ XML config:
<broker xmlns="http://activemq.apache.org/schema/core"
        useJmx="true" brokerName="${broker-name}" dataDirectory="${data}" start="false">
  <destinationPolicy>
    <policyMap>
      <policyEntries>
        <policyEntry topic=">" producerFlowControl="true" memoryLimit="1mb">
          <pendingSubscriberPolicy>
            <vmCursor/>
          </pendingSubscriberPolicy>
        </policyEntry>
        <policyEntry queue=">" producerFlowControl="true" memoryLimit="1mb"/>
      </policyEntries>
    </policyMap>
  </destinationPolicy>
  <managementContext>
    <managementContext createConnector="false"/>
  </managementContext>
  <persistenceAdapter>
    <kahaDB directory="${data}/kahadb"/>
  </persistenceAdapter>
  <plugins>
    <timeStampingBrokerPlugin futureOnly="true"/>
  </plugins>
  <transportConnectors>
    <transportConnector name="tcp" uri="tcp://localhost:4210"/>
  </transportConnectors>
</broker>

Related

Messages are written to the queue after complete execution and the producer has stopped

I ran into broker behavior (ActiveMQ Artemis version 2.17.0) that was unusual for me.
When a large number of messages is sent quickly by the producer, some of the messages reach the queue only after the route has finished executing and the producer has stopped. This is especially noticeable on a regular hard drive rather than an SSD.
As an example, I use the following Apache Camel 2.25.3 route to send messages:
<route autoStartup="false" factor:name="TCP SJMS2"
       factor:trace="false" id="route-e098a2c8-efd4-41dd-9c1d-57937663cfbe">
  <from id="endpoint-cef2b9db-e359-4fb0-aa4d-4afda4f79c10" uri="timer://init?delay=-1&amp;repeatCount=200000"/>
  <setBody factor:component="SetBodyEndpoint"
           factor:custom-name="Set message body"
           factor:guid="endpoint-361ea09a-9e8a-4f44-a428-05e27dbdf3b5" id="endpoint-361ea09a-9e8a-4f44-a428-05e27dbdf3b5">
    <simple><?xml version="1.0" encoding="utf-8"?>
      <env:Envelope xmlns:env="http://www.w3.org/2003/05/soap-envelope">
        2 kB message body
      </env:Envelope></simple>
  </setBody>
  <to id="endpoint-546af4a0-ebe5-4479-91f0-f6b6609264cc" uri="local2amq://TCP.IN?connectionFactory=%23tcpArtemisCF"/>
</route>
<bean class="org.apache.camel.component.sjms2.Sjms2Component" id="local2amq">
<property name="connectionFactory" ref="artemisConnectionFactory"/>
<property name="connectionCount" value="5"/>
</bean>
<bean
class="org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory"
factor:bean-type="ARTEMIS_CONNECTION_FACTORY" id="tcpArtemisCF" name="tcpArtemisCF">
<property name="brokerURL" value="(tcp://localhost:61717)?blockOnDurableSend=false"/>
</bean>
This route runs quickly, sending 200,000 messages at roughly 6,000 messages per second.
But if you look at the broker's queues after the route completes, there are only about 80,000 messages in the queue; the rest keep arriving gradually at 200-2,000 messages per second.
I have not seen such behavior in regular ActiveMQ: there, after the route completes, all messages are already in the queue.
Main questions:
1. Is this behavior common and expected? Which parameters control it?
2. How can I see the number of messages that have been sent but are not yet in the queue?
3. How can I make sure that, by the time the route terminates, all messages have been written to the queue?
Broker config
<?xml version='1.0'?>
<configuration xmlns="urn:activemq"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:xi="http://www.w3.org/2001/XInclude"
               xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
  <core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="urn:activemq:core ">
    <name>0.0.0.0</name>
    <persistence-enabled>true</persistence-enabled>
    <journal-type>NIO</journal-type>
    <paging-directory>data/paging</paging-directory>
    <bindings-directory>data/bindings</bindings-directory>
    <journal-directory>data/journal</journal-directory>
    <large-messages-directory>data/large-messages</large-messages-directory>
    <journal-datasync>true</journal-datasync>
    <journal-min-files>2</journal-min-files>
    <journal-pool-files>10</journal-pool-files>
    <journal-device-block-size>4096</journal-device-block-size>
    <journal-file-size>10M</journal-file-size>
    <journal-buffer-timeout>836000</journal-buffer-timeout>
    <journal-max-io>1</journal-max-io>
    <disk-scan-period>5000</disk-scan-period>
    <max-disk-usage>100</max-disk-usage>
    <critical-analyzer>true</critical-analyzer>
    <critical-analyzer-timeout>120000</critical-analyzer-timeout>
    <critical-analyzer-check-period>60000</critical-analyzer-check-period>
    <critical-analyzer-policy>HALT</critical-analyzer-policy>
    <page-sync-timeout>836000</page-sync-timeout>
    <acceptors>
      <acceptor name="artemis">tcp://0.0.0.0:61717</acceptor>
    </acceptors>
    <security-settings>
      <security-setting match="#">
        <permission type="createNonDurableQueue" roles="amq"/>
        <permission type="deleteNonDurableQueue" roles="amq"/>
        <permission type="createDurableQueue" roles="amq"/>
        <permission type="deleteDurableQueue" roles="amq"/>
        <permission type="createAddress" roles="amq"/>
        <permission type="deleteAddress" roles="amq"/>
        <permission type="consume" roles="amq"/>
        <permission type="browse" roles="amq"/>
        <permission type="send" roles="amq"/>
        <!-- we need this otherwise ./artemis data imp wouldn't work -->
        <permission type="manage" roles="amq"/>
      </security-setting>
    </security-settings>
    <address-settings>
      <!-- if you define auto-create on certain queues, management has to be auto-create -->
      <address-setting match="activemq.management#">
        <dead-letter-address>DLQ</dead-letter-address>
        <expiry-address>ExpiryQueue</expiry-address>
        <redelivery-delay>0</redelivery-delay>
        <!-- with -1 only the global-max-size is in use for limiting -->
        <max-size-bytes>-1</max-size-bytes>
        <message-counter-history-day-limit>10</message-counter-history-day-limit>
        <address-full-policy>PAGE</address-full-policy>
        <auto-create-queues>true</auto-create-queues>
        <auto-create-addresses>true</auto-create-addresses>
        <auto-create-jms-queues>true</auto-create-jms-queues>
        <auto-create-jms-topics>true</auto-create-jms-topics>
      </address-setting>
      <!-- default for catch all -->
      <address-setting match="#">
        <dead-letter-address>DLQ</dead-letter-address>
        <expiry-address>ExpiryQueue</expiry-address>
        <redelivery-delay>0</redelivery-delay>
        <!-- with -1 only the global-max-size is in use for limiting -->
        <max-size-bytes>-1</max-size-bytes>
        <message-counter-history-day-limit>10</message-counter-history-day-limit>
        <address-full-policy>PAGE</address-full-policy>
        <auto-create-queues>true</auto-create-queues>
        <auto-create-addresses>true</auto-create-addresses>
        <auto-create-jms-queues>true</auto-create-jms-queues>
        <auto-create-jms-topics>true</auto-create-jms-topics>
      </address-setting>
    </address-settings>
    <addresses>
      <address name="DLQ">
        <anycast>
          <queue name="DLQ"/>
        </anycast>
      </address>
      <address name="ExpiryQueue">
        <anycast>
          <queue name="ExpiryQueue"/>
        </anycast>
      </address>
    </addresses>
  </core>
</configuration>
Broker logs
17:58:19,887 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server
2021-04-05 17:58:19,926 INFO [org.apache.activemq.artemis.core.server] AMQ221000: live Message Broker is starting with configuration Broker Configuration (clustered=false,journalDirectory=data/journal,bindingsDirectory=data/bindings,largeMessagesDirectory=data/large-messages,pagingDirectory=data/paging)
2021-04-05 17:58:19,958 INFO [org.apache.activemq.artemis.core.server] AMQ221013: Using NIO Journal
2021-04-05 17:58:20,038 INFO [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-server]. Adding protocol support for: CORE
2021-04-05 17:58:20,039 INFO [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-amqp-protocol]. Adding protocol support for: AMQP
2021-04-05 17:58:20,041 INFO [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-hornetq-protocol]. Adding protocol support for: HORNETQ
2021-04-05 17:58:20,047 INFO [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-mqtt-protocol]. Adding protocol support for: MQTT
2021-04-05 17:58:20,047 INFO [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-openwire-protocol]. Adding protocol support for: OPENWIRE
2021-04-05 17:58:20,048 INFO [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-stomp-protocol]. Adding protocol support for: STOMP
2021-04-05 17:58:20,163 INFO [org.apache.activemq.artemis.core.server] AMQ221034: Waiting indefinitely to obtain live lock
2021-04-05 17:58:20,163 INFO [org.apache.activemq.artemis.core.server] AMQ221035: Live Server Obtained live lock
2021-04-05 17:58:21,867 INFO [org.apache.activemq.artemis.core.server] AMQ221080: Deploying address DLQ supporting [ANYCAST]
2021-04-05 17:58:21,869 INFO [org.apache.activemq.artemis.core.server] AMQ221003: Deploying ANYCAST queue DLQ on address DLQ
2021-04-05 17:58:21,876 INFO [org.apache.activemq.artemis.core.server] AMQ221080: Deploying address ExpiryQueue supporting [ANYCAST]
2021-04-05 17:58:21,877 INFO [org.apache.activemq.artemis.core.server] AMQ221003: Deploying ANYCAST queue ExpiryQueue on address ExpiryQueue
2021-04-05 17:58:22,686 INFO [org.apache.activemq.artemis.core.server] AMQ221020: Started NIO Acceptor at 0.0.0.0:61717 for protocols [CORE,MQTT,AMQP,HORNETQ,STOMP,OPENWIRE]
2021-04-05 17:58:22,797 INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live
2021-04-05 17:58:22,798 INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.17.0 [0.0.0.0, nodeID=024cff0e-8ff2-11eb-8968-c0b6f9f8ba29]
2021-04-05 17:58:23,113 INFO [org.apache.activemq.hawtio.branding.PluginContextListener] Initialized activemq-branding plugin
2021-04-05 17:58:23,250 INFO [org.apache.activemq.hawtio.plugin.PluginContextListener] Initialized artemis-plugin plugin
2021-04-05 17:58:24,336 INFO [io.hawt.HawtioContextListener] Initialising hawtio services
2021-04-05 17:58:24,349 INFO [io.hawt.system.ConfigManager] Configuration will be discovered via system properties
2021-04-05 17:58:24,352 INFO [io.hawt.jmx.JmxTreeWatcher] Welcome to Hawtio 2.11.0
2021-04-05 17:58:24,359 INFO [io.hawt.web.auth.AuthenticationConfiguration] Starting hawtio authentication filter, JAAS realm: "activemq" authorized role(s): "amq" role principal classes: "org.apache.activemq.artemis.spi.core.security.jaas.RolePrincipal"
2021-04-05 17:58:24,378 INFO [io.hawt.web.proxy.ProxyServlet] Proxy servlet is disabled
2021-04-05 17:58:24,385 INFO [io.hawt.web.servlets.JolokiaConfiguredAgentServlet] Jolokia overridden property: [key=policyLocation, value=file:/D:/Documents/apache-artemis-2.17.0/bin/emptyNew/etc/\jolokia-access.xml]
2021-04-05 17:58:24,712 INFO [org.apache.activemq.artemis] AMQ241001: HTTP Server started at http://localhost:8161
2021-04-05 17:58:24,713 INFO [org.apache.activemq.artemis] AMQ241002: Artemis Jolokia REST API available at http://localhost:8161/console/jolokia
2021-04-05 17:58:24,714 INFO [org.apache.activemq.artemis] AMQ241004: Artemis Console available at http://localhost:8161/console
2021-04-05 17:59:08,763 INFO [io.hawt.web.auth.LoginServlet] Hawtio login is using 1800 sec. HttpSession timeout
2021-04-05 17:59:10,512 INFO [io.hawt.web.auth.LoginServlet] Logging in user: root
2021-04-05 17:59:11,206 INFO [io.hawt.web.auth.keycloak.KeycloakServlet] Keycloak integration is disabled
UPDATE
Data for regular ActiveMQ, version 5.15.11:
Camel route
<route autoStartup="false" factor:name="TCP SJMS2"
       factor:trace="false" id="route-e098a2c8-efd4-41dd-9c1d-57937663cfbe">
  <from id="endpoint-cef2b9db-e359-4fb0-aa4d-4afda4f79c10" uri="timer://init?delay=-1&amp;repeatCount=200000"/>
  <setBody factor:component="SetBodyEndpoint"
           factor:custom-name="Set message body"
           factor:guid="endpoint-361ea09a-9e8a-4f44-a428-05e27dbdf3b5" id="endpoint-361ea09a-9e8a-4f44-a428-05e27dbdf3b5">
    <simple><?xml version="1.0" encoding="utf-8"?>
      <env:Envelope xmlns:env="http://www.w3.org/2003/05/soap-envelope">
        2 kB message body
      </env:Envelope></simple>
  </setBody>
  <to id="endpoint-f697cad9-90db-47b9-877f-4189febdd010" uri="localmq://PERF.IN?connectionFactory=%23tcpActiveMQCF"/>
</route>
<bean class="org.apache.activemq.camel.component.ActiveMQComponent" id="localmq">
<property name="configuration" ref="jmsConfig"/>
</bean>
<bean class="org.apache.camel.component.jms.JmsConfiguration" id="jmsConfig">
<property name="asyncStartListener" value="true"/>
<property name="cacheLevelName" value="CACHE_CONSUMER"/>
<property name="preserveMessageQos" value="true"/>
</bean>
<bean class="org.apache.activemq.pool.PooledConnectionFactory"
destroy-method="stop" factor:bean-type="AMQ_CONNECTION_FACTORY"
id="tcpActiveMQCF" init-method="start" name="tcpActiveMQCF">
<property name="maxConnections" value="1"/>
<property name="maximumActiveSessionPerConnection" value="15"/>
<property name="connectionFactory">
<bean class="org.apache.activemq.ActiveMQConnectionFactory"
id="tcpActiveMQCF_connection" name="tcpActiveMQCF_connection">
<property name="brokerURL" value="tcp://localhost:61616?jms.useAsyncSend=false"/>
</bean>
</property>
</bean>
With this route I get about 2,500 messages per second, and messages are written to the queue immediately; changing the jms.useAsyncSend parameter has practically no effect on performance in my case.
This kind of behavior is expected when sending non-durable messages because non-durable messages are sent in a non-blocking manner. It's not clear whether or not you're sending non-durable messages, but you've also set blockOnDurableSend=false on your client's URL so even durable messages will be sent non-blocking.
From the broker's perspective the messages haven't actually arrived so there's no way to see the number of messages that have been sent but are not yet in the queue.
If you want to ensure that when the Camel route terminates all messages are written to the queue then you should send durable messages and set blockOnDurableSend=true (which is the default value).
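To make that concrete, here is a minimal sketch of a blocking send in plain JMS 2 against the broker above (the URL and the queue name TCP.IN are taken from the question; adjust to taste):

import javax.jms.JMSContext;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class BlockingSendExample {
    public static void main(String[] args) {
        // blockOnDurableSend=true (the default) makes each durable send wait
        // until the broker has written the message to its journal.
        ActiveMQConnectionFactory cf =
                new ActiveMQConnectionFactory("tcp://localhost:61717?blockOnDurableSend=true");
        try (JMSContext ctx = cf.createContext()) {
            // This call does not return until the broker acknowledges the message.
            ctx.createProducer().send(ctx.createQueue("TCP.IN"), "2 kB message body");
        }
        cf.close();
    }
}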
Keep in mind that blocking will reduce performance (potentially substantially) depending on the speed of your hard disk. This is because the client has to wait for a response from the broker for every message it sends, and for every message the broker receives it has to persist that message to disk and wait for the disk to sync before sending a response back to the client. Therefore, if your hard disk can't sync quickly, the client will have to wait a relatively long time.
One of the configuration parameters that influences this behavior is journal-buffer-timeout. This value is calculated automatically and set when the broker instance is first created. You'll see evidence of this logged, e.g.:
Auto tuning journal ...
done! Your system can make 250 writes per millisecond, your journal-buffer-timeout will be 4000
In your case the journal-buffer-timeout has been set to 836000, which is quite slow (the higher the timeout, the slower the disk). It means that your disk can only make around 1.2 writes per millisecond. If you think this value is in error, you can run the artemis perf-journal command to calculate it again and update the configuration accordingly.
To give you a comparison, my journal-buffer-timeout is 4000, and I can run the artemis producer --protocol amqp command with ActiveMQ Artemis which will send 1,000 durable messages in less than 700 milliseconds after a few runs. If I use the --non-persistent flag that duration drops down to around 200 milliseconds.
If I perform the same test on a default installation of ActiveMQ 5.16.0 it takes around 900 and 200 milliseconds respectively which is not terribly surprising given the nature of the test.
It's worth noting that ActiveMQ 5.x has the jms.useAsyncSend parameter that is functionally equivalent to blockOnDurableSend and blockOnNonDurableSend in Artemis. However, you're unlikely to see as much of a difference if you use it because the 5.x broker has a lot of inherent internal blocking whereas Artemis was written from the ground up to be completely non-blocking. The potential performance ceiling of Artemis is therefore much higher than 5.x, and that's one of the main reasons that Artemis exists in the first place.
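For example, the 5.x flag can also be set programmatically (a sketch; it assumes a local 5.x broker on the default port):

import org.apache.activemq.ActiveMQConnectionFactory;

public class AsyncSend5x {
    public static void main(String[] args) {
        ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
        // Equivalent to jms.useAsyncSend=true on the broker URL: producers no
        // longer wait for the broker's acknowledgement of each send.
        cf.setUseAsyncSend(true);
    }
}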
Remember that by blocking or not blocking you're really just trading reliability for speed. By not blocking you're telling the client to "fire and forget." The client, by definition, has no knowledge of whether or not a message was actually received by the broker. In other words, sending messages in a non-blocking way is inherently unreliable from the client's perspective. That's the fundamental trade-off you make for speed. JMS 2 added javax.jms.CompletionListener to help mitigate this a bit, but it's unlikely that any Camel JMS component makes intelligent use of it.
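For what it's worth, here's a minimal sketch of an asynchronous send with a CompletionListener in plain JMS 2 (not Camel); the URL and queue name are again the ones from the question:

import javax.jms.CompletionListener;
import javax.jms.JMSContext;
import javax.jms.Message;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class AsyncSendWithListener {
    public static void main(String[] args) {
        ActiveMQConnectionFactory cf =
                new ActiveMQConnectionFactory("tcp://localhost:61717?blockOnDurableSend=false");
        try (JMSContext ctx = cf.createContext()) {
            ctx.createProducer()
               .setAsync(new CompletionListener() {
                   @Override
                   public void onCompletion(Message message) {
                       // the broker has confirmed receipt of this message
                   }
                   @Override
                   public void onException(Message message, Exception exception) {
                       // the send failed; decide whether to log, retry, etc.
                   }
               })
               .send(ctx.createQueue("TCP.IN"), "2 kB message body");
        } // closing the context waits for incomplete async sends to finish
        cf.close();
    }
}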

How to do a Snowflake DB JNDI connection in WebSphere Liberty application server

Is there a way to configure Snowflake connection pooling in WebSphere Application Server?
I tried the config below inside the server.xml file, but it's not working.
<dataSource id="SnowflakeDataSource" jndiName="jdbc/BM_SF" type="javax.sql.DataSource">
  <properties db="abcd" schema="_TARGET" URL="jdbc:snowflake://adpdc_cdl.us-east-1.privatelink.snowflakecomputing.com" user="****" password="****"/>
  <jdbcDriver libraryRef="DatacloudLibs" javax.sql.DataSource="net.snowflake.client.jdbc.SnowflakeBasicDataSource"/>
</dataSource>
To clarify, the configuration that you have configures WebSphere Application Server Liberty's connection pooling for a Snowflake data source, rather than Snowflake's connection pooling.
The configuration that you have looks mostly pretty good.
Looking at the SnowflakeBasicDataSource class that you are using, I can see that it has a property called "databaseName", not "db", so you'll need to switch that in your configuration.
You will also need to configure one of the jdbc-4.x features in Liberty if you haven't already, and if you plan to look it up in JNDI (vs inject it), you'll need the jndi-1.0 feature.
Here is an example with some corrections:
<featureManager>
  <feature>jdbc-4.2</feature>
  <feature>jndi-1.0</feature>
  <!-- ... your other features here -->
</featureManager>
<dataSource id="SnowflakeDataSource" jndiName="jdbc/BM_SF" type="javax.sql.DataSource">
  <properties databaseName="abcd" schema="_TARGET" URL="jdbc:snowflake://adpdc_cdl.us-east-1.privatelink.snowflakecomputing.com" user="****" password="****"/>
  <jdbcDriver libraryRef="DatacloudLibs" javax.sql.DataSource="net.snowflake.client.jdbc.SnowflakeBasicDataSource"/>
</dataSource>
If this still doesn't work, look into your definition of the DatacloudLibs library to ensure that it is properly pointing at the Snowflake JDBC driver, and if it still doesn't work, post the error message that you see in case it helps to determine the cause.
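Once that's in place, a JNDI lookup from application code would look something like this (a sketch; it must run inside the server, e.g. from a servlet, since a standalone main() has no JNDI namespace to look into):

import java.sql.Connection;
import java.sql.SQLException;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class SnowflakeAccess {
    public static void pingSnowflake() throws NamingException, SQLException {
        // Matches jndiName="jdbc/BM_SF" from the dataSource element above
        DataSource ds = InitialContext.doLookup("jdbc/BM_SF");
        try (Connection con = ds.getConnection()) {
            System.out.println("Connected to " + con.getMetaData().getDatabaseProductName());
        }
    }
}

Alternatively, instead of a lookup you can have the container inject it, e.g. @Resource(lookup = "jdbc/BM_SF") DataSource ds;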

Protect CXF Service in Fuse with Basic Authentication on LDAP Users

I have a SOAP/REST service implemented in CXF inside Red Hat JBoss Fuse (in a Fabric).
I need to protect it with Basic Authentication, and credentials must be checked against an LDAP server.
Can this be done without a custom interceptor?
Can I maybe use the container JAAS security (configured with LDAP) to protect the service the same way I can protect the console?
Yes the container JAAS security realm can be used to protect a web service.
An example is here.
The example page doesn't explain the implementation, but a quick look at the blueprint.xml file reveals the following configuration:
<jaxrs:server id="customerService" address="/securecrm">
  <jaxrs:serviceBeans>
    <ref component-id="customerSvc"/>
  </jaxrs:serviceBeans>
  <jaxrs:providers>
    <ref component-id="authenticationFilter"/>
  </jaxrs:providers>
</jaxrs:server>

<bean id="authenticationFilter" class="org.apache.cxf.jaxrs.security.JAASAuthenticationFilter">
  <!-- Name of the JAAS Context -->
  <property name="contextName" value="karaf"/>
</bean>
So it's just a matter of configuring a JAAS authentication filter.
"karaf" is the default JAAS realm for the container: users are defined in etc/users.properties
To define more realms, info is here.
To have users on LDAP, see here.
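For completeness, the same filter can also be wired programmatically instead of via blueprint; a sketch, where CustomerService stands in for your JAX-RS resource class:

import org.apache.cxf.jaxrs.JAXRSServerFactoryBean;
import org.apache.cxf.jaxrs.lifecycle.SingletonResourceProvider;
import org.apache.cxf.jaxrs.security.JAASAuthenticationFilter;

public class SecureCrmServer {
    public static void main(String[] args) {
        JAASAuthenticationFilter authFilter = new JAASAuthenticationFilter();
        authFilter.setContextName("karaf"); // JAAS realm to authenticate against

        JAXRSServerFactoryBean sf = new JAXRSServerFactoryBean();
        sf.setAddress("/securecrm");
        // CustomerService is a placeholder for your service implementation
        sf.setResourceClasses(CustomerService.class);
        sf.setResourceProvider(CustomerService.class,
                new SingletonResourceProvider(new CustomerService()));
        sf.setProvider(authFilter); // requests must pass JAAS authentication
        sf.create();
    }
}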
The answer above is correct, but please note that for more recent versions of Fuse (past 6.1), the "rank" in the LDAP configuration must be greater than 100 in order to override the default karaf realm.
Also, with current patches applied, in Fuse 6.2.X, connection pooling for the LDAP connections can be enabled:
<!-- The opening jaas:config / jaas:module elements were elided in the original
     snippet; they are reconstructed here for context (rank > 100 so this
     realm overrides the default karaf realm, as noted above) -->
<jaas:config name="karaf" rank="200">
  <jaas:module className="org.apache.karaf.jaas.modules.ldap.LDAPLoginModule" flags="required">
    <!-- ... connection.url, user.base.dn and the other LDAP options elided ... -->
    <!-- LDAP connection pooling -->
    <!-- http://docs.oracle.com/javase/jndi/tutorial/ldap/connect/pool.html -->
    <!-- http://docs.oracle.com/javase/jndi/tutorial/ldap/connect/config.html -->
    context.com.sun.jndi.ldap.connect.pool=true
  </jaas:module>
</jaas:config>
This is very important for high volume web-services. A connection pool is maintained to the LDAP server. This both avoids connection creation overhead and having closing sockets lingering in TIME-WAIT state.

Simple way to monitor Camel's localhost broker

As stated in the title, is there a simple way to monitor incoming messages sent over Camel's vm:// transport to the embedded broker?
<bean id="jms" class="org.apache.activemq.ActiveMQConnectionFactory">
<property name="brokerURL" value="vm://localhost?broker.persistent=false" />
</bean>
Thanks!
Your question isn't quite clear on what you would like to monitor.
- You could log what you send to the broker and read from it.
- You could connect via JMX to view a bunch of processing information (see the sketch below).
- If you are on the FUSE platform, there is a management console that exposes the JMX endpoints at a web URL.
- You could also set up JBoss Operations Network to poll all of your JMX info and do trending on the data.
If I didn't cover the use case you had in mind, please update your question with some details and ping me in a comment so I can get back to you.
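To illustrate the JMX option concretely: the embedded vm:// broker registers its MBeans in the platform MBean server (as long as JMX is enabled for the broker), so something like the following can list queue counters from inside the same JVM. A sketch; it assumes the default broker name "localhost":

import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class EmbeddedBrokerStats {
    public static void main(String[] args) throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        // Pattern matching all queues of the broker named "localhost"
        ObjectName pattern = new ObjectName(
                "org.apache.activemq:type=Broker,brokerName=localhost,"
                        + "destinationType=Queue,destinationName=*");
        Set<ObjectName> queues = mbs.queryNames(pattern, null);
        for (ObjectName q : queues) {
            System.out.printf("%s enqueued=%s dequeued=%s%n",
                    q.getKeyProperty("destinationName"),
                    mbs.getAttribute(q, "EnqueueCount"),
                    mbs.getAttribute(q, "DequeueCount"));
        }
    }
}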

how to configure Apache Camel Quartz endpoint to use JDBCJobStore

I have configured a Quartz endpoint for the scheduling requirement. However, the trigger information is currently hard-coded in my route's XML configuration. Per the requirement, the trigger information needs to come from a database.
<camel:route>
  <camel:from uri="quartz://commandActions/MSFI?cron=15+17+13+?+*+MON-SUN+*"/>
  <camel:bean ref="userGateway" method="generateCommand"/>
  <camel:to uri="wmq:SU.SCHEDULER"/>
</camel:route>
The Quartz documentation says jobs and triggers can be stored in a database and accessed using JDBCJobStore. Is it possible to configure the Camel Quartz endpoint to use JDBCJobStore? I tried to find an example but couldn't. If someone has implemented this before, kindly share an example.
Thanks,
Vaibhav
Yeah, see the Quartz documentation for how to configure it to use a JDBC job store. You can do this using a quartz.properties file, which you can tell Camel to use.
See the Camel side here: http://camel.apache.org/quartz, in the section "Configuring quartz.properties file".
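For instance, a sketch of pointing Camel's Quartz component at such a file (the path and the property values in the comment are placeholders; the properties themselves are standard Quartz JDBCJobStore settings):

import org.apache.camel.CamelContext;
import org.apache.camel.component.quartz.QuartzComponent;
import org.apache.camel.impl.DefaultCamelContext;

public class QuartzJdbcStoreSetup {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        QuartzComponent quartz = context.getComponent("quartz", QuartzComponent.class);
        // The properties file (on the classpath) carries the JDBCJobStore
        // configuration, e.g.:
        //   org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
        //   org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
        //   org.quartz.jobStore.dataSource = myDS
        quartz.setPropertiesFile("com/mycompany/quartz.properties");
        context.start();
    }
}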
