I have a thread pool profile like below:
<threadPoolProfile id="myThrottler"
poolSize="5"
maxPoolSize="20"
maxQueueSize="1000"
rejectedPolicy="CallerRuns"/>
I'm using this thread pool in a route:
<route>
<from uri="stream:in"/>
<throttle timePeriodMillis="2000" asyncDelayed="true" executorServiceRef="myThrottler">
<constant>5</constant>
</throttle>
<log message="${threadName}"/>
</route>
I can get the thread name with <log message="${threadName}"/>.
Can I also get the queue size used in the thread pool?
You cannot get the thread pool queue size from <log>, which uses the Simple language.
But you can use JMX to get metrics from the thread pool, such as the queue size.
http://camel.apache.org/camel-jmx.html
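For example, here is a minimal Java sketch of reading the queue size over JMX. The ObjectName pattern and the attribute name (TaskQueueSize below) are assumptions that can differ between Camel versions, so verify them in jconsole or hawtio first:
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class ThreadPoolQueueSizeReader {

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();

        // Camel registers its managed thread pools under type=threadpools in its JMX domain
        ObjectName pattern = new ObjectName("org.apache.camel:type=threadpools,*");

        for (ObjectName pool : server.queryNames(pattern, null)) {
            // attribute name assumed; check the MBean attributes in jconsole/hawtio
            Object queueSize = server.getAttribute(pool, "TaskQueueSize");
            System.out.println(pool + " queue size = " + queueSize);
        }
    }
}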
My application consists of many OSGi bundles running inside JBoss Fuse 6.2.1. Each bundle has a Camel route consuming from an ActiveMQ endpoint. Data is exchanged using VirtualTopics.
ProducerBundle publishes to topic VirtualTopic.MyTopic
ConsumerBundle A consumes from queue Consumer.A.VirtualTopic.MyTopic
ConsumerBundle B consumes from queue Consumer.B.VirtualTopic.MyTopic
ConsumerBundle C consumes from queue Consumer.C.VirtualTopic.MyTopic
At a certain moment in time Consumer C is closed, its bundle is uninstalled, and it will never come back. However, messages are still enqueued into the Consumer.C.VirtualTopic.MyTopic queue.
How do I destroy such queue?
ActiveMQ pauses the Producer when the queue fills up, and I cannot set a small time to live on the message as other consumers may take a while to process each message. I cannot modify the VirtualTopic structure. I have full access to
ActiveMQ configuration and Camel routes.
Are there any other options to handle the situation?
<!-- producer route -->
<route id="ProducerRoute">
<from uri="direct:trigger"/>
<to uri="activemq:topic:VirtualTopic.MyTopic"/>
</route>
<!-- each consumer route -->
<route id="ConsumerARoute">
<from uri="activemq:Consumer.A.VirtualTopic.MyTopic"/>
<to uri="bean:myProcessor"/>
</route>
Look at the Apache ActiveMQ documentation for how to delete inactive queues/topics, for example: http://activemq.apache.org/delete-inactive-destinations.html
I went for the aggressive solution: I hook into the OSGi bundle lifecycle, and when the bundle is stopped I use the JMX MBeanServer to destroy the now unneeded queues.
Since my bundle is managed by Blueprint, I opted for a bean with a destroy method.
Here's an example implementation:
My bean
package org.darugna.osgi;
import javax.management.MBeanServer;
import javax.management.ObjectName;
public class QueueDestroyer {

    private static final String[] QUEUES_TO_DESTROY = {
        "Consumer.A.VirtualTopic.MyTopic"
    };

    private MBeanServer mBeanServer;

    public void setMbeanServer(MBeanServer mBeanServer) {
        this.mBeanServer = mBeanServer;
    }

    public void destroy() throws Exception {
        ObjectName brokerName = new ObjectName("org.apache.activemq:type=Broker,brokerName=amq");
        for (String queueName : QUEUES_TO_DESTROY) {
            mBeanServer.invoke(brokerName,
                               "removeQueue",
                               new Object[]{queueName},
                               new String[]{String.class.getName()});
        }
    }
}
Blueprint.xml
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
<reference id="mbeanServer" interface="javax.management.MBeanServer"
availability="mandatory"/>
<bean id="queueDestroyer" class="org.darugna.osgi.QueueDestroyer"
destroy-method="destroy">
<property name="mbeanServer" ref="mbeanServer"/>
</bean>
<camelContext xmlns="http://camel.apache.org/schema/blueprint">
<route>
<from uri="activemq:Consumer.A.VirtualTopic.MyTopic"/>
<to uri="bean:myProcessor"/>
</route>
</camelContext>
</blueprint>
I had a similar situation, but I couldn't use Claus's suggestion because my broker has other queues with no consumer that I don't want to delete.
In my case I'm running JBoss Fuse 6.1.0 with Fabric (I think it is the same with newer versions of Fuse): I just removed the consumer (in my case, the profile containing the consumer) and then deleted the queue with the delete button in the hawtio console.
I have a problem with Camel in ServiceMix.
I built a web service using camel-jetty and the recipientList EIP in ServiceMix.
The service performs well, but resource locks occur and the thread pool fills up. The system processes about 40 calls per second.
The problem is that pool threads sometimes aren't released properly. A few hours after the application starts, I can see with the jstack tool that some threads are stuck in a WAITING state.
The configuration is as follows:
- servicemix 5.3.0
- camel 2.13.2
- using the camel-jetty component and the recipientList EIP (Spring DSL)
-SOURCE
<route customId="true">
<from uri="direct:giop_addr_async"/>
<recipientList>
<simple>jetty://http://api.host.lm?x=${header.x}&amp;y=${header.y}</simple>
</recipientList>
<bean ref="soapDecode" method="userDecode"/>
<to uri="direct:sendEndPoint"/>
</route>
<route customId="true">
<from uri="direct:sendEndPoint"/>
<to uri="jetty://http://resultMap?httpClient.soTimeout=80000"/>
</route>
-------------- LOG
ps -eLf | wc -l --> 32500
"CamelJettyClient(0x3d0b240d)-26916" damen prio=10 tid=0x000000000ff69800 nid =0x10ef wating on condition [0x00002b4b3ba3f0000]
java.lang.Thread.State: TIMED_WAITNG(parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x000000006f13f19b0> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LocsSupport.parkNanos(LockSupport,java:226)
at org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:342)
at org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoss(QueuedThreadPool.java:526)
at org.eclipse.jetty.tuil.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
at java.lnag.thread.run(Thread.java:745)
The above log is more than 30000 lines.
Can you suggest what else can be checked? Am I missing something? Or maybe this is a bug in Camel?
Having threads in a waiting state is not abnormal, but 30,000 threads is a lot.
You are using a recipientList: for each combination of x and y you are creating a new Jetty producer. Each Jetty producer has its own thread pool, and Camel keeps the most recently created producers in a cache, so a large number of thread pools and threads are created here.
You don't need a recipientList to build the URL dynamically; you can use the Exchange.HTTP_URI header instead.
For example:
<route customId="true">
<from uri="direct:giop_addr_async"/>
<setHeader headerName="CamelHttpUri">
<simple>http://api.host.lm?x=${header.x}&amp;y=${header.y}</simple>
</setHeader>
<to uri="jetty://http://api.host.lm"/>
<bean ref="soapDecode" method="userDecode"/>
<to uri="direct:sendEndPoint"/>
</route>
<route customId="true">
<from uri="direct:sendEndPoint"/>
<to uri="jetty://http://resultMap?httpClient.soTimeout=80000"/>
</route>
We have a Camel route that looks at a file and processes potentially hundreds of records in this file, almost like a batch routine (yet there will only be one message in Camel). Thus the message will take potentially minutes or maybe hours to complete. We want to shut down the queue once this message (and any others waiting) are complete.
We have the following to consider:
The shutdown strategy defines the time to wait for a route to stop before a forced shutdown
<bean id="shutdown" class="org.apache.camel.impl.DefaultShutdownStrategy">
<property name="timeout" value="#[bpf.defaultShutdownStrategy.timeout]"/>
</bean>
The route has the parameter shutdownRunningTask="CompleteAllTasks", which should wait until all messages are processed.
I'm not sure which takes precedence: once the timeout is exceeded the shutdown is no longer graceful, it forces a shutdown, and in our scenario it is likely we will exceed any timeout, as we cannot predict how long processing will take.
Any ideas/considerations?
Thanks in advance.
You should look at the onCompletion functionality. It routes the Exchange, once it is complete, through an additional set of steps in a separate thread.
Here are some examples from the Camel documentation:
Java DSL
// define a global on completion that is invoked when the exchange is complete
onCompletion().to("log:global").to("mock:sync");
from("direct:start")
.process(new MyProcessor())
.to("mock:result");
XML DSL
<!-- this is a global onCompletion route that is invoked when any exchange is complete,
as a kind of after callback -->
<onCompletion>
<to uri="log:global"/>
<to uri="mock:sync"/>
</onCompletion>
<route>
<from uri="direct:start"/>
<process ref="myProcessor"/>
<to uri="mock:result"/>
</route>
Then, here is documentation on how to stop a route in Camel.
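One common pattern is to let onCompletion send a stop command to Camel's ControlBus component once the long-running exchange has finished. Here is a sketch in the Java DSL; the route id, endpoints and bean name are illustrative, not from the question:
import org.apache.camel.ShutdownRunningTask;
import org.apache.camel.builder.RouteBuilder;

public class BatchShutdownRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        // onCompletion runs in a separate thread after the exchange (the whole
        // file) has been processed, so stopping the route here does not abort
        // the in-flight message.
        onCompletion()
            .log("Batch exchange finished, stopping route")
            // async=true issues the stop request in yet another thread
            .to("controlbus:route?routeId=batchFileRoute&action=stop&async=true");

        from("file:inbox").routeId("batchFileRoute")
            .shutdownRunningTask(ShutdownRunningTask.CompleteAllTasks)
            .to("bean:recordProcessor"); // illustrative long-running processing
    }
}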
I'm basically new to Camel. I set up a CamelContext with two routes that use seda endpoints.
Simplifying, it all starts with a "from" file endpoint (sorry if the terminology is wrong) listening on a directory:
<route>
<from uri="file:mydir"/>
<process ref="a bean that change the body of the message by setting a custom object"/>
<to uri="seda:incoming"/>
</route>
<route>
<from uri="seda:incoming"/>
<process ref="a bean that does something with the custom object above"/>
....
</route>
Now, what is described above works perfectly, but I need to replace seda with ActiveMQ queues, and after doing that the body of the message received by the second processor is empty.
How can I obtain the same behaviour as with the seda endpoints when using ActiveMQ queues?
In the first processor I do:
exchange.getIn().setBody(myCustomBean)
and
exchange.getIn().setHeader("inputfile", aFileInstance)
If you expect to get the payload back when consuming from an ActiveMQ queue, you should send a serializable object to the queue; otherwise the object will not be transferred.
In your case there's no guarantee that myCustomBean and aFileInstance are serializable.
Basically, try sending Strings to the queue, or make your objects serializable.
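For example, a minimal sketch of making the custom payload safe to send as a JMS ObjectMessage; the class and field names are illustrative, not taken from the question:
import java.io.Serializable;

// The payload placed on the JMS queue must be Serializable (or a
// String/byte[]/Map), otherwise ActiveMQ cannot marshal it into a message.
public class MyCustomBean implements Serializable {

    private static final long serialVersionUID = 1L;

    // keep only simple, serializable state; avoid open resources or handles
    private final String originalFileName;
    private final String content;

    public MyCustomBean(String originalFileName, String content) {
        this.originalFileName = originalFileName;
        this.content = content;
    }

    public String getOriginalFileName() {
        return originalFileName;
    }

    public String getContent() {
        return content;
    }
}
Alternatively, convert the object to a String (for example JSON or XML) before sending it to the queue and convert it back in the consuming route.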
I would like to know if it's possible with Camel to do throttling based on the content of the exchange.
The situation is the following: I have to call a webservice via SOAP. Among the parameters sent to that webservice there is a customerId. The problem is that the webservice sends back an error if there is more than one request per minute for a given customerId.
I'm wondering if it would be possible to implement throttling per customerId with Camel. So the throttling should not be implemented for all messages but only for messages with the same customerId.
Let me know how I could implement this or if I need to clarify my question.
ActiveMQ Message Groups is designed to handle this case. So, if you can introduce a JMS queue hop in your route, then just set the JMSXGroupId header to the customerId. Then in another route, you can consume from this queue and send to your web service to get the behavior you described.
Also see http://camel.apache.org/parallel-processing-and-ordering.html for more information.
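As a sketch of the suggested queue hop in the Java DSL (queue name and web-service endpoint are illustrative, and it assumes the customerId is already available as a header): the first route stamps each message with JMSXGroupId so the broker pins all messages of a given customer to a single consumer, and the second route calls the web service:
import org.apache.camel.builder.RouteBuilder;

public class MessageGroupRoutes extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        // group messages per customer so one consumer handles them in order
        from("direct:incoming")
            .setHeader("JMSXGroupId", header("customerId"))
            .to("activemq:queue:ws.requests");

        // each group is pinned to one of the concurrent consumers
        from("activemq:queue:ws.requests?maxConcurrentConsumers=10")
            .to("http://customer.example.com/service"); // illustrative web-service call
    }
}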
While ActiveMQ Message Groups would definitely address the parallel processing of unique customer IDs, in my assessment Claus is correct that introducing a throttle for each unique group represents an unimplemented feature for Camel/ActiveMQ.
Message Groups alone will not meet the SLA described. While each group of messages (correlated by the customer ID) will be processed in order with one thread per group, as long as requests take less than a minute to receive a response, the requirement of one request per minute per customer would not be enforced.
That said, I would be very interested to know if it would be possible to combine Message Groups and a throttle strategy in a way that would simulate the feature request in JIRA. My attempts so far have failed. I was thinking something along these lines:
<route>
<from uri="activemq:pending?maxConcurrentConsumers=10"/>
<throttle timePeriodMillis="60000">
<constant>1</constant>
<to uri="mock:endpoint"/>
</throttle>
</route>
However, the throttle seems to be applied to the entire set of requests moving to the endpoint, and not to each individual consumer. I have to admit, I was a bit surprised to find that behavior. My expectation was that the throttle would apply to each consumer individually, which would satisfy the SLA in the original question, provided that the messages include the customer ID in the JMSXGroupId header.
I came across a similar problem and finally came up with the solution described here.
My assumptions are:
Order of messages is not important (though it could be handled with a resequencer)
The total volume of messages per customer ID is not great, so the runtime is not saturated.
The solution approach:
Run aggregator for 1 minute while using customerID to assemble messages with the same customer ID into a list
Use Splitter to split the list into individual messages
Send the first message from the splitter to the actual service
Re-route the rest of the list back into the aggregator.
Java DSL version is a bit easier to understand:
final AggregationStrategy aggregationStrategy = AggregationStrategies.flexible(Object.class)
.accumulateInCollection(ArrayList.class);
from("direct:start")
.log("Receiving ${body}")
.aggregate(header("customerID"), aggregationStrategy).completionTimeout(60000)
.log("Aggregate: releasing ${body}")
.split(body())
.choice()
.when(header(Exchange.SPLIT_INDEX).isEqualTo(0))
.log("*** Processing: ${body}")
.to("mock:result")
.otherwise()
.to("seda:delay")
.endChoice();
from("seda:delay")
.delay(0)
.to("direct:start");
Spring XML version looks like the following:
<!-- this is our aggregation strategy defined as a spring bean -->
<!-- see http://stackoverflow.com/questions/27404726/how-does-one-set-the-pick-expression-for-apache-camels-flexibleaggregationstr -->
<bean id="_flexible0" class="org.apache.camel.util.toolbox.FlexibleAggregationStrategy"/>
<bean id="_flexible2" factory-bean="_flexible0" factory-method="accumulateInCollection">
<constructor-arg value="java.util.ArrayList" />
</bean>
<camelContext xmlns="http://camel.apache.org/schema/spring">
<route>
<from uri="direct:start"/>
<log message="Receiving ${body}"/>
<aggregate strategyRef="_flexible2" completionTimeout="60000" >
<correlationExpression>
<xpath>/order/#customerID</xpath>
</correlationExpression>
<log message="Aggregate: releasing ${body}"/>
<split>
<simple>${body}</simple>
<choice>
<when>
<simple>${header.CamelSplitIndex} == 0</simple>
<log message="*** Processing: ${body}"/>
<to uri="mock:result"/>
</when>
<otherwise>
<log message="--- Delaying: ${body}"/>
<to uri="seda:delay" />
</otherwise>
</choice>
</split>
</aggregate>
</route>
<route>
<from uri="seda:delay"/>
<to uri="direct:start"/>
</route>
</camelContext>