How to control camel threads per destination route (activemq topic)? - apache-camel

I'm using servicemix and camel to publish to some activemq topics:
from("vm:all").recipientList().simple(String.format("activemq:topic:%s.%s.%s.%s.%s.%s","${header.type}", ...
but a new producer thread gets created per topic, any idea how I can control that?
Edit1:
Actually I realized that I was creating too many topics and should use Selectors instead: http://www.andrejkoelewijn.com/blog/2011/02/21/camel-activemq-topic-route-with-jms-selector/
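For what it's worth, a minimal sketch of the selector approach from that link, assuming a single shared topic and a "type" header carried as a JMS property (the topic name, property and value below are made up for illustration):

import org.apache.camel.builder.RouteBuilder;

public class SelectorRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // Publish everything to one topic; the "type" header is propagated
        // as a JMS property, so only one producer is needed.
        from("vm:all")
            .to("activemq:topic:all.events");

        // Each consumer picks out only what it needs with a JMS selector.
        from("activemq:topic:all.events?selector=type='order'")
            .to("log:order-events");
    }
}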
Thanks for the help!

The typical scenario is that Camel is using JmsTemplate to fire messages to ActiveMQ. That means it creates a new producer each time. Actually, it creates a new connection, session and producer per message. That is usually not an issue, except performance-wise.
This is typically handled by org.apache.activemq.pool.PooledConnectionFactory which caches connections, sessions and producers for you. However, depending on your configuration, it might create "some" producers initially, but they will be reused.
Check your Connection Factory settings in activemq-broker.xml and make sure you grab the PooledConnectionFactory.
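In plain Java (outside the ServiceMix XML configuration) the wiring would look roughly like this; the broker URL and pool size are assumptions:

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.camel.component.ActiveMQComponent;
import org.apache.activemq.pool.PooledConnectionFactory;
import org.apache.camel.CamelContext;
import org.apache.camel.impl.DefaultCamelContext;

public class PooledActiveMQSetup {
    public static void main(String[] args) throws Exception {
        // Plain connection factory pointing at the broker
        ActiveMQConnectionFactory amqFactory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");

        // Pool so connections, sessions and producers are cached and reused
        PooledConnectionFactory pooled = new PooledConnectionFactory();
        pooled.setConnectionFactory(amqFactory);
        pooled.setMaxConnections(8);

        CamelContext context = new DefaultCamelContext();
        ActiveMQComponent activemq = new ActiveMQComponent();
        activemq.setConnectionFactory(pooled);
        context.addComponent("activemq", activemq);

        // Routes added to this context now reuse pooled producers instead of
        // creating a new connection/session/producer per message.
        context.start();
    }
}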

Related

How can I send a notification to an endpoint when a Throttler starts/stops throttling?

As the title says, I would like to send a notification to an endpoint if messages to it start getting throttled, and another message when the throttling stops.
I currently have the following (very basic) route configuration:
from("test-jms:queue:test.queue")
.throttle(2)
.to("file://test");
This configuration throttles messages just fine, but I need a way to let the consumer know that the messages are being throttled.
When the Throttler starts throttling, I would like to send a notification to the 'to' endpoint so those reading the messages know that they are being throttled. I would also like to be able to send another message when the Throttler is no longer throttling, so the consumer knows the messages are up to date.
This doesn't appear to be something the Throttler does. The only way I see of getting a notification when it starts throttling is setting rejectExecution to true, at which point it will throw an exception. The problem is that execution stops at that point, and no more messages are passed through (since an exception was thrown).
My current thoughts are that I will need to create a custom bean/processor/something that performs essentially the same function as the Throttler, but also injects a message when the throttling starts or stops. I don't want to do that unless I really need to, though. Any help is appreciated. Thanks!
No, the Throttler EIP does not expose such information (however, you may be able to grab some statistics via JMX). A different thought would be to reverse the direction so that the consumer signals upstream when it wants new messages (this is what reactive systems do).
I assume the write-to-a-file endpoint above is just an example. What consumers are you using in real life, and do they really need to know that some messages are backed up within a short time period of one second because they are throttled? Also, since your source is JMS, you can look at a route throttling policy, where you can suspend/resume the JMS consumer instead of using the Throttler EIP.
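A rough sketch of that suspend/resume idea using Camel's ThrottlingInflightRoutePolicy; the thresholds are assumptions, and note this controls the JMS consumer rather than emitting notifications:

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.ThrottlingInflightRoutePolicy;

public class ThrottledConsumerRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Suspend the JMS consumer when too many exchanges are in flight,
        // and resume it once the load drops below 70% of the maximum.
        ThrottlingInflightRoutePolicy policy = new ThrottlingInflightRoutePolicy();
        policy.setMaxInflightExchanges(10);
        policy.setResumePercentOfMax(70);

        from("test-jms:queue:test.queue")
            .routePolicy(policy)
            .to("file://test");
    }
}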

RabbitMQ Memory Usage

I'm currently using RabbitMQ (3.6.2-1) on Ubuntu (16.04) in production. Producers publish messages and consumers consume them, and everything works correctly, but sometimes RabbitMQ doesn't release memory: it hits the memory limit and producers can no longer publish messages, even to empty queues, so I have to restart the service.
Is this a bug or something else?
Update :
It is caused by the management plugin, so you can solve this issue with one of these solutions:
1. Upgrade your RabbitMQ version (3.6.15 is stable).
2. Restart the statistics database periodically (e.g. hourly via crontab): https://www.rabbitmq.com/management.html#stats-db
3. Set rates_mode to none in your rabbitmq.config file (not a good idea, because then you cannot see message rates).
You should check the number of messages you have inside your queues.
RabbitMQ by default keeps messages in memory to be fast.
If you have to handle a lot of messages you can use lazy queues:
https://www.rabbitmq.com/lazy-queues.html
With lazy queues you can handle millions of messages without impacting the node memory too much.
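A small sketch of declaring a lazy queue from the RabbitMQ Java client (4.x or later); the host and queue name are placeholders, and the same effect can also be achieved with a policy:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.util.HashMap;
import java.util.Map;

public class LazyQueueDeclare {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            // x-queue-mode=lazy makes RabbitMQ page messages to disk
            // instead of keeping them in memory.
            Map<String, Object> arguments = new HashMap<>();
            arguments.put("x-queue-mode", "lazy");
            channel.queueDeclare("my.lazy.queue", true, false, false, arguments);
        }
    }
}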
Or it could be a management-plugin memory problem, see:
http://rabbitmq.com/management.html#stats-db
In your case you can run:
rabbitmqctl eval 'supervisor2:terminate_child(rabbit_mgmt_sup_sup, rabbit_mgmt_sup), rabbit_mgmt_sup_sup:start_child().'
to reset the stats and free the memory.
You could call it periodically.
Note:
There are different ways to reset the stats; it depends on the RabbitMQ version, here are all the details

Read only a set number of messages from a queue using Camel?

I am able to read messages from ActiveMQ using a Camel context [XML], but I would like to read only a limited number of messages. For example, if the queue contains 10,000 messages, we want to read only the first 1,000; the remaining ones shouldn't be touched.
I am new to Camel.
It is not quite clear how you want your program to work. Do you want to stop the route after 1000 messages? Or your program? Or just finish them before processing the rest?
Anyway, the Jms component has a maxMessagesPerTask parameter that is the number of messages a task can receive after which it's terminated. That might do what you want.
"jms:queue:order?maxMessagesPerTask=1000"
What if there are only 500 messages in the queue, should you wait until you have received an additional 500, so the total is 1,000? And what if you restart your application, etc.?
It's a bit of a strange use case. The Camel JMS component is designed to continuously consume from the queue. If you want to stop, then look at the Control Bus EIP, with which you can control Camel routes and stop them. Also look at RoutePolicy, which lets you control routes as well, for example the throttling route policy, which can start/stop routes depending on load, etc.
The CiA2 book also covers managing and controlling Camel routes; you can look at the management chapter.
http://camel.apache.org/controlbus.html
http://camel.apache.org/routepolicy.html
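A rough sketch of the Control Bus idea, stopping the consuming route after roughly 1,000 messages; the route id, counter and endpoints are assumptions, and with concurrent consumers the count may overshoot slightly:

import java.util.concurrent.atomic.AtomicLong;
import org.apache.camel.builder.RouteBuilder;

public class LimitedConsumerRoute extends RouteBuilder {
    private final AtomicLong counter = new AtomicLong();

    @Override
    public void configure() {
        from("jms:queue:order").routeId("orderRoute")
            .to("log:order-processing")
            // After 1,000 messages, ask the Control Bus to stop this route.
            // async=true is required because the route is stopping itself.
            .filter(exchange -> counter.incrementAndGet() >= 1000)
                .to("controlbus:route?routeId=orderRoute&action=stop&async=true")
            .end();
    }
}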

Camel SFTP fetch on schedule and on demand

I can see similar problems in different variations but haven't managed to find a definite answer.
Here is the usecase:
SFTP server that I want to poll from every hour
on top of that, I want to expose a REST endpoint that the user can hit to force an ad-hoc retrieval from that same SFTP. I'm happy for the schedule on the polling to remain as-is, i.e. if I polled and 20 mins later the user forces a refresh, the next poll can be 40 mins later.
Both these should be idempotent in that a file that was downloaded using the polling mechanism should not be downloaded again in ad-hoc pull and vice-versa. Both ways of accessing should download ALL the files available that were not yet downloaded (there will likely be more than one new file - I saw a similar question here for on-demand fetch but it was for a single file).
I would like to avoid hammering the SFTP via pollEnrich - my understanding is that each pollEnrich would request a fresh list of files from SFTP, so doing pollEnrich in a loop until all files are retrieved would be calling the SFTP multiple times.
I was thinking of creating a route that will start/stop a separate route for the ad-hoc fetch, but I'm not sure that this would allow for the idempotent behaviour between routes to be maintained.
So, smart Camel brains out there, what is the most elegant way of fulfilling such requirements?
Not a smart Camel brain, but I'll give it a try as per my understanding.
Hope you already went through:
http://camel.apache.org/file2.html
http://camel.apache.org/ftp2.html
I would create a filter and separate routes for the consumer and producer.
For the file options I would use: idempotent, delay, initialDelay, useFixedDelay=true, maxMessagesPerPoll=1, eagerMaxMessagesPerPoll=true, readLock=idempotent, idempotent=true, idempotentKey=${file:onlyname}, idempotentRepository, recursive=false
- for consuming (a consumer sketch along these lines follows below).
No files will be read again! You can use a variety of options as documented and try which suits you best, like the delay option.
"I would like to avoid hammering the SFTP via pollEnrich - my understanding is that each pollEnrich would request a fresh list of files from SFTP, so doing pollEnrich in a loop until all files are retrieved would be calling the SFTP multiple times." -> Unless you use the option disconnect=true, the connection will not be terminated and you can either consume or produce files continuously; check the FTP options disconnect and disconnectOnBatchComplete.
Hope this helps!

Enabling a replay mechanism with Camel for messages from a DB

I am trying to implement a replay mechanism with Camel, i.e. I have to retrieve all the messages already persisted and forward them to the appropriate Camel route to reprocess. This will be triggered by a Quartz scheduler.
I achieved this as follows:
1) Once the Quartz scheduler is triggered, forward to a processor which queries the DB, collects the messages as a list, and sets that list in the Camel exchange properties.
2) Use a Camel loop in which the LoopProcessor sets the appropriate XML on the exchange in each iteration.
3) Forward it to ActiveMQ, from where it will be forwarded to the appropriate Camel route for reprocessing.
Everything works fine, but I see the following two issues:
a) There might be 'n' messages (10,000+) held in the Camel exchange properties in the form of a List. I can remove each message from the list once it has been sent for processing, which I think will help performance and memory usage.
b) I don't want to forward all 10,000+ messages to ActiveMQ, which I guess will exhaust it. Is there a better mechanism to forward 10,000+ messages to ActiveMQ?
I am thinking of using SEDA/VM (with different Camel contexts). How well would this work, considering the above questions?
Thanks.
Regards
Senthil Kumar Sekar
If the number of messages is a problem, then not all messages should be loaded at once.
Process as follows (see also my answer for your other SO):
1) Limit the number of results when querying the DB.
2) Set a marker (e.g. a processedFlag) on the DB entries that have been processed.
3) Go back to step 1 and query only the not-yet-processed entries, until all records are processed.
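A hedged sketch of that loop using the Camel Quartz and SQL components; the table/column names, cron expression, batch size and the #replayDataSource bean are all assumptions (the case of the column keys in the result Map depends on the JDBC driver):

import org.apache.camel.builder.RouteBuilder;

public class ReplayBatchRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("quartz2://replay/batch?cron=0+0+*+*+*+?")
            // Fetch one limited batch of unprocessed rows per trigger.
            .to("sql:select id, payload from messages where processed_flag = 0 "
                + "order by id limit 500?dataSource=#replayDataSource")
            .split(body())
                // Each row arrives as a Map; keep the id for the update below.
                .setHeader("rowId", simple("${body[id]}"))
                .setBody(simple("${body[payload]}"))
                .to("activemq:queue:replay")
                // Mark the row so the next batch skips it.
                .to("sql:update messages set processed_flag = 1 "
                    + "where id = :#rowId?dataSource=#replayDataSource")
            .end();
    }
}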
However, you should test the ActiveMQ approach as well, if 10,000+ messages are really a problem or not.
