I have a requirement to throttle by shaping (queuing) inbound traffic when the client app sends more than 1000 requests in a 5 second time span.
The solution I followed is:
I have a camel:throttle with max requests set to 1000 and the time span set to 5 seconds. When the threshold is exceeded I catch the throttle exception and, within the onException block, send the throttled messages to an ActiveMQ request queue for later processing, since Camel is overloaded based on the 1000 req / 5 sec config.
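For reference, a minimal sketch of the setup just described, assuming the throttler is configured with rejectExecution=true so that it throws the Camel 2.x rejection exception; the endpoint and queue names are illustrative:

// org.apache.camel.processor.ThrottlerRejectedExecutionException is thrown when rejectExecution is enabled
onException(ThrottlerRejectedExecutionException.class)
    .handled(true)
    .to("activemq:queue:request.overflow");   // park throttled messages for later processing

from("jetty:http://0.0.0.0:8080/inbound")
    .throttle(1000).timePeriodMillis(5000).rejectExecution(true)
    .to("direct:process");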
I have implemented the above successfully. However, I would like the Camel consumer not to process all messages from the ActiveMQ request queue in one shot later on, but instead to process each message with a delay of, for example, 10 seconds.
I am not able to find a parameter in ActiveMQ that delays delivery of a message to the consumer, nor a way to delay the Camel consumer pulling the message off the request queue.
How do I cater to this requirement?
Please help
Thanks
Ramesh.
In another SO thread the winning answer promotes the following solution:
from("activemq:queueA").throttle(10).to("activemq:queueB")
To me this solution only makes sense if you define a prefetch limit; without it, the consumer would not care about any downstream throttling. This route should work:
from("activemq:queueA?destination.consumer.prefetchSize=10").throttle(10).to("activemq:queueB")
This is the theory behind it, right from http://activemq.apache.org/what-is-the-prefetch-limit-for.html
So ActiveMQ uses a prefetch limit on how many messages can be streamed to a consumer at any point in time. Once the prefetch limit is reached, no more messages are dispatched to the consumer until the consumer starts sending back acknowledgements of messages (to indicate that the message has been processed). The actual prefetch limit value can be specified on a per consumer basis.
You can enable scheduled delivery in ActiveMQ and then set the AMQ_SCHEDULED_DELAY header in your Camel route before sending your exchange to the queue. This introduces a delay of AMQ_SCHEDULED_DELAY milliseconds before the message appears in the queue (i.e. becomes available for consumption).
Check this: http://activemq.apache.org/delay-and-schedule-message-delivery.html
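A minimal sketch of that approach, assuming the broker is started with schedulerSupport="true"; the route and queue names are illustrative:

// the broker holds each message for 10 seconds before making it visible on the queue
from("direct:parkThrottled")
    .setHeader("AMQ_SCHEDULED_DELAY", constant(10000))
    .to("activemq:queue:request.overflow");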
I have a simple Camel route consuming messages from ActiveMQ, processing and forwarding them to Rest webservices:
from("activemq:MyQueue").process("MyProcessor").to("http4:uri");
I configure concurrentConsumers=100 on the connection factory of the activemq component.
In the documentation:
if asyncConsumer is disabled(default) then the Exchange is fully processed before the JmsConsumer will pickup the next message from the JMS queue
Question:
In my route, when is the exchange of each message fully processed? After the HTTP callee sends back its response? If that is the case, I assume my route configuration means:
At the beginning, 1 message is consumed by each consumer and forwarded to the HTTP endpoint.
Each of these 100 consumers waits and will only consume again once the current HTTP call gets a response for the current message.
Another question:
I found out that the default value of the http4 component option connectionsPerRoute is 20. As I have 100 consumers, should I set connectionsPerRoute=100?
Thank you,
Hadi
Each JMS thread runs independently, without knowing about the others. In your example, 100 messages are processed at the same time without getting blocked. You do not need to tune the number of threads of the HTTP component, as the whole processing is done on the JMS threads from start to finish.
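As a rough illustration (anything beyond the names in the question is an assumption), the 100 consumers can also be expressed on the endpoint URI; each JMS thread then carries its exchange through the processor and the blocking HTTP call before picking up the next message:

from("activemq:MyQueue?concurrentConsumers=100")
    .process(new MyProcessor())            // the processor from the question
    .to("http4://remote-host/service");    // blocking HTTP call on the same JMS thread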
Is there a way to stop the Camel route if we don't have any more messages in ActiveMQ? My required scenario:
Fetch all the messages from the ActiveMQ queue and process them.
Poll 2-3 more times to check if there are any new messages; if yes, execute step #1.
If no message is present, stop the route and start it back after, say, 5 min (which I guess can be achieved by a polling strategy).
Have a look at this answer. It polls the queue with a scheduler and a polling strategy (POJO).
With the scheduler you can choose the polling interval.
With the timeout of the polling strategy consumer you can stop consuming (e.g. if no message arrives for 5 seconds, the queue is probably empty).
If you want to stop/start the consumer completely, you can add Camel Control Bus to the mix. You could then start and stop the consumer route.
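If you add the Control Bus, a minimal sketch could look like this (the route id, queue name, bean name and the 5 minute period are assumptions; stopping works the same way with action=stop):

from("activemq:queue:MyQueue").routeId("queueConsumer")
    .to("bean:messageProcessor");

// attempt a (re)start of the consumer route every 5 minutes
from("timer:restartConsumer?period=300000")
    .to("controlbus:route?routeId=queueConsumer&action=start");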
I am trying to use the Camel throttler to delay message processing when my CXF endpoint is down. I need a suggestion on how to disable throttling once CXF is back up and running.
If the CXF endpoint is down, throttling will still lead to errors, just more slowly. I suggest 'stopping' the route instead, or breaking your route into two routes: one to store messages in a queue and the other to read them and send to the CXF endpoint.
The message queue acts as a natural buffer for traffic spikes, a throttle, and a holding pen when the remote endpoint is down.
If you really must stick with the existing design using a throttle, then extract the throttle values into a config. During "down" periods, set the throttle count to a really low value. During "up" periods, set the throttle time to 0 ms and the throttle count to a really high value, so that the throttle is effectively disabled.
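A hedged sketch of that config-driven throttle; the values below are placeholders that would come from external configuration rather than being hard-coded:

// e.g. 1 while the CXF endpoint is down, a very large number while it is up
int maxRequests = 1;
long timePeriodMs = 5000;

from("activemq:queue:pending")
    .throttle(maxRequests).timePeriodMillis(timePeriodMs)
    .to("cxf:bean:remoteServiceEndpoint");   // illustrative CXF endpoint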
Please help me finalize an architecture for asynchronous delivery support using Apache Camel and ActiveMQ. Below I have explained my requirement point by point.
I have a Jetty server receiving incoming messages and ActiveMQ storing them on disk using KahaDB.
ActiveMQ sends an ack back to the client once the message is stored in KahaDB.
I have a Spring AbstractPollingMessageListenerContainer JMS message listener which picks up a message from the ActiveMQ queue every second, dispatches it to Camel HTTP endpoints, and finally sends it to the actual remote receivers.
Once the dispatcher thread gets a response from the remote receivers, it deletes the message from ActiveMQ.
Assume that I have many slow remote receivers. In that case the dispatcher thread created by AbstractPollingMessageListenerContainer remains blocked until I get a response from the remote receivers. This results in the creation of new dispatcher threads, since the already created dispatcher threads are not able to dispatch new messages from the ActiveMQ queue.
The creation of many dispatcher threads results in higher CPU usage, which impacts overall performance.
My requirement is that the dispatcher thread should only dispatch messages from the ActiveMQ queue to the HTTP endpoint and forget about them, without acknowledging them, so that the message is still in the queue.
Also, I do not want the dispatcher thread to wait until I get a response, so I thought of handling the response in a separate thread, and only this thread would delete the message from the ActiveMQ queue.
So my current architecture is like below:
Camel Jetty Server ----> ActiveMQ queue ----> Dispatcher Thread ---> Camel Direct endpoint ----> Camel HTTP endpoint ---> remote receivers sending response back ---> response ---> Dispatcher Thread (sends ack to delete messages from ActiveMQ queue) ----> ActiveMQ Queue.
Here I feel that since we are using the Direct endpoint, which is synchronous, the dispatcher thread remains active until it gets a response, and so the same dispatcher thread is not able to process further new messages from the ActiveMQ queue.
Please suggest something else I can use here to avoid the Direct endpoint.
I tried the SEDA endpoint, but the drawback is that it processes 1 message with 1 thread and also gets blocked until it gets a response from the receivers.
In this approach, previously the dispatcher thread got blocked, but now the SEDA consumer threads get blocked and cannot dispatch new messages from SEDA's in-memory queue towards the remote receivers.
I feel I need some kind of design that lets me keep sending messages to remote receivers, and only when the response comes back does some daemon thread get notified to handle the acknowledgement towards ActiveMQ. I also thought of using an NIO framework implementation like the Camel netty/netty4-http component, but could not find exact usage or how to fit it into the current architecture.
Modified architecture should be like below:
Camel Jetty Server ----> ActiveMQ queue ----> Dispatcher Thread--->Unknown Stuff ----> Camel HTTP Endpoint ---> remote receivers sending response back--->Unknown Stuff (sends ack to delete messages from ActiveMQ queue) ----> ActiveMQ Queue
Please help me finalize the Unknown Stuff; I am posting my query after doing enough R&D.
New ideas are also welcome, with the restriction that I must persist the message and delete it only after getting a success response from the remote receiver. Also, I have to design the architecture using only Apache Camel routes.
Route Definitions:
1. Dispatcher Route:
from(fromUri).to(toUris);
fromUri:
[ActiveMQueue.http1270018081testEndpoint1:queue:ActiveMQueue?maxConcurrentConsumers=15&concurrentConsumers=3&maxMessagesPerTask=10&messageListenerContainerFactoryRef=AbstractPollingMessageListenerContainer ]
ToUris:
[ActiveMQ.DLQ:queue:ActiveMQ.DLQ, direct:http1270018081testEndpoint1]
2. Remote Receiver Proxy Route:
fromUri:direct:http1270018081testEndpoint1
from(fromUri).to(toUri).process(responseProcessor)
toUri:http://127.0.0.1:8081/testEndpoint1?bridgeEndpoint=true
responseProcessor: processes the response received from the remote receiver.
Overall Route looks like below:
Dispatcher Route---> Remote Receiver Route---> Remote Server
JMS message acknowledgement is done under the covers, so your only real way to "send the acknowledgement back to the queue" is to use a JMS transaction (it doesn't need to be XA).
It sounds like an LLR-style transaction would be useful and would drastically simplify things for you. If you consume the message from the queue using a local JMS transaction and only have one other endpoint, the message will only be acknowledged and removed from the queue when the HTTP send has completed, even though HTTP doesn't support transactions. You can then have a number of concurrent consumers running in parallel, combined with throttling to help with rate limiting.
from: amq:queue:INPUT.REQUESTS?.. concurrentConsumers.. and transacted enabled
throttle
to: http://url
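In Camel's Java DSL that outline might look roughly like this (queue name, consumer count, throttle values and URL are placeholders, and the activemq component is assumed to be configured with a JMS transaction manager):

from("activemq:queue:INPUT.REQUESTS?concurrentConsumers=10&transacted=true")
    .throttle(100).timePeriodMillis(1000)   // optional rate limiting towards the receiver
    .to("http://remote-host/service");      // the local JMS transaction commits, removing the message, only when this send succeeds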
I am using Apache Camel to implement the dispatcher EIP. There are thousands of messages in a queue which need to be delivered to different URLs. Each message has its own delivery URL and delivery protocol (FTP, email, HTTP, etc.).
The way it has been implemented:
Boot a single Camel context, with JMX disabled and loadStatisticsEnabled set to false on the ManagementStrategy, as mentioned in a JIRA issue (addressed in version 2.11.0) for disabling the background management thread creation.
For each message a route is constructed and the message is pushed to that route for delivery.
After the message is processed, the route is shut down and removed from the context.
I did a small perf test with 200 threads of the dispatcher component, all sharing the same context.
I observed that the time to start a route increases up to a maximum of 60 seconds, while the time to process a message is in milliseconds.
Issue CAMEL-5675 mentions that this has been fixed, but I am still observing significant time being spent starting up routes.
https://issues.apache.org/jira/browse/CAMEL-5675
The route that is being created for HTTP is:
from("direct:" + dispatchItem.getID())
    .toF("%s?httpClient.soTimeout=%s&disableStreamCache=true", dispatchItem.getEndPointURL(), timeOutInMillis);
Each dispatchItem has a unique ID.
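For completeness, a hedged sketch of the per-message lifecycle described above (the ProducerTemplate usage and the getPayload() accessor are assumptions; the rest follows the route shown):

// register a route for this one dispatch item
camelContext.addRoutes(new RouteBuilder() {
    public void configure() {
        fromF("direct:%s", dispatchItem.getID()).routeId(dispatchItem.getID())
            .toF("%s?httpClient.soTimeout=%s&disableStreamCache=true",
                 dispatchItem.getEndPointURL(), timeOutInMillis);
    }
});

// push the message through, then tear the route down again
producerTemplate.sendBody("direct:" + dispatchItem.getID(), dispatchItem.getPayload());
camelContext.stopRoute(dispatchItem.getID());
camelContext.removeRoute(dispatchItem.getID());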
This is being actively discussed elsewhere, where the user posted this question first: http://camel.465427.n5.nabble.com/Slow-startup-of-routes-tp5732356.html