Trying to understand how throttling works: two consumers feed into a direct route that does some work and then sends a transformed message onward.
I can specify a throttle on each consumer, but if the intent is not to overwhelm the destination, can I apply the throttle to the direct route instead?
More importantly, would it act just as if it were throttling the 2 consumers, or would it keep consuming and potentially create a "build-up" of messages between the initial routes and the direct route?
Or maybe, instead of direct, it has to be seda?
Follow-up question: can the throttled messages be flushed out if a graceful shutdown begins?
There is a nice example here corresponding to your problem: in that case, a JMS consumer and a file consumer, both sending to the same seda endpoint.
You will notice that a single throttling policy is defined, and that each consumer refers to this policy, so that the final destination is not overwhelmed.
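If it helps to see the shape of it, here is a minimal Java DSL sketch of that idea, assuming the shared policy is something like Camel's ThrottlingInflightRoutePolicy; the endpoint names and the limit of 10 are made up:
// inside a RouteBuilder's configure()
// import org.apache.camel.impl.ThrottlingInflightRoutePolicy; (Camel 2.x)
ThrottlingInflightRoutePolicy policy = new ThrottlingInflightRoutePolicy();
policy.setMaxInflightExchanges(10);   // start throttling at 10 in-flight exchanges

// Both consumer routes reference the same policy instance, so the number of
// exchanges heading to the common seda endpoint is capped in one place.
from("jms:incomingOrders").routePolicy(policy)
    .to("seda:processOrders");

from("file:inbox/orders").routePolicy(policy)
    .to("seda:processOrders");

// The consumer of the seda queue is the stage you are protecting.
from("seda:processOrders")
    .to("bean:orderService");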
Hope this helps.
Related
I have a simple Camel route consuming messages from ActiveMQ, processing them, and forwarding them to REST web services:
from("activemq:MyQueue").process("MyProcessor").to("http4:uri");
I configure concurrentConsumers=100 on the connection factory of the ActiveMQ component.
In the documentation:
if asyncConsumer is disabled (default) then the Exchange is fully processed before the JmsConsumer will pick up the next message from the JMS queue
Question:
In my route, when is the exchange of each message fully processed? After the HTTP callee returns its response? If that is the case, I assume my route configuration means:
At the beginning, each of the 100 consumers consumes one message and forwards it to the HTTP endpoint.
Each of these 100 consumers then waits and will only consume again once the HTTP call for its current message receives a response.
Another question:
I found out that the default value of the http4 component option connectionsPerRoute is 20. As I have 100 consumers, should I set connectionsPerRoute=100?
Thank you,
Hadi
Each JMS thread runs independently, without knowing about the others. In your example, 100 threads process messages at the same time without blocking one another. You do not need to tune the number of threads of the HTTP component, as the whole exchange is handled on the JMS consumer thread from start to finish.
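For what it's worth, here is a hedged sketch showing how those options can also be set directly on the endpoint URIs; the host and the pool sizes are illustrative (connectionsPerRoute and maxTotalConnections are http4 endpoint options):
// Illustrative only: concurrentConsumers can be set on the ActiveMQ endpoint URI,
// and the http4 connection pool can be enlarged if you ever observe connection contention.
from("activemq:MyQueue?concurrentConsumers=100")
    .process(new MyProcessor())
    .to("http4://backend.example.com/service"
        + "?connectionsPerRoute=100&maxTotalConnections=100");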
I am new to messaging and integration. As I am trying to understand how Apache Camel makes EIPs easy, the Message Router caught my attention in a specific scenario where it consumes a message from one queue and transfers it to some other queue based on a predicate.
Here are my questions:
Why use a Message Router to perform this task?
Why not have the message Producer send the message directly to the right Queue/Destination?
Why can't the Consumer rely on Message Selectors to do the same?
Can someone share their thoughts on real-life use cases?
Well, your questions are more about SoC than messaging.
The Message Router is an EIP that can perform this task well and with Camel it is also very easy to implement, but as you write, there are other possibilities too.
If the message producer (client) sends messages directly to final destinations, it must know about all possible destinations. It must also be changed whenever a destination is added/removed or changed. This is tight coupling and in most situations not desirable.
If the message receiver(s) consume(s) with message selectors, the client can send all messages to the same queue. This is totally OK, you can implement individual consumers with different selectors. The selector feature of the Broker basically enables you to "divide a queue into multiple queues".
So if you want to implement and run an intermediate integration component between the client and the receivers, you can use the Message Router in this component.
If you want to omit this integration component, you can use message selectors to implement the receivers individually, decoupled from each other.
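To make the Message Router option concrete, here is a minimal content-based router in Camel's Java DSL; the queue names and the header are invented purely for illustration:
// Only this router knows the final destinations, so the producer stays decoupled
// and new destinations can be added without touching the client.
from("jms:incoming")
    .choice()
        .when(header("orderType").isEqualTo("priority"))
            .to("jms:priorityOrders")
        .when(header("orderType").isEqualTo("bulk"))
            .to("jms:bulkOrders")
        .otherwise()
            .to("jms:standardOrders");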
I have a Camel-based application which receives a request and serves the reply from a cache, but in between it also updates the database, and I want that update to run in a different thread. Can anyone tell me how I can achieve this? I tried with WireTap and SEDA but it does not work that way... any help appreciated.
<camel:wireTap uri="seda:tap" processorRef="updateHitCountProcessor"/>
In updateHitCountProcessor I have written the code to update the table.
It is updating the database in the same thread (i.e. the main route thread).
You need to do
<camel:wireTap uri="ref:updateHitCountProcessor"/>
The processorRef attribute is for creating and sending a new message, not for tapping the existing message, so you should not use it here.
The uri is what the tapped message is sent to, and that send happens in a separate thread. So when you send it to the ref endpoint it will do that in another thread and call your processor.
You can find details on the wire tap page at: http://camel.apache.org/wire-tap
From the documentation of the camel-seda component (here):
By default, the SEDA endpoint uses a single consumer thread, but you can configure it to use concurrent consumer threads.
You can add a thread pool to a SEDA endpoint like this:
<from uri="seda:stageName?concurrentConsumers=5" />
I am using Apache camel to implement dispatcher EIP. There are thousands of messages in a queue which needs to be delivered at different URLs. Each message has its own delivery URL and delivery protocol (ftp,email,http etc).
The way it has been implemented:
A single Camel context is booted; JMX is disabled on the context and loadStatisticsEnabled is set to false on the ManagementStrategy, as mentioned in a JIRA issue (addressed in version 2.11.0) about disabling the background management thread creation.
For each message a route is constructed, and the message is pushed to that route for delivery.
After the message is processed, the route is shut down and removed from the context.
I did a small perf test with 200 threads of the dispatcher component, all sharing the same context.
I observed that the time to start a route increases up to a maximum of 60 seconds, while the time to process a message is in milliseconds.
Issue CAMEL-5675 mentions that this has been fixed, but I am still observing significant time being taken to start routes:
https://issues.apache.org/jira/browse/CAMEL-5675
The route that is being created for HTTP is:
from("direct:"+dispatchItem.getID())
.toF("%s?httpClient.soTimeout=%s&disableStreamCache=true", dispatchItem.getEndPointURL(),timeOutInMillis);
Each dispatchItem has a unique ID.
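For reference, a rough sketch of the create-send-remove cycle described above (Camel 2.x style APIs, exception handling omitted; `template` is assumed to be a ProducerTemplate):
// One route per message: this is exactly where the route-startup cost is paid each time.
final String routeId = "dispatch-" + dispatchItem.getID();
context.addRoutes(new RouteBuilder() {
    @Override
    public void configure() {
        fromF("direct:%s", dispatchItem.getID()).routeId(routeId)
            .toF("%s?httpClient.soTimeout=%s&disableStreamCache=true",
                 dispatchItem.getEndPointURL(), timeOutInMillis);
    }
});
template.sendBody("direct:" + dispatchItem.getID(), dispatchItem);
// After delivery, the route is shut down and removed again.
context.stopRoute(routeId);
context.removeRoute(routeId);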
This is being actively discussed elsewhere, where the user posted this question first: http://camel.465427.n5.nabble.com/Slow-startup-of-routes-tp5732356.html
Does Camel provide anything out of the box which tells whether it is able to connect to all endpoints?
These endpoints could be MQ, web services, etc.
If not, then I have to write a servlet which will send a test request to all the endpoints. I will be using multicast or splitter for this implementation.
From my experience, Camel will only log warnings if a from() endpoint is not available, since it is constantly trying to read from it. Every other endpoint won't be accessed until an exchange tries to use it. If your goal is to test whether various resources are alive, I believe you would need to create your own testing program. I don't think this will be implemented as a feature, because applications typically build in error handling for when a resource is down and define appropriate behaviors.
If we're talking about producers, then no. If your route is sending messages to an amq or http4 endpoint, for instance, Camel will not automatically send TCP packets on these connections for monitoring purposes. A common way to handle failure of external endpoints is to use "circuit breakers". Take a look at https://camel.apache.org/load-balancer.html. A more robust alternative, imho, is Netflix's Hystrix.
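A minimal sketch of that circuit-breaker load balancer, assuming the Camel 2.x DSL; the target endpoint and exception type are placeholders:
// import java.io.IOException;
// After 2 failures of the listed exception type the circuit opens; calls are
// rejected for 1000 ms, then one test call is allowed through to probe the endpoint.
from("direct:callBackend")
    .loadBalance()
        .circuitBreaker(2, 1000L, IOException.class)
        .to("http4://backend.example.com/service")
    .end();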
If you have a polling consumer, say a from("ftp://.."), then the polling consumer will poll for messages every n milliseconds, and you'll get an error if the connection is broken.