I want onCompletion to occur after all the aggregated exchanges have been processed, both those triggered by completion size and those triggered by timeout. Instead, it occurs right after completion size is reached, while some of the exchanges are still waiting on the timeout criterion.
I have the route configured as follows:
from(fromEndPoint)
.onCompletion()
.doSomething()
.split() // each Line
.streaming()
.parallelProcessing()
.unmarshal().bindy
.aggregate()
.completionSize(100)
.completionTimeout(5000)
.to(toEndpoint)
Assume the split was done on 405 lines. The first 4 sets of aggregated exchanges go to the to endpoint, completing 400 lines (exchanges), and onCompletion is triggered immediately after that. But there are still 5 more exchanges that are only aggregated when the completionTimeout criterion is met. onCompletion is not triggered again after those 5 exchanges are routed to the to endpoint.
My question here is: should onCompletion be triggered for each exchange, or once after all of them?
Note: My from endpoint here is a File.
I have a requirement to poll two FTP folders, say "output" and "error", for a file. The actual file can arrive in either of the two folders (not both). I tried multicast (even with a completionSize of 1), but the process keeps waiting, as Camel waits for both the output and error endpoint routes to complete.
from("direct:got_filename_to_poll")
.multicast()
.parallelProcessing(true)
.to("ftp:output", "ftp:error")
.end()
.to("direct:process_extracted_file")
Is there any way to interrupt the "ftp:error" sub-route if I get a response from the "ftp:output" sub-route (or vice versa)? Or is there another option that solves this problem without compromising response time? For example, adding a timeout on the slow response would slow down the overall response time.
I am trying to read messages from the ActiveMQ queue "AMQ:ORIGIN" using Apache Camel. After reading a message, I need to pass it to two different AMQ queues, under the following conditions:
The message should be passed to queue "AMQ:A" immediately.
The message should be passed to queue "AMQ:B" after a one-minute delay.
To achieve this, I created two routes. In the first route I read from the AMQ queue and multicast to "AMQ:A" and a "seda:delay" queue. In the second route I read from the "seda:delay" queue, delay for one minute, and then pass the message to the "AMQ:B" queue.
This works fine if I send 1 or 10 messages to "AMQ:ORIGIN".
If I send 100 messages at the same time to the "AMQ:ORIGIN" queue, then:
All 100 messages are delivered to the "AMQ:A" queue.
Only 10 or 12 messages are delivered to the "AMQ:B" queue; the rest are stuck in the route.
These are my routes:
<route id="read-origin">
<from uri="activemq:ORIGIN"/>
<multicast stopOnException="true">
<to uri="activemq:A"/>
<to uri="seda:delay-route"/>
</multicast>
</route>
<route id="delay-route">
<from uri="seda:delay-route"/>
<delay asyncDelayed="true">
<constant>60000</constant>
</delay>
<to uri="activemq:B"/>
</route>
Please suggest the changes needed to achieve this.
Thanks,
That seems to be obvious, since you delay every message for 1 minute.
If you send 100 messages to the ORIGIN queue, it takes 100 minutes until all of these messages arrive at queue B.
The first message is consumed immediately and delayed for 1 minute. The second is taken when the first is delivered (assuming 1 consumer on the seda queue) and is also delayed for one minute, and so on.
I assume you want a message that has already waited for 1 minute in the queue to be delivered immediately when consumed.
You can easily achieve this by making the delay dynamic.
Implement a bean that calculates the difference between the JMSTimestamp header (enqueue time) of a message and the current time:
currentTime - JMSTimestamp = alreadyWaited
minimalDelay - alreadyWaited = time to wait before delivery (use 0 for negative values, i.e. messages that were queued for longer than the delay)
Use this difference as the value for the delay (I use the Java DSL because I know it better).
from("seda:delay-route").routeId("delay-route")
.delay().expression(method(YourDelayCalculationBean.class))
.to("activemq:B");
This way, if your messages pile up in the queue, they are probably all delivered immediately on consumption, because they have already waited in the queue for more than 1 minute.
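A minimal sketch of what YourDelayCalculationBean could look like. Only the arithmetic (minimal delay minus time already waited, floored at zero) comes from the answer above; the class name matches the route snippet, but the method name and parameter binding are assumptions.

```java
// Hypothetical sketch of the delay-calculating bean used in the route above.
// In a real route, Camel could bind the JMSTimestamp header to the first
// parameter via @Header("JMSTimestamp"); it is kept plain here so the
// arithmetic is easy to test in isolation.
public class YourDelayCalculationBean {

    static final long MINIMAL_DELAY_MS = 60_000; // the intended 1-minute delay

    public long calculateDelay(long jmsTimestamp, long currentTime) {
        long alreadyWaited = currentTime - jmsTimestamp;
        // never return a negative delay for messages that already
        // waited longer than the minimal delay
        return Math.max(0, MINIMAL_DELAY_MS - alreadyWaited);
    }
}
```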
Addition due to comment
OK, sorry, I did not spot the asyncDelayed.
What the docs say about asyncDelayed sounds like what you expect. But according to your comment, it sounds like the Delay EIP no longer blocks the consumer but instead blocks itself.
So the seda consumer takes a message, hands it over to the Delay, and continues to the next message. After 10 messages (10 threads is Camel's default thread pool size), the Delay is "full": all threads are blocked in a fixed 1-minute delay.
Therefore the consumer becomes blocked, because the Delay can take no more messages. After a minute, the Delay can deliver the first messages and it continues.
That is just wild guessing, based on what you write about how your route behaves.
I have an EIP design question. I have a requirement to process a CSV file in chunks and call a REST API. After the whole file has been processed, I need to call another REST API to signal that processing is complete. I wanted the route to be transacted, so I have a queue in between; in case the end system is not available, the retry happens at the broker level.
My flow is as below.
First flow:
CSV file -> split into chunks of 100 records -> place message on queue
The second flow (transacted route):
Picks a message from the queue -> calls the REST API
The second flow is transacted. Since I am breaking the flow and it is asynchronous, I am not sure how to make the completion call. I do not have a persistent store for the status of each chunk's processing.
Is there any way I can achieve this using JMS functionality or Camel?
What you can use for your first flow is the Camel Splitter EIP:
http://camel.apache.org/splitter.html
Looking closely at the doc, you will find that there are three exchange properties available for each split exchange:
CamelSplitIndex: A split counter that increases for each Exchange being split. The counter starts from 0.
CamelSplitSize: The total number of Exchanges that was splitted. This header is not applied for stream based splitting. From Camel 2.9 onwards this header is also set in stream based splitting, but only on the completed Exchange.
CamelSplitComplete: Whether or not this Exchange is the last.
As they are exchange properties, you should copy them to JMS headers before sending the messages to a queue. Then you can make use of the information in the second flow, so you know which is the last message.
Keep in mind, though, that it is all asynchronous, so the CamelSplitComplete flag doesn't necessarily arrive last at the second flow. You may create a stateful counter or utilise the Resequencer EIP http://camel.apache.org/resequencer.html to deal with the asynchronicity.
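The stateful-counter idea could be sketched like this, kept free of Camel APIs so the logic is easy to see: the second flow counts the chunks it has processed per file and compares the count against the total carried over from CamelSplitSize. All names here are assumptions for illustration.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical stateful counter for detecting when the last chunk of a
// file has been processed, regardless of arrival order.
public class ChunkCompletionTracker {

    // one counter per file, keyed e.g. by the CamelFileName header
    private final Map<String, AtomicInteger> processed = new ConcurrentHashMap<>();

    /**
     * Record one processed chunk; returns true exactly once, when the
     * last chunk for this file has arrived.
     */
    public boolean chunkProcessed(String fileName, int totalChunks) {
        int count = processed
                .computeIfAbsent(fileName, k -> new AtomicInteger())
                .incrementAndGet();
        if (count == totalChunks) {
            processed.remove(fileName); // clean up; caller triggers the completion call
            return true;
        }
        return false;
    }
}
```

The second flow would call chunkProcessed after each successful REST call and fire the "processing complete" call when it returns true. Note this counter is in-memory, so it does not survive a restart; a persistent store would be needed for full transactional safety.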
Route1:
1. Send message M1 to MQ Q1. // This message goes to a program that consumes from Q1 and, once some processing is done, writes message M2 to Q2.
2. Upon receiving message M2 on Q2, continue. // Q2 receives several messages (M2, N2, P2, etc.). ONLY when M2 is received should Route1 continue.
3. Send message M3 to Q3. // This step should be executed only after step 2 is complete.
Step 2 implies that a message arriving on Q2 needs to be inspected, and only if it turns out to be M2 should Route1 resume. Is there something we can do to pause a route while it waits for a message and resume it once the message arrives?
Can we raise an event so that any route waiting for M2 can be resumed? We know for sure that among all the routes executing in parallel, only one may wait for M2. What do we need to do to resume the route?
Thanks,
Yash
There are a couple of options for controlling routes at runtime:
use a RoutePolicy to add route lifecycle callbacks
use content-based routing to apply specific logic based on message attributes/content
use the Aggregator to group messages together based on various rules, etc.
use the ControlBus component to control route status
Also see this page for how to stop a route from another route.
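The ControlBus option could look roughly like the sketch below: a watcher route inspects Q2 and starts the route that sends M3 only when M2 arrives. The route ids, queue names, and the stopped-until-started setup are assumptions for illustration, not taken from the question.

```java
import org.apache.camel.builder.RouteBuilder;

// Hypothetical sketch: resume work only when M2 is seen on Q2,
// using the ControlBus component to start a route that is not auto-started.
public class ResumeOnM2Routes extends RouteBuilder {
    @Override
    public void configure() {
        // Watcher: inspect every message arriving on Q2.
        from("activemq:Q2").routeId("q2-watcher")
            .filter(body().isEqualTo("M2"))
            // Only when M2 arrives, start the route that sends M3.
            .to("controlbus:route?routeId=send-m3&action=start");

        // This route stays stopped until the ControlBus starts it.
        from("timer:sendM3?repeatCount=1").routeId("send-m3").autoStartup(false)
            .setBody(constant("M3"))
            .to("activemq:Q3");
    }
}
```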
I'm trying to run an onCompletion() block on my route, which contains an aggregate definition with completionTimeout. It seems like onCompletion is called before the route is actually completed, since I get log entries from OnCompletion before AggregateTimeoutChecker log entries.
How can I make onCompletion wait for the aggregation timeout?
Of course I could add a delay greater than completionTimeout to onCompletion, but that would slow down my tests a lot.
My route looks like this:
from(fileEndpoint)
.bean(externalLogger, "start")
.onCompletion()
.bean(externalLogger, "end") // <-- Gets called too early
.end()
.split().tokenize("\n")
.bean(MyBean.class)
.aggregate(header("CamelFileName"), ...)
.completionSize(size)
.completionTimeout(500)
.bean(AggregatesProcessor.class); // <-- some changes here don't arrive
// at onCompletion
onCompletion() is triggered for each incoming exchange when it has completed the route. When using an aggregator, all exchanges that do not complete the aggregation finish the route at the aggregator, so your externalLogger gets called for each file being aggregated.
If you want logging after the aggregation you could just call the logger after aggregate().
If you need to distinguish between timeout and completion of your aggregation, it could be helpful to provide a custom AggregationStrategy that also implements the interfaces CompletionAwareAggregationStrategy and TimeoutAwareAggregationStrategy.
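Such a strategy could be sketched as follows, assuming Camel 2.x package names. The body-concatenation logic is just a placeholder; the point is the two callbacks that let you tell size-based completion apart from timeout-based completion.

```java
import org.apache.camel.Exchange;
import org.apache.camel.processor.aggregate.CompletionAwareAggregationStrategy;
import org.apache.camel.processor.aggregate.TimeoutAwareAggregationStrategy;

// Hedged sketch of a strategy that reacts to both completion modes.
public class MyAggregationStrategy
        implements CompletionAwareAggregationStrategy, TimeoutAwareAggregationStrategy {

    @Override
    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
        if (oldExchange == null) {
            return newExchange; // first exchange in the group
        }
        // placeholder merge: concatenate bodies line by line
        String merged = oldExchange.getIn().getBody(String.class)
                + "\n" + newExchange.getIn().getBody(String.class);
        oldExchange.getIn().setBody(merged);
        return oldExchange;
    }

    @Override
    public void onCompletion(Exchange exchange) {
        // called when the group completes normally, e.g. completionSize reached
    }

    @Override
    public void timeout(Exchange exchange, int index, int total, long timeout) {
        // called when the group is completed by completionTimeout
    }
}
```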