I would like to route messages from several routes to the same route, but it does not work the way I assumed. I set up the following (I am just putting down the essence):
from("direct:a") [...]
.to("direct:c");
from("direct:b") [...]
.to("direct:c");
from("direct:c") <my aggregator functionality comes here>
.to("direct:someOtherRoute");
However, this works only when exactly one route, either "a" or "b", goes to "c", but not both. How can I route both "a" and "b" to "c"? Thanks.
EDIT1:
I tried Alexey's solution, but using "seda" or "vm" did not solve the problem. Regardless of whether route "c" is called via seda or vm, the aggregator is invoked only once, either from route "a" or from route "b".
However, if I create another route "c2" with the same content and route e.g. "b" to "c2", then it works. Nevertheless, that is not a nice way to solve it.
Do you have any further ideas? I am using the routes within the same CamelContext, so within the same JVM.
I have also found an interesting remark at http://camel.apache.org/seda.html
It states, as Alexey and Sunar also pointed out, that seda and vm are asynchronous while direct is synchronous, but that you can also achieve asynchronous behaviour with direct as follows:
from("direct:stageName").thread(5).process(...)
"[...] Instead, you might wish to configure a Direct endpoint with a thread pool, which can process messages both synchronously and asynchronously. [...]
I also tested this, but in my case it did not help.
EDIT2:
Here is how I am using the aggregator, i.e. route "c" in this example:
from("vm:AGGREGATOR").routeId("AGGREGATOR")
.aggregate( constant("AGG"), new RecordAggregator())
.completionTimeout(AGGREGATOR_TIMEOUT)
.process(new Processor() {
public void process(Exchange exchange) throws Exception {
LOGGER.info("### Process AGGREGATOR");
[...]
}
})
.marshal().csv()//.tracing()
.to("file:extract?fileName=${in.header.AGG}.csv")
.end();
In the log the string "### Process AGGREGATOR" appears only once. I am wondering whether this could depend on the .completionTimeout(AGGREGATOR_TIMEOUT) I am using. My understanding is that a file should be created for each different AGG value in the header within this time. Is this understanding correct?
I think using asynchronous components, such as seda, vm, or activemq, might solve your problem.
You see this behavior because direct is a synchronous component; it is also likely related to using the aggregator in the third route.
Example:
from("direct:a") [...]
.to("seda:c");
from("direct:b") [...]
.to("seda:c");
from("seda:c") <your aggregator functionality comes here>
.to("direct:someOtherRoute");
EDIT1:
Now that I see the aggregator, I think that's where the problem is: in the correlation and completion criteria.
In your case, you have to use an expression for the correlationExpression:
from("vm:AGGREGATOR").routeId("AGGREGATOR")
.aggregate().simple("${header.AGG}",String.class) // ${property.AGG}
.aggregationStrategy(new RecordAggregator())
.completionInterval(AGGREGATOR_TIMEOUT) //.completionTimeout(AGGREGATOR_TIMEOUT)
.forceCompletionOnStop()
.process(new Processor() {
public void process(Exchange exchange) throws Exception {
LOGGER.info("### Process AGGREGATOR");
[...]
}
})
.marshal().csv()//.tracing()
.to("file:extract?fileName=${in.header.AGG}.csv&fileExist=Override")
.end();
Also, maybe your completionTimeout is too low...
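As a side note (a hedged sketch, not part of the original answer): completionTimeout completes a correlation group after that group has been inactive for the given period, while completionInterval flushes all outstanding groups on a fixed schedule. Using the names from the snippet above (RecordAggregator, AGGREGATOR_TIMEOUT):
// Per-group inactivity timeout: a group for a given AGG value completes once it
// has received no new messages for AGGREGATOR_TIMEOUT ms.
from("vm:AGGREGATOR")
    .aggregate(header("AGG"), new RecordAggregator())
        .completionTimeout(AGGREGATOR_TIMEOUT)
    .to("file:extract?fileName=${header.AGG}.csv&fileExist=Override");

// completionInterval (used above) would instead complete every outstanding group
// on a fixed schedule, regardless of when its last message arrived:
//     .aggregate(header("AGG"), new RecordAggregator())
//         .completionInterval(AGGREGATOR_TIMEOUT)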
Try the below; this is only a sample:
from("timer:foo?repeatCount=1&delay=1000").routeId("firstroute")
.setBody(simple("sundar")).to("direct:a");
from("timer:foo1?repeatCount=1&delay=1000").routeId("secondRoute")
.setBody(simple("sundar1")).to("direct:a");
from("direct:a")
.aggregate(new AggregationStrategy() {
@Override
public Exchange aggregate(Exchange arg0, Exchange arg1) {
Exchange argReturn = null;
if (arg0 == null) {
argReturn= arg1;
}
if (arg1 == null) {
argReturn= arg0;
}
if (arg1 != null && arg0 != null) {
try {
String arg1Str = arg1.getIn()
.getMandatoryBody().toString();
String arg2Str = arg0.getIn()
.getMandatoryBody().toString();
arg1.getIn().setBody(arg1Str + arg2Str);
} catch (Exception e) {
e.printStackTrace();
}
argReturn= arg1;
}
return argReturn;
}
}).constant(true).completionSize(2)
.to("direct:b").end();
from("direct:b").to("log:sundarLog?showAll=true&multiline=true");
You can use seda or other asynchronous endpoints as pointed out by Yakunin. When using an aggregator here, the main point of attention is the completionSize, where I have used 2, since two routes are sending in the messages.
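If the number of incoming messages is not known in advance, a completion timeout can be combined with (or used instead of) completionSize. A minimal sketch along the lines of the sample above, with a hypothetical MyAggregationStrategy standing in for the inline strategy:
// Complete the group either once 2 messages have arrived or after 1 second of
// inactivity, whichever happens first.
from("direct:a")
    .aggregate(constant(true), new MyAggregationStrategy())
        .completionSize(2)
        .completionTimeout(1000)
    .to("direct:b");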
I've run into behavior that I can't understand. The issue happens when a Split with an AggregationStrategy is executed and an exception occurs during one of the iterations. The exception occurs inside the Splitter, in another route (a direct endpoint which is called for each iteration). It seems like route execution stops right after the Splitter.
Here is sample code.
This is a route that builds one report per client and collects the names of the created files for internal statistics.
@Component
@RequiredArgsConstructor
@FieldDefaults(level = PRIVATE, makeFinal = true)
public class ReportRouteBuilder extends RouteBuilder {
ClientRepository clientRepository;
@Override
public void configure() throws Exception {
errorHandler(deadLetterChannel("direct:handleError")); //handles an error, adds error message to internal error collector for statistic and writes log
from("direct:generateReports")
.setProperty("reportTask", body()) //at this point there is in the body an object of type ReportTask, containig all data required for building report
.bean(clientRepository, "getAllClients") // Body is a List<Client>
.split(body())
.aggregationStrategy(new FileNamesListAggregationStrategy())
.to("direct:generateReportForClient") // creates report which is saved in the file system. uses the same error handler
.end()
//when an exception occurs during split then code after splitter is not executed
.log("Finished generating reports. Files created ${body}"); // Body has to be List<String> with file names.
}
}
The AggregationStrategy is pretty simple: it just extracts the name of the file. If the header is absent it returns null.
public class FileNamesListAggregationStrategy extends AbstractListAggregationStrategy<String> {
@Override
public String getValue(Exchange exchange) {
Message inMessage = exchange.getIn();
return inMessage.getHeader(Exchange.FILE_NAME, String.class);
}
}
When everything goes smoothly, after the split the body contains a List with all file names. But when some exception occurs in the route "direct:generateReportForClient" (I've added error simulation for one client), the aggregated body just contains one file name less, which is fine (everything was aggregated correctly).
BUT right after the Split, route execution stops, and whatever is in the body at this point (the List with file names) is returned to the client (the FluentProducerTemplate), which expects a ReportTask as the response body.
It then tries to convert the value, a List (the aggregated result), to ReportTask, which causes org.apache.camel.NoTypeConversionAvailableException: No type converter available to convert from type
Why does the route break after the split? All errors were handled and the aggregation finished correctly.
PS: I've read the Camel in Action book and the Splitter documentation, but I haven't found the answer.
PPS: The project runs on Spring Boot 2.3.1 and Camel 3.3.0.
UPDATE
This route is started by a FluentProducerTemplate:
ReportTask processedReportTask = producer.to("direct:generateReports")
.withBody(reportTask)
.request(ReportTask.class);
The problem is the combination of the error handler and the custom aggregation strategy in the split.
From Camel in Action book (5.3.5):
WARNING When using a custom AggregationStrategy with the Splitter,
it’s important to know that you’re responsible for handling
exceptions. If you don’t propagate the exception back, the Splitter
will assume you’ve handled the exception and will ignore it.
In your code you use an aggregation strategy extended from AbstractListAggregationStrategy. Let's look at the aggregate method in AbstractListAggregationStrategy:
@Override
public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
List<V> list;
if (oldExchange == null) {
list = getList(newExchange);
} else {
list = getList(oldExchange);
}
if (newExchange != null) {
V value = getValue(newExchange);
if (value != null) {
list.add(value);
}
}
return oldExchange != null ? oldExchange : newExchange;
}
If the first exchange is handled by the error handler, the resulting exchange (newExchange) will carry a number of properties set by the error handler (Exchange.EXCEPTION_CAUGHT, Exchange.FAILURE_ENDPOINT, Exchange.ERRORHANDLER_HANDLED and Exchange.FAILURE_HANDLED) and will have exchange.errorHandlerHandled=true. The methods getErrorHandlerHandled()/setErrorHandlerHandled(Boolean errorHandlerHandled) are available on the ExtendedExchange interface.
In this case, your split finishes with an exchange with errorHandlerHandled=true and it breaks the route.
The reason is described in the Camel exception clause manual:
If handled is true, then the thrown exception will be handled and
Camel will not continue routing in the original route, but break out.
To prevent this behaviour you can cast your exchange to ExtendedExchange and set errorHandlerHandled=false in the aggregation strategy's aggregate method. Then your route won't be broken and will continue.
@Override
public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
Exchange aggregatedExchange = super.aggregate(oldExchange, newExchange);
((ExtendedExchange) aggregatedExchange).setErrorHandlerHandled(false);
return aggregatedExchange;
}
The tricky part is that if the exchange handled by the error handler is not the first one in your aggregation strategy, you won't face any issue, because Camel uses the first exchange (the one without errorHandlerHandled=true) as the base for the aggregation.
I would like to run a route once and stop the context when the route has completed. Currently I do the usual Thread.sleep(3000) in the main Java class to leave some time for the route to finish, but it's obviously not accurate: my route may take 1 second or 20 seconds, and I cannot know that in advance.
The Java class:
public static void main(String[] args) throws Exception {
try (ClassPathXmlApplicationContext context = new ClassPathXmlApplicationContext("camel-context.xml")) {
CamelContext camelContext = SpringCamelContext.springCamelContext(context);
// context.start(); // apparently not necessary
camelContext.startRoute("route1");
try { Thread.sleep(3000); } catch (InterruptedException e) { e.printStackTrace(); }
// context.stop(); // apparently not necessary
}
}
The Spring xml:
<route id="route1" autoStartup="false">
<from uri="timer://runOnce?repeatCount=1&delay=3000" />
...
</route>
After reading http://camel.465427.n5.nabble.com/Need-control-back-in-the-Main-routine-so-that-we-can-terminate-JVM-td4483312.html#a4484845, especially the 4th post from Claus Ibsen, I was thinking of using camelContext.getRouteStatus() in a loop with a Thread.sleep(), but wherever I try to get the route status in the code (even after the Thread.sleep(3000)), the status is always "started". I don't know any other way to detect when the route is done.
What is the recommended way to stop the Camel context when a/all route(s) is/are completed, using Spring?
The route will never stop by itself, because routes do not have a "completed" state. They can only be started, stopped or paused. A route will keep running while it's in the started state unless you do something to change that.
To accomplish what you are looking for, you can do a couple of things:
You can use the ControlBus component and stop the route in the last step of your route. That way you can check (for example the way you mentioned, by polling camelContext.getRouteStatus()) when you should stop the context as well.
You can write a small Processor that stops the CamelContext whenever it receives an Exchange, and add it as the last step of your route (a sketch follows below).
Camel supports onCompletion callbacks, which can be equivalent to the option above. See the Camel onCompletion documentation.
The first option is probably the easiest for your use case; however, I would go for the second option. It seems cleaner to me.
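A minimal sketch of the second option, assuming only plain Camel APIs: the context should be stopped from a different thread than the one running the route, otherwise the stop call would wait for the in-flight exchange (and therefore for itself) to finish.
import org.apache.camel.CamelContext;
import org.apache.camel.Exchange;
import org.apache.camel.Processor;

// Add this as the last step of the route, e.g. .process(new StopContextProcessor())
public class StopContextProcessor implements Processor {
    @Override
    public void process(Exchange exchange) throws Exception {
        final CamelContext camelContext = exchange.getContext();
        // stop the context from a separate thread so the current exchange can finish
        new Thread(() -> {
            try {
                camelContext.stop();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }).start();
    }
}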
A more elegant way would be to use a synchronisation mechanism provided by Java, such as a CountDownLatch. The main thread will wait for the latch to be opened by the route's thread. Something like:
CountDownLatch latch = new CountDownLatch(1);
camelContext.addRoutes(createRoute(latch));
and somewhere in the createRoute method add a processor at the end of the route to open the latch. This worked perfectly for me.
.process(new Processor() {
@Override
public void process(Exchange exchange) throws Exception {
latch.countDown();
}
});
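For completeness, a sketch of the main-thread side under the same assumptions (camelContext and createRoute(latch) as in the snippet above):
// The main thread blocks on the latch until the route's final processor calls
// latch.countDown(), then shuts the context down cleanly.
CountDownLatch latch = new CountDownLatch(1);
camelContext.addRoutes(createRoute(latch));
camelContext.start();
latch.await();
camelContext.stop();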
In my application I have a generic Camel route such as the following:
from("direct:something").to("direct:outgoing")
and then dynamically in my code I deploy another route:
from("direct:outgoing").process(processor)
When flowing from route 1 to route 2, a new Exchange will be created. Is there an idiomatic way to correlate both? Should I set the Exchange.CORRELATION_ID header on the first route before sending it out?
This should definitely all be processed on the same exchange. Run this test and you'll see the same Camel Exchange, with the same properties, etc.
public class CamelExchangeTest {
public static void main(String[] args) throws Exception {
final Processor showExchangeIdProcessor = new Processor() {
@Override
public void process(Exchange exchange) throws Exception {
System.out.println(exchange.getExchangeId());
}
};
Main camelMain = new Main();
camelMain.addRouteBuilder(new RouteBuilder() {
@Override
public void configure() throws Exception {
from("timer:foo?period=1s&repeatCount=1")
.log("excgabge created!")
.process(showExchangeIdProcessor)
.to("direct:outgoing")
;
from("direct:outgoing")
.log("outgoing!")
.process(showExchangeIdProcessor)
;
}
});
camelMain.run();
}
}
Output:
ID-MYPC-55760-1411129552791-0-2
ID-MYPC-55760-1411129552791-0-2
So something else is going on. When you say "direct:outgoing", do you mean exactly that or is it something different - a different component perhaps?
When you say the route is created dynamically, how exactly is that done, and when (and why?)
From the Camel doc:
Some EIP patterns will spin off a sub message, and in those cases Camel will add a correlation id on the Exchange as a property with the key Exchange.CORRELATION_ID, which links back to the source Exchange. For example, the Splitter, Multicast, Recipient List, and Wire Tap EIPs do this.
Thus, Exchange.CORRELATION_ID is set by Camel and should not be set by your application. But feel free to set a custom header or property if you need to, such as:
exchange.setProperty("myProperty", myIdentifier);
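For example, a minimal sketch with a hypothetical property name; the property is set in the first route and is still visible in the second, because direct endpoints keep routing on the same Exchange:
from("direct:something")
    .setProperty("myCorrelationId", simple("${exchangeId}")) // hypothetical property name
    .to("direct:outgoing");

from("direct:outgoing")
    .log("same exchange, myCorrelationId = ${exchangeProperty.myCorrelationId}");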
Hope this doesn't sound ridiculous, but how can I discard a message in Camel on purpose?
Until now I sent them to the Log component, but by now I don't even want to log the discarded messages.
Is there a /dev/null Endpoint in Camel?
You can use the Message Filter EIP to filter out unwanted messages.
http://camel.apache.org/message-filter
There is no dev/null component.
There is also a <stop/> you can use in the route; when a message hits that, it stops being routed any further.
And the closest we have to a dev/null is to route to a log where you set level=OFF as an option.
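A minimal sketch of those options, assuming a hypothetical "discard" header set somewhere upstream:
from("direct:start")
    .choice()
        .when(header("discard").isEqualTo(true))
            // closest thing to /dev/null: a log endpoint with level=OFF writes nothing
            .to("log:discarded?level=OFF")
        .otherwise()
            .to("direct:continue")
    .end();

// Or with the Message Filter EIP: only wanted messages continue, everything else
// is silently dropped when the filter block ends.
from("direct:start2")
    .filter(header("discard").isNotEqualTo(true))
        .to("direct:continue")
    .end();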
With credit to my colleague (code name: cayha)...
You can use the Stub component as a Camel endpoint that is equivalent to /dev/null.
e.g.
activemq:route?abc=xyz
becomes
stub:activemq:route?abc=xyz
Although I am not aware of the inner workings of this component (and whether there are dangers of memory leaks, etc.), it works for me and I can see no drawbacks to doing it this way.
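In a route it is just a URI prefix; a minimal sketch with a hypothetical route:
// The message goes to an in-memory stub endpoint with no consumer instead of the
// real ActiveMQ destination, so it is effectively discarded.
from("direct:start")
    .to("stub:activemq:route?abc=xyz");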
You can put the URI (or a mock URI) into the configuration using the Properties component:
<camelContext ...>
<propertyPlaceholder id="properties" location="ref:myProperties"/>
</camelContext>
// properties
cool.end=mock:result
# cool.end=result
// route
from("direct:start").to("properties:{{cool.end}}");
I'm a little late to the party, but you can set a flag on the exchange and use that flag to skip only that message (by calling stop) if it doesn't meet your conditions.
#Override
public void configure() throws Exception {
from()
.process(new Processor() {
@SuppressWarnings("unchecked")
@Override
public void process(Exchange exchange) throws Exception {
exchange.setProperty("skip", false);
byte[] messageBytes = exchange.getIn().getBody(byte[].class);
if (<shouldNotSkip>) {
} else { //skip
exchange.setProperty("skip", true);
}
}
}).choice()
.when(exchangeProperty("skip").isEqualTo(true))
.stop()
.otherwise()
.to();
}
I am using an activemq route and need to send a reply in normal cases, so the exchange pattern is InOut. When I configure a filter in the route, I find that even when it does not pass the message to the next step, the callback is still executed (sending the reply), just like the behavior when calling stop(). It then sends the same message back to the reply queue, which is not desirable.
What I do is change the exchange pattern to InOnly conditionally and stop if I want to filter out the message, so no reply is sent. MAIN_ENDPOINT is a direct:main endpoint I defined to contain the normal business logic.
from("activemq:queue:myqueue" + "?replyToSameDestinationAllowed=true")
.log(LoggingLevel.INFO, "Correlation id is: ${header.JMSCorrelationID}; will ignore if not null")
.choice()
.when(simple("${header.JMSCorrelationID} == null"))
.to(MAIN_ENDPOINT)
.endChoice()
.otherwise()
.setExchangePattern(ExchangePattern.InOnly)
.stop()
.endChoice()
.end();
Note that the message is still consumed and is no longer in the queue. If you want to preserve the message in the queue (not consume it), you may just stop() or just filter() so that the callback (sending the reply, which is the original message) runs and puts the message back on the queue.
Using only filter() would be much simpler:
from("activemq:queue:myqueue" + "?replyToSameDestinationAllowed=true")
.log(LoggingLevel.INFO, "Correlation id is: ${header.JMSCorrelationID}; will ignore if not null")
.filter(simple("${header.JMSCorrelationID} == null"))
.to(MAIN_ENDPOINT);
I am working on a simple use case that would allow clients to dynamically register for events from a JMS endpoint. My current implementation looks like this:
...
public void addListener(Event event, Listener listener){
try {
camelContext.addRoutes(new RouteBuilder() {
@Override
public void configure() throws Exception {
from(event.from()).bean(listener);
}
});
} catch (Exception exception) {
exception.printStackTrace();
}
}
...
event.from() above would identify the endpoint from which the message would be consumed ("activemq:topic:market.stocks.update.ibm") and listener would be an implementation of the Listener interface.
I had envisaged a typical invocation as:
notifications.addListener(updateEvent, new Listener(){
public void listen(){
System.out.println("Hey! Something got updated");
}
});
Except, of course, none of the above works, since the Camel route expects a concrete bean as the recipient and hence the Camel context fails to start up.
What is the recommended way of adding bean endpoints dynamically?
Answered on the camel-users forum:
http://camel.465427.n5.nabble.com/Observer-Pattern-using-Camel-td4491726.html
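For reference, a hedged sketch of one common approach (not necessarily what the linked thread recommends): wrap the listener in a Processor so the route does not need a bean looked up from the registry.
public void addListener(Event event, Listener listener) {
    try {
        camelContext.addRoutes(new RouteBuilder() {
            @Override
            public void configure() throws Exception {
                from(event.from())
                    // delegate each incoming message to the caller's callback
                    .process(exchange -> listener.listen());
            }
        });
    } catch (Exception exception) {
        exception.printStackTrace();
    }
}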