Camel version 2.17.3: I want to insert a splitter into a route so that the split messages remain split. If I have a "direct" route with a splitter, then when control returns from the inner route, I no longer have the split messages, only the original.
from("direct:in")
.transform(constant("A,B,C"))
.inOut("direct:inner")
.log("RET-VAL: ${in.body}");
from("direct:inner")
.split()
.tokenize(",")
.log("AFTER-SPLIT ${in.body}")
;
Based on the answer to a similar question, and Claus's comment below, I tried inserting my own aggregation strategy and always marking the group "COMPLETE". Only the last (split) message is returned to the outer route.
from("direct:in")
.transform(constant("A,B,C"))
.inOut("direct:inner")
.log("RET-VAL: ${in.body}");
from("direct:inner")
.split(body().tokenize(","), new MyAggregationStrategy())
.log("AFTER-SPLIT ${in.body}")
;
public static class MyAggregationStrategy implements AggregationStrategy {
    @Override
    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
        System.out.println("Agg called with: " + newExchange.getIn().getBody());
        newExchange.setProperty(Exchange.AGGREGATION_COMPLETE_CURRENT_GROUP, true);
        return newExchange;
    }
}
How do I get the messages to stay split, regardless of how the routes are nested?
See the Composed Message Processor EIP (http://camel.apache.org/composed-message-processor.html), in particular the splitter-only example.

In the AggregationStrategy you combine all the split sub-messages into one message, which is the result you want: the outgoing message of the splitter when it is done. How you do that depends on your messages and what you want to keep. For example, you can collect the sub-messages in a List, or if they are XML-based you can append the XML fragments, or something similar.
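For illustration, here is a minimal sketch of such a list-building strategy, assuming the sub-messages are plain Strings (the class name is illustrative, not from the original answer):

import java.util.ArrayList;
import java.util.List;

import org.apache.camel.Exchange;
import org.apache.camel.processor.aggregate.AggregationStrategy;

public class ListAggregationStrategy implements AggregationStrategy {
    @Override
    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
        String body = newExchange.getIn().getBody(String.class);
        if (oldExchange == null) {
            // First fragment: start the list and make it the splitter's result body
            List<String> list = new ArrayList<String>();
            list.add(body);
            newExchange.getIn().setBody(list);
            return newExchange;
        }
        // Later fragments: append to the existing list and keep the old exchange
        List<String> list = oldExchange.getIn().getBody(List.class);
        list.add(body);
        return oldExchange;
    }
}

Used as .split(body().tokenize(","), new ListAggregationStrategy()) in direct:inner, the outer route's RET-VAL log would then show the whole List rather than only the last fragment.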
Related
First, I'm fairly new to Camel, so if what (or how) I'm trying to do here is dumb, let me know.
CODE:
from("direct:one")
.to("mock:two")
.process(new Processor(){
#Override
public void process(Exchange exchange)throws Exception{
MyCustomObject obj = exchange.getIn().getBody(MyCustomObject.class);
exchange.getOut().setBody(obj.getOneOfTheFields());
}
})
.to("mock:three");
QUESTION:
This processor transforms an object into one of its fields. I know that I could replace it with a simple expression, but that would require me to put 'oneOfTheFields' in a string, and I don't want to do that.
Is there a shorter way to do this using Java code only?
This can be easily achieved using setBody and Camel's simple expression language:
from("direct:one")
.to("mock:two")
.setBody(simple("${body.fieldName}"))
.to("mock:three");
You specify the name of the field, and Camel will use the standard accessor mechanism (the getter) to set the body appropriately.
Can you not simply do this:
from("direct:one")
.to("mock:two")
.setBody(body().getOneOfTheFields())
.to("mock:three");
Let me know if this works.
I have a file with over 3 million pipe-delimited rows that I want to insert into a database. It's a simple table (no normalisation required).

Setting up the route to watch for the file, read it in using streaming mode, and split the lines is easy. Inserting rows into the table will also be a simple wiring job.

The question is: how can I do this using batched inserts? Let's say that 1000 rows is optimal. Given that the file is streamed, how would the SQL component know that the stream had finished? Say the file had 3,000,001 records; how can I set Camel up to insert the last stray record?

Inserting the lines one at a time can be done, but this will be horribly slow.
I would recommend something like this:
from("file:....")
.split("\n").streaming()
.to("any work for individual level")
.aggregate(body(), new MyAggregationStrategy().completionSize(1000).completionTimeout(50)
.to(sql:......);
I didn't validate all the syntax, but the plan would be to grab the file, split it with streaming, then aggregate in groups of 1000, with a timeout to catch that last, smaller group. Those aggregated groups could simply make the body a list of strings, or whatever format you will need for your batch SQL insert.
Here is a more accurate example:
@Component
@Slf4j
public class SQLRoute extends RouteBuilder {

    @Autowired
    ListAggregationStrategy aggregationStrategy;

    @Override
    public void configure() throws Exception {
        from("timer://runOnce?repeatCount=1&delay=0")
            .to("sql:classpath:sql/orders.sql?outputType=StreamList")
            .split(body()).streaming()
                .aggregate(constant(1), aggregationStrategy).completionSize(1000).completionTimeout(500)
                    .to("log:batch")
                    .to("google-bigquery:google_project:import:orders")
                .end()
            .end();
    }

    @Component
    class ListAggregationStrategy implements AggregationStrategy {
        public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
            List rows = null;
            if (oldExchange == null) {
                // First row
                rows = new LinkedList();
                rows.add(newExchange.getMessage().getBody());
                newExchange.getMessage().setBody(rows);
                return newExchange;
            }
            rows = oldExchange.getIn().getBody(List.class);
            Map newRow = newExchange.getIn().getBody(Map.class);
            log.debug("Current rows count: {}", rows.size());
            log.debug("Adding new row: {}", newRow);
            rows.add(newRow);
            oldExchange.getIn().setBody(rows);
            return oldExchange;
        }
    }
}
This can be done using the camel-spring-batch component (http://camel.apache.org/springbatch.html). The commit volume per step can be defined by the commitInterval, and the orchestration of the job is defined in a Spring config. It works quite well for use cases similar to your requirement.
Here's a nice example from GitHub: https://github.com/hekonsek/fuse-pocs/tree/master/fuse-pocs-springdm-springbatch/fuse-pocs-springdm-springbatch-bundle/src/main
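To make the commit-interval idea concrete, here is a minimal, hedged sketch of the chunked step in Spring Batch's Java config (the bean and method names are illustrative, and the reader/writer wiring is assumed); the chunk size plays the role of commitInterval:

import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.item.ItemReader;
import org.springframework.batch.item.ItemWriter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableBatchProcessing
public class BatchConfig {

    // Rows are read one at a time and written in committed batches of 1000;
    // Spring Batch flushes the final partial chunk (the stray record) automatically.
    @Bean
    public Step insertStep(StepBuilderFactory steps,
                           ItemReader<String> lineReader,
                           ItemWriter<String> rowWriter) {
        return steps.get("insertStep")
                .<String, String>chunk(1000)
                .reader(lineReader)
                .writer(rowWriter)
                .build();
    }
}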
I created a route to buffer/store marshalled objects (JSON) in files. This route (and the other route that reads the buffer) works fine.
Storing in the buffer:
from(DIRECT_IN).marshal().json().marshal().gzip().to(fileTarget());
Reading from the buffer:
from(fileTarget()).unmarshal().gzip().unmarshal().json().to("mock:a")
To reduce I/O, I want to aggregate many exchanges into one file. I tried to aggregate right after json() and also before it, so I added this after json() or from(...):
.aggregate(constant(true)).completionSize(20).completionTimeout(1000).groupExchanges()
In both cases I get conversion exceptions. How do I do this correctly? I would prefer a way without a custom aggregator. And it would be nice if the many exchanges/objects were simply aggregated into one JSON document (as a list of objects) or into one text file, one JSON object per line.
Thanks in advance.
Meanwhile I added a simple aggregator:
public class LineAggregator implements AggregationStrategy {

    @Override
    public final Exchange aggregate(final Exchange oldExchange, final Exchange newExchange) {
        // if first message of aggregation
        if (oldExchange == null) {
            return newExchange;
        }
        // else aggregate
        String oldBody = oldExchange.getIn().getBody(String.class);
        String newBody = newExchange.getIn().getBody(String.class);
        String aggregate = oldBody + System.lineSeparator() + newBody;
        oldExchange.getIn().setBody(aggregate);
        return oldExchange;
    }
}
The routes now look like this. To the buffer:
from(...) // marshal objects to json
    .marshal().json()
    .aggregate(constant(true), lineAggregator)
        .completionSize(BUFFER_PACK_SIZE)
        .completionTimeout(BUFFER_PACK_TIMEOUT)
    .marshal().gzip()
    .to(...);
From the buffer:
from(...)
    .unmarshal().gzip()
    .split().tokenize("\r\n|\n|\r")
    .unmarshal().json()
    .to(....);
But the question remains: is the custom aggregator necessary?
I have a requirement to design a RESTful service using RESTEasy. Clients can call this common service with any number of query parameters they want, and my REST code should be able to read these query params in some way. For example, if I have a book search service, clients can make the following calls:
http://domain.com/context/rest/books/searchBook?bookName=someBookName
http://domain.com/context/rest/books/searchBook?authorName=someAuthor&pubName=somePublisher
http://domain.com/context/rest/books/searchBook?isbn=213243
http://domain.com/context/rest/books/searchBook?authorName=someAuthor
I have to write a service class like below to handle this.
#Path("/books")
public class BookRestService{
// this is what I currently have, I want to change this method to in-take all the
// dynamic parameters that can come
#GET
#Path("/searchBook")
public Response searchBook(#QueryParam("bookName") String bookName,#QueryParam("isbn") String isbn) {
// fetch all such params
// create a search array and pass to backend
}
#POST
#Path("/addBook")
public Response addBook(......) {
//....
}
}
Sorry for the bad formatting (I couldn't work out how code formatting works in this editor!). As you can see, I need to change the searchBook() method so that it takes any number of query parameters.
I saw a similar post here, but couldn't find the right solution:
How to design a RESTful URL for search with optional parameters?
Could anyone throw some light on this, please?
The best thing to do in this case would be to use a DTO containing all the fields of your search criteria. For example, you mentioned 4 distinct parameters:
Book Name (bookName)
Author Name (authorName)
Publisher Name (pubName)
ISBN (isbn)
Create a DTO with the following annotation on every property you want a parameter mapped to:
public class CriteriaDTO {

    @QueryParam("isbn")
    private String isbn;

    // ... getters and setters for the other properties
}
Here is a method doing that, for your reference:
@GET
@Produces("application/json")
@Path("/searchBooks")
public ResultDTO search(@Form CriteriaDTO dto) {
}
Using the following URL will populate the CriteriaDTO's isbn property automatically:
your.server.ip:port/URL/Mapping/searchBooks?isbn=123456789&pubName=testing
A similar question was asked here: How do you map multiple query parameters to the fields of a bean on Jersey GET request?
I went with kensen john's answer (UriInfo) instead. It allowed me to simply iterate through a set to check which parameters were passed.
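For reference, a minimal sketch of that UriInfo approach (the class and variable names are illustrative): inject UriInfo with @Context and iterate over whatever query parameters the client actually sent.

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MultivaluedMap;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.UriInfo;

@Path("/books")
public class BookRestService {

    @GET
    @Path("/searchBook")
    public Response searchBook(@Context UriInfo uriInfo) {
        // All query parameters the client sent, keyed by name
        MultivaluedMap<String, String> params = uriInfo.getQueryParameters();
        for (String name : params.keySet()) {
            String value = params.getFirst(name);
            // build the search criteria from name/value here
        }
        return Response.ok().build();
    }
}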
I need to poll a directory and narrow the files with a case-insensitive expression.

With version 2.10, Camel added support for antInclude, which is what I looked into; unfortunately antInclude is case sensitive, as are the other filtering expressions. Implementing GenericFileFilter is not an option, since the filtering patterns are not known at compile time: I read them from a database at runtime, and I have multiple file rules, each with a different pattern.

I programmatically create several routes in a loop, where each file route has a different case-insensitive filtering pattern. I would appreciate it if the Camel file component supported case-insensitive expressions; is there any other way, short of writing a new file component for Camel?
public class MyRouter extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        Vector<FileTransferEntity> list = TransferDAO.getTransferList();
        for (FileTransferEntity t : list) {
            fromF("ftp://ftpuser@ftpserver/some-directory?antInclude=%s", t.getFileMask())
                .toF("mock:result"); // depending on t, the action will change.
        }
    }
}
You should be able to use a custom filter instead. See camel-file2 for details, or this example:
https://svn.apache.org/repos/asf/camel/trunk/camel-core/src/test/java/org/apache/camel/component/file/FileConsumerFileFilterTest.java
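To make that concrete, here is a minimal sketch of such a case-insensitive filter (the class name and wiring are illustrative, not from the linked test). Because the pattern is a constructor argument, it can be loaded from the database at runtime, one filter instance per route:

import java.util.regex.Pattern;

import org.apache.camel.component.file.GenericFile;
import org.apache.camel.component.file.GenericFileFilter;

public class CaseInsensitiveFileFilter<T> implements GenericFileFilter<T> {

    private final Pattern pattern;

    public CaseInsensitiveFileFilter(String regex) {
        // CASE_INSENSITIVE makes the match ignore case regardless of the mask's case
        this.pattern = Pattern.compile(regex, Pattern.CASE_INSENSITIVE);
    }

    @Override
    public boolean accept(GenericFile<T> file) {
        return pattern.matcher(file.getFileName()).matches();
    }
}

Each instance would then be registered in the registry under its own id and referenced from the endpoint URI via the filter option (for example ?filter=#myFilterForRouteX), which both the file and FTP components support.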