Hi, I am new to ServiceMix. Can anyone point me in the right direction on how to read tables from an Oracle DB and write them to local files? (I want to use only a blueprint XML file that I can deploy directly into the ServiceMix deploy folder.)
Here is an example:
import org.apache.camel.LoggingLevel;
import org.apache.camel.builder.RouteBuilder;

public class JdbcReadFromOracleRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("sql:select full_name from common.company where code='1'?delay=3000&dataSource=oracleDataSource&outputType=SelectOne")
            .routeId("testOracleRead").autoStartup(true)
            .log(LoggingLevel.INFO, "JDBC oracle : ${body}")
            // the file endpoint takes a directory; the file name goes in the fileName option
            .to("file://c:/test?fileName=test.txt");
    }
}
You can use pax-jdbc (https://ops4j1.jira.com/wiki/spaces/PAXJDBC/overview) to create the datasource (https://ops4j1.jira.com/wiki/display/PAXJDBC/Oracle+driver+adapter).
Once you have the table's records, you can build the file contents with a template engine (e.g. Velocity (http://camel.apache.org/velocity.html)) or otherwise, and then save the file with the File component (http://camel.apache.org/file2.html).
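For a pure blueprint.xml deployment, the pieces could be wired together roughly as follows. This is a minimal sketch, assuming the pax-jdbc and camel-sql features are installed and the Oracle datasource is published as an OSGi service under the JNDI name oracleDataSource (that name, the query, and the output path are illustrative):

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

  <!-- look up the datasource created by pax-jdbc (the JNDI name is an assumption) -->
  <reference id="oracleDataSource" interface="javax.sql.DataSource"
             filter="(osgi.jndi.service.name=oracleDataSource)"/>

  <camelContext xmlns="http://camel.apache.org/schema/blueprint">
    <route id="testOracleRead">
      <!-- note the &amp; escaping required inside XML attribute values -->
      <from uri="sql:select full_name from common.company where code='1'?delay=3000&amp;dataSource=oracleDataSource&amp;outputType=SelectOne"/>
      <log message="JDBC oracle : ${body}"/>
      <to uri="file://c:/test?fileName=test.txt"/>
    </route>
  </camelContext>

</blueprint>

Dropping a file like this into the ServiceMix deploy folder should then be enough.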
A simple way is to use Camel's SQL Component.
<route id="foo">
<from uri="sql:select * from foo"/>
</route>
The returned data will be a List of Maps in the Exchange body, which you can then easily iterate through using Camel's Splitter EIP as the next step after the database select:
<split>
<simple>${body}</simple>
</split>
There are a number of ways you could build your file within the <split>. Individual field values can be accessed in a Simple expression with ${body[FIELD_NAME_HERE]}.
However, keep in mind that the exchange is new for each split iteration, so you can't assemble your file contents in anything carried on the exchange, such as a header or property. Instead, I'd use an aggregation strategy with the split, a pattern known as a Composed Message Processor. That way you can build your file in any format you want, row by row, within the old (aggregated) exchange body. Then, after the </split> end, simply use Camel's File component to save the exchange body to a file.
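A minimal sketch of that split-with-aggregation approach (the query, the column name FULL_NAME, the class name, and the output path are illustrative; the Camel 2.x AggregationStrategy package is assumed):

import java.util.Map;

import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.processor.aggregate.AggregationStrategy;

public class SqlToFileRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("sql:select * from foo?dataSource=myDataSource")
            .split(body(), new AggregationStrategy() {
                @Override
                public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
                    // each split exchange carries one row as a Map
                    Map<?, ?> row = newExchange.getIn().getBody(Map.class);
                    String line = row.get("FULL_NAME") + "\n";
                    if (oldExchange == null) {
                        newExchange.getIn().setBody(line); // first row starts the file
                        return newExchange;
                    }
                    oldExchange.getIn().setBody(oldExchange.getIn().getBody(String.class) + line);
                    return oldExchange;
                }
            })
            .end() // after the split, the aggregated body holds the whole file content
            .to("file://c:/test?fileName=foo.txt");
    }
}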
Another option is to execute a Processor after your SQL statement; the List of Maps in the exchange body will then be available to your Java code to iterate through and build your file, if you're more comfortable doing it that way. Camel's Splitter also has overhead, so if performance is an issue and you're processing thousands of records, this may be the better option. A sketch follows below.
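A hedged sketch of that Processor alternative (the class and column names are illustrative):

import java.util.List;
import java.util.Map;

import org.apache.camel.Exchange;
import org.apache.camel.Processor;

public class RowsToFileBodyProcessor implements Processor {
    @Override
    public void process(Exchange exchange) throws Exception {
        // the SQL component put a List of Maps (one Map per row) in the body
        List<?> rows = exchange.getIn().getBody(List.class);
        StringBuilder file = new StringBuilder();
        for (Object o : rows) {
            Map<?, ?> row = (Map<?, ?>) o;
            file.append(row.get("FULL_NAME")).append('\n');
        }
        exchange.getIn().setBody(file.toString());
    }
}

The route would then be .process(new RowsToFileBodyProcessor()) followed by the same File endpoint as above.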
Requisite disclaimer: I'm new to Camel and, frankly, new to developing generally. I'd like the string generated as the output of some function to be the source of my Camel route, which then gets written to some file. It's the first part that seems challenging: I have a string, so how do I turn it into a message? I can't write it into a file, nor can I use JMS. I feel like it should be easy and obvious, but I'm having a hard time finding a simple guide to help.
Some pseudo-code using the Java DSL:
def DesiredString() {return "MyString";}
// A camel route to be implemented elsewhere; I want something like:
class MyRoute() extends RouteBuilder {
source(DesiredString())
.to("file://C:/out/?fileName=MyFileFromString.txt");
}
I vaguely understand using the Bean component, but I'm not sure that solves the problem: I can execute the method that generates the string, but how do I turn that into a message? The "vaguely" is doing a lot of work there: I could well be missing something.
Thanks!
I'm not sure I understand your problem. There is a bit of confusion about what the String should become: the route source or the message body.
However, I guess that you want to write the String returned by your method into a File through a Camel route.
If this is correct, I have to clarify the route source first. A Camel route normally starts with
from("component:address")
So if you want to receive requests from remote clients via HTTP, it could be
from("jetty:http://localhost:8080/input")
This creates an HTTP server that listens on port 8080 for messages (a consumer-capable component such as Jetty is needed here; the plain HTTP client components cannot be used in a from).
In your case I don't know if the method that returns the String is in the same application as the Camel route. If it is, you can use the Direct component for "method-like" calls in the same process.
from("direct:input")
.to("file:...");
input is a name you can freely choose. You can then route messages to this endpoint from another Camel route, or send them with a ProducerTemplate:
ProducerTemplate template = camelContext.createProducerTemplate();
template.sendBody("direct:input", "This is my string");
The sendBody method takes the endpoint where to send the message and the message body. There are many more variants of sendBody with different signatures, depending on what you want to send along (headers etc.).
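For example, one of those variants lets you set a header along with the body (reusing the direct:input route from above):

// send a body plus a single header in one call
template.sendBodyAndHeader("direct:input", "This is my string",
        "CamelFileName", "MyFileFromString.txt");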
If you want to dive into Camel, get a copy of Camel in Action, 2nd edition. It contains everything you need to know about Camel.
Example: sending a String (as the body content) to store in a file using the Camel Java DSL:
CamelContext context = new DefaultCamelContext();
context.addRoutes(new RouteBuilder() {
    public void configure() {
        from("timer:StringSentToFile?period=2000")
            .setBody(simple(DesiredString()))
            // a single file: scheme; noop is a consumer-side option and is not needed here
            .to("file://C:/out/?fileName=MyFileFromString.txt")
            .log("completed route");
    }
});
context.start();
I have a class to run my route; the input comes from a queue (which is filled by a route that does a query and inserts the rows as messages on the queue).
These messages each contain a few headers:
- pdu_id: basically the prefix of the file names
- pad: the path the files reside in
What should happen: I want the files in that path matching "pdu_id".* collected into a tar; after that, a REST call is to be done to remove the documents' source.
I know a route has a from, but basically I need a route with a dynamic "from", and as the code example below shows, chaining two from()s doesn't do the trick.
The question is what to use instead; I could not find a similar case, but it may be that I didn't use the right Google search, in which case I'm deeply sorry.
public class ToDeleteTarAndDeleteRoute extends RouteBuilder {
@Override
public void configure() throws Exception
{
from("broker1:todelete.message_ids.queue")
.from("file:///?fileName=${in.header.pad}${in.header.pdu_id}.*")
.aggregate(new TarAggregationStrategy())
.constant(true)
.completionFromBatchConsumer()
.eagerCheckCompletion()
.to("file:///?fileName=${in.header.pad}${in.header.pdu_id}.tar")
.log("${header.pdu_id} tarred")
.setHeader(Exchange.HTTP_METHOD, constant("DELETE"))
.setHeader("Connection", constant("Close"))
.enrich()
.simple("http:127.0.0.1/restfuldb${header.pdu_id}?httpClient.authenticationPreemptive=true")
.log("${header.pdu_id} tarred and deleted.");
}
}
Yes, pollEnrich can help you do it. You would use it something like this:
from("broker1:todelete.message_ids.queue")
.aggregationStrategy(new TarAggregationStrategy())
.pollEnrich()
.simple("file:///?fileName=${in.header.pad}/${in.header.pdu_id}.*")
.unmarshal().string()
.to("file:///?fileName=${in.header.pad}/${in.header.pdu_id}.tar")
.log("${header.pdu_id} tarred")
.setHeader(Exchange.HTTP_METHOD, constant("DELETE"))
.setHeader("Connection", constant("Close"))
.enrich()
.simple("http:127.0.0.1/restfuldb${header.pdu_id}?httpClient.authenticationPreemptive=true")
.log("${header.pdu_id} tarred and deleted.");
The solution to the problem consisted of a few changes based on what @daBigBug answered:
- pollEnrich's simple expression uses antInclude rather than fileName;
- aggregate() is placed after pollEnrich, as each batch is the set of files rather than the input from the queue; the input from the queue only provides meta information on which the actions are based;
- aggregationStrategy() is not possible in a RouteBuilder; I used aggregate() instead;
- I removed the unmarshal(); I don't see why it would be needed, since the files can contain binary content.
from("broker1:todelete.message_ids.queue")
.pollEnrich()
.simple("file:${in.header.pad}?antInclude=${in.header.pdu_id}.*")
.aggregate(new TarAggregationStrategy())
.constant(true)
.completionFromBatchConsumer()
.eagerCheckCompletion()
.log("tarring to: ${header.pad}${header.pdu_id}.tar")
.setHeader(Exchange.FILE_NAME, simple("${header.pdu_id}.tar"))
.setHeader(Exchange.FILE_PATH, simple("${header.pad}"))
.to("file://ignored")
...(and the rest of the operations);
I now see the files are getting picked up and even placed in a tar; however, the file name of the tar is unexpected, as is the location (it's placed in ./ignored). Also, in the rest of the operations, it appears the exchange headers are lost.
If anyone can help figure out how to preserve the headers in a safe way, I'm much obliged. Should I open a new question for that, or rephrase this one?
I'm trying to do a file import process where a file is picked up in a subdirectory of a given folder (the subdirectory identifying the client the file is for), then the records are parsed, split, and sent on Hazelcast SEDA queues. I want to process each record as it's read off the Hazelcast SEDA queue, then return a status code (created, updated, or errored) which can be aggregated.
I'm also creating a job record when the file is first picked up and I want to update the job record with the final count of created, updated, and errors.
The JobProcessor below creates this record and sets the client Organization and Job objects in headers on the message. The CensusExcelDataFormat reads an Excel file and creates an Employee object for each line, then returns a Collection.
from("file:" + censusDirectory + "?recursive=true").idempotentConsumer(new SimpleExpression("file:name"), idempotentRepository)
.process(new JobProcessor(organizationService, jobService, Job.JobType.CENSUS))
.unmarshal(censusExcelDataFormat)
.split(body(), new ListAggregationStrategy()).parallelProcessing()
.to(ExchangePattern.InOut, "hazelcast:seda:process-employee-import").end()
.process(new JobCompletionProcessor(jobService))
.end();
from("hazelcast:seda:process-employee-import")
.idempotentConsumer(simple("${body.entityId}"), idempotentRepository)
.bean(employeeImporterJob, "importOrUpdate");
The problem I'm having is that the list aggregation happens immediately, and instead of getting a list of statuses I'm getting the same list of Employee objects. I want the Employee objects to be sent on the SEDA queue, the return values from the processing on the queue to be aggregated, and the result then run through the JobCompletionProcessor to update the Job record.
The behavior you are seeing is the default behavior. The Apache Camel Splitter documentation clearly states this in the "What the Splitter returns" section:
Camel 2.2 or older: The Splitter will by default return the last splitted message.
Camel 2.3 and newer: The Splitter will by default return the original input message.
For all versions: You can override this by supplying your own strategy as an AggregationStrategy. There is a sample on this page (Split aggregate request/reply sample). Notice it's the same strategy as the Aggregator supports. This Splitter can be viewed as having a built-in lightweight Aggregator.
So, as you can see, you are required to supply your own splitter aggregation strategy. To do this, create a new class that implements AggregationStrategy, something like the code below:
import org.apache.camel.Exchange;
import org.apache.camel.processor.aggregate.AggregationStrategy;

public class MyAggregationStrategy implements AggregationStrategy {

    @Override
    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
        if (oldExchange == null) {
            // this is null on the first exchange; do some initial work if needed
            return newExchange;
        }
        /*
         * Here you put your code to calculate failed, updated, created.
         */
        return oldExchange;
    }
}
You can then use your custom aggregation strategy by specifying it as in the following examples:
.split(body(), new MyAggregationStrategy()) //Java DSL
<split strategyRef="myAggregationStrategy"/> //XML Blueprint
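To make that concrete for this use case, here is a hedged sketch that counts statuses. It assumes each exchange coming back from the process-employee-import route carries a status String ("created", "updated" or "errored") in its body, and uses the Camel 2.x AggregationStrategy package:

import java.util.HashMap;
import java.util.Map;

import org.apache.camel.Exchange;
import org.apache.camel.processor.aggregate.AggregationStrategy;

public class StatusCountAggregationStrategy implements AggregationStrategy {

    @Override
    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
        String status = newExchange.getIn().getBody(String.class);
        if (oldExchange == null) {
            // first split result: start a fresh count map
            Map<String, Integer> counts = new HashMap<String, Integer>();
            counts.put(status, 1);
            newExchange.getIn().setBody(counts);
            return newExchange;
        }
        @SuppressWarnings("unchecked")
        Map<String, Integer> counts = oldExchange.getIn().getBody(Map.class);
        Integer current = counts.get(status);
        counts.put(status, current == null ? 1 : current + 1);
        return oldExchange;
    }
}

After the split ends, the body is the count map, which the JobCompletionProcessor could then use to update the Job record.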
Apache Camel 2.12.1
Is it possible to use the Camel CSV component with a pollEnrich? Every example I see is like:
from("file:somefile.csv").marshal...
Whereas I'm using the pollEnrich, like:
pollEnrich("file:somefile.csv", new CSVAggregator())
So within CSVAggregator I have no CSV... I just have a file, on which I have to do the CSV processing myself. So is there a way of hooking up the marshalling to the enrich bit somehow?
EDIT
To make this more general... e.g.:
from("direct:start")
.to("http:www.blah")
.enrich("file:someFile.csv", new CSVAggregationStrategy) <--how can I call marshal() on this?
...
public class CSVAggregator implements AggregationStrategy {
    @Override
    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
        /* Here I have:
           oldExchange = results of the http blah endpoint
           newExchange = the someFile.csv GenericFile object */
    }
}
Is there any way I can avoid this and use a marshal().csv sort of call on the route itself?
Thanks,
Mr Tea
You can use any endpoint in enrich. That includes direct endpoints pointing to other routes. Your example...
Replace this:
from("direct:start")
.to("http:www.blah")
.enrich("file:someFile.csv", new CSVAggregationStrategy)
With this:
from("direct:start")
.to("http:www.blah")
.enrich("direct:readSomeFile", new CSVAggregationStrategy);
from("direct:readSomeFile")
.to("file:someFile.csv")
.unmarshal(myDataFormat);
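If you don't have a ready-made data format, one option (assuming camel-csv is on the classpath) is its built-in one, i.e. .unmarshal().csv(), which yields a List<List<String>> body that your aggregation strategy can consume directly.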
I ran into the same issue and managed to solve it with the following code (note: I'm using the Scala DSL). My use case was slightly different: I wanted to load a CSV file and enrich it with data from an additional static CSV file.
from("direct:start") pollEnrich("file:c:/data/inbox?fileName=vipleaderboard.inclusions.csv&noop=true") unmarshal(csv)
from("file:c:/data/inbox?fileName=vipleaderboard.${date:now:yyyyMMdd}.csv") unmarshal(csv) enrich("direct:start", (current:Exchange, myStatic:Exchange) => {
// both exchange bodies will contain lists instead of the file handles
})
Here the second route is the one which looks for a file in a specific directory. It unmarshals the CSV data from any matching file it finds and enriches it with the direct route defined on the preceding line. That route pollEnriches with my static file, and as I don't define an aggregation strategy it just replaces the contents of the body with the static file data. I can then unmarshal that from CSV and return the data.
The aggregation function in the second route then has access to both files' CSV data as List<List<String>> instead of just a file.
I created a splitter for an exchange which decompresses a file and splits it on the basis of the number of lines (using the Unix command 'split' for it), returning a list of messages containing these parts.
Then I set some properties on these, as they need to be processed independently. Furthermore, the parent exchange needs to be processed after these parts are done. Now I need a few properties that are set on the children to be set on the parent too, but the only way I could think of was rewriting the setProperty part. Is there any way this could be achieved without redundancy?
I did try it the other way around, i.e., setting the properties on the parent and trying to access them on the children, but that isn't working either.
for (String feed: pc.parseUri("{{feedSources}}").split(",")) {
from("{{"+feed +".source}}").routeId(feed)
.setProperty("workDirectory", simple("{{workDirectory}}"))
.setProperty("feedName", simple(feed))
.setProperty("tableName", simple("{{"+feed+".tableName}}"))
.setProperty("options", simple("{{"+feed+".options}}"))
.split(beanExpression(new FileSplitter(), "split"))
.setProperty("dateFormat", simple("{{" + feed + ".dateFormat}}"))
.setProperty("headerFormat", simple("{{" + feed + ".headerFormat}}"))
.process(FileKeyProcessorFactory.getProcessor(feed))
.to("{{"+feed+".destination}}")
.end()
.process(new RSProcessor());
There are a few more properties to be set. Rewriting the code doesn't seem nice. What else could be an option?
Use an AggregationStrategy on the Splitter to merge changes from each split message into the outgoing message of the parent splitter.
You can read more about this in the Camel Split EIP documentation and under the other EIPs which support an AggregationStrategy as well.
For example:
<beans xmlns="http://www.springframework.org/schema/beans">
<bean id="groupExchangeAggregationStrategy"
class="org.apache.camel.processor.aggregate.GroupedExchangeAggregationStrategy"
/>
</beans>
<split strategyRef="groupExchangeAggregationStrategy">
<xpath>//</xpath>
</split>
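Since the question uses the Java DSL, here is a hedged sketch of a strategy that copies the child-set properties back onto the aggregated (parent-bound) exchange. The property names are the ones from the question; the Camel 2.x AggregationStrategy package is assumed:

import org.apache.camel.Exchange;
import org.apache.camel.processor.aggregate.AggregationStrategy;

public class PropagatePropertiesAggregationStrategy implements AggregationStrategy {

    @Override
    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
        Exchange result = (oldExchange == null) ? newExchange : oldExchange;
        // copy the properties set inside the split back onto the surviving exchange
        result.setProperty("dateFormat", newExchange.getProperty("dateFormat"));
        result.setProperty("headerFormat", newExchange.getProperty("headerFormat"));
        return result;
    }
}

It would be plugged in as .split(beanExpression(new FileSplitter(), "split"), new PropagatePropertiesAggregationStrategy()), so the code after .end() sees those properties without them being set twice.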