Camel: check a file's last modified date frequently using a scheduling service

I want to use Camel in my project to check a file's last modified date every xx minutes using Camel's scheduling/timer service.
I read the documentation for the file component and it seems there is a polling function; however, there is also a timer component in Camel.
Does anyone have a code example for this requirement?

I would use the file consumer endpoint.
Something like this:
file:c:/foldername?delay=5000
This will scan the folder every 5 seconds for files and, for each file, send a message on the route.
You would probably need to store the previous modified times somewhere, such as a text file or database, and then compare the modified date passed in the message to the one stored there.
A rough example of this would look like follows:
<route id="CheckFileRoute">
    <from uri="file:d:/RMSInbox?delay=5000&amp;readLock=changed"/>
    <log message="${file:modified}"/>
    <bean ref="CompareDates"/>
</route>
The file consumer will place a lot of information about the file in the message headers, such as the modified date. See this link for more details on the header variables: http://camel.apache.org/file2.html
The CompareDates bean would be a Java class that acts like a processor, with a structure like this:
import java.util.Date;
import java.util.Map;

import org.apache.camel.Body;
import org.apache.camel.Handler;
import org.apache.camel.Headers;

public class CompareDates {

    @Handler
    public void checkDates(@Body Object msgBody, @Headers Map hdr) {
        Date newDate = (Date) hdr.get("CamelFileLastModified");
        Date oldDate = readFromFileOrDatabase(); // placeholder: load the previously stored date
        if (newDate.after(oldDate)) {
            // the date has changed, look busy
        }
    }
}
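If you go with the text file option, a minimal sketch of that persistence step could look like the following (the class name and the lastModified.txt location are just assumptions for illustration):

import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.io.Writer;
import java.util.Date;

// Hypothetical helper that keeps the last seen modified time in a text file.
public class ModifiedDateStore {

    private static final File STORE = new File("lastModified.txt"); // assumed location

    public Date readLastDate() throws IOException {
        if (!STORE.exists()) {
            return new Date(0); // nothing stored yet, so treat every file as changed
        }
        try (BufferedReader reader = new BufferedReader(new FileReader(STORE))) {
            return new Date(Long.parseLong(reader.readLine().trim()));
        }
    }

    public void writeLastDate(Date date) throws IOException {
        try (Writer writer = new FileWriter(STORE)) {
            writer.write(Long.toString(date.getTime()));
        }
    }
}

The CompareDates bean above could then call readLastDate() to get oldDate and writeLastDate(newDate) once the change has been handled.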
Hope this gets you going.

Related

Add a new column and row to existing CSV data in Apache Camel

I am trying to add a new column and row/content to existing CSV data, but I am unable to achieve this in Apache Camel.
I have used the camel-csv component in the code; below is the snippet:
<unmarshal>
<csv delimiter="|" useMaps="true" lazyLoad="true" />
</unmarshal>
When unmarshalling, I get "org.apache.camel.dataformat.csv.CsvUnmarshaller$CsvIterator" as the class name of the body, but I am unable to get at the exchange or cast the body to this class, as it is an abstract class.
Let me know if we can use the bean component, and how to add the column and content to the existing CSV data.
I can suggest an alternative solution: you can use the BeanIO Data Format.
E.g :
BeanIODataFormat dataFormat = new BeanIODataFormat("classpath:beanio/mappings.xml", "ContactsCSV");
from("direct:convert-to-csv")
.marshal(dataFormat)
.to("file:xxxx")
.end();
You can find the data format details in the docs; there is an examples file in there as well.
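For completeness, a rough sketch of feeding that route from Java (Contact is a hypothetical POJO that would have to match whatever your mappings.xml declares for the ContactsCSV stream, and camelContext is your running CamelContext):

import java.util.Arrays;

import org.apache.camel.CamelContext;
import org.apache.camel.ProducerTemplate;

// Send a list of mapped POJOs to the route; BeanIO marshals them to CSV.
ProducerTemplate template = camelContext.createProducerTemplate();
template.sendBody("direct:convert-to-csv",
        Arrays.asList(new Contact("Jane", "jane@example.com")));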

How to implement an ITestContext listener without adding it as a method argument in TestNG (from XML), for a PDFBox utility

I am automating a Salesforce application using Selenium and TestNG. I am implementing a utility using Apache PDFBox where I paste all my screenshots into a PDF to make the client happy.
My logic is: I create screenshots in each method as 1.png, 2.png, and so on up to n.png, and paste them into the PDF using PDFBox methods.
The problem is that the number of screenshots is variable, so I implemented ITestContext: I set a counter attribute to the maximum number and pass it to my after-method, where I retrieve the counter and paste that many screenshots. Something like this:
class Login {

    @Test
    public void myMethod(ITestContext context) {
        commonfunctions.savescreenshot("1.png");
        commonfunctions.savescreenshot("2.png");
        commonfunctions.savescreenshot("n.png");
        context.setAttribute("Counter", "n");
    }

    @AfterMethod
    public void myAfterMethod(ITestContext context) {
        String path = ...; // path of my test method
        String methodCounter = (String) context.getAttribute("Counter");
        PDFUtility.addImagetoPDF(path, Integer.parseInt(methodCounter));
    }
}
The problem is I have many methods to implement and I don't want the ITestContext listener as an argument to each method. Can I pass it in the XML file and use it for all methods?
I hope I have provided all the details.
If you need to get hold of the current ITestContext object (which is a representation of the current <test> tag being executed), you don't need to pass it as a parameter to your @Test method.
You can get access to it from within a #Test annotated test method via something like this:
org.testng.ITestContext context =
        org.testng.Reporter.getCurrentTestResult().getTestContext();
This way you don't need to pass the org.testng.ITestContext object as a parameter to your @Test method.
Can i pass it in xml file and use it for all methods?
No, you cannot pass the ITestContext object via the XML file.
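Putting that together with the counter idea from the question, a minimal sketch could look like this (PDFUtility and the screenshot helper are the question's own classes; the count value is hypothetical):

import org.testng.ITestContext;
import org.testng.Reporter;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.Test;

public class LoginTest {

    @Test
    public void myMethod() {
        // No ITestContext parameter needed; fetch it from the current test result.
        ITestContext context = Reporter.getCurrentTestResult().getTestContext();
        // ... save screenshots 1.png to n.png here ...
        context.setAttribute("Counter", "3"); // hypothetical count
    }

    @AfterMethod
    public void myAfterMethod(ITestContext context) {
        // TestNG natively injects ITestContext into configuration methods,
        // so only the @Test methods stay free of the extra parameter.
        String methodCounter = (String) context.getAttribute("Counter");
        // PDFUtility.addImagetoPDF(path, Integer.parseInt(methodCounter));
    }
}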

Apache Camel: Set 'description' for Processor using Java DSL

I can add a custom id and description to a <log> or most other types of processor in XML like this:
<log message="Hello" id="logId">
<description>Description of logging</description>
</log>
I've tried to do a similar thing with Java DSL:
from("direct:1")
.log("Hello")
.id("logId")
.description("Description of logging")
But the description is applied to the route, not the processor. (The routeId() method is provided to apply the id to the route instead of the processor.)
Looking through the Camel source code, ProcessorDefinition's definition of id() has a bunch of code to apply it to the last block or output, preceded by this comment:
// set it on last output as this is what the user means to do
// for Block(s) with non empty getOutputs() the id probably refers
// to the last definition in the current Block
There is no similar definition of description() in ProcessorDefinition, so this simpler method is inherited from OptionalIdentifiedDefinition:
public T description(String text) {
    if (text != null) {
        if (description == null) {
            description = new DescriptionDefinition();
        }
        description.setText(text);
    }
    return (T) this;
}
It seems to me that the Java DSL does not provide a way to set the description for log or other processors. Is this true, or did I miss something?
(If it is true, should I work on a patch to improve the DSL?)

Apache Camel: How to use "done" files to identify that writing records into a file is over and it can be moved

As the title suggests, I want to move a file into a different folder after I am done writing DB records to it.
I have already looked into several questions related to this: Apache camel file with doneFileName
But my problem is a little different since I am using split, streaming and parallelProcessing to fetch the DB records and write them to a file. I am not able to work out when and how to create the done file alongside the parallelProcessing. Here is the code snippet:
My route to fetch records and write it to a file:
from(<ROUTE_FETCH_RECORDS_AND_WRITE>)
    .setHeader(Exchange.FILE_PATH, constant("<path to temp folder>"))
    .setHeader(Exchange.FILE_NAME, constant("<filename>.txt"))
    .setBody(constant("<sql to fetch records>&outputType=StreamList"))
    .to("jdbc:<endpoint>")
    .split(body(), <aggregation>).streaming().parallelProcessing()
    .<some processors>
    .aggregate(header(Exchange.FILE_NAME), (o, n) -> {
        <file aggregation>
        return o;
    }).completionInterval(<some time interval>)
    .toD("file://<to the temp file>")
    .end()
    .end()
    .to("file:" + <path to temp folder> + "?doneFileName=${file:header." + Exchange.FILE_NAME + "}.done"); // this line is just for trying out doneFileName
In my aggregation strategy for the splitter, I have code that counts the records processed and prepares the response that is sent back to the caller.
In the outer aggregate, I have code for aggregating the DB rows and then writing them to the file.
And here is the file listener for moving the file:
from("file://<path to temp folder>?delete=true&include=<filename>.*.TXT&doneFileName=done")
.to(file://<final filename with path>?fileExist=Append);
Doing something like this is giving me this error:
Caused by: [org.apache.camel.component.file.GenericFileOperationFailedException - Cannot store file: <folder-path>/filename.TXT] org.apache.camel.component.file.GenericFileOperationFailedException: Cannot store file: <folder-path>/filename.TXT
at org.apache.camel.component.file.FileOperations.storeFile(FileOperations.java:292)[209:org.apache.camel.camel-core:2.16.2]
at org.apache.camel.component.file.GenericFileProducer.writeFile(GenericFileProducer.java:277)[209:org.apache.camel.camel-core:2.16.2]
at org.apache.camel.component.file.GenericFileProducer.processExchange(GenericFileProducer.java:165)[209:org.apache.camel.camel-core:2.16.2]
at org.apache.camel.component.file.GenericFileProducer.process(GenericFileProducer.java:79)[209:org.apache.camel.camel-core:2.16.2]
at org.apache.camel.util.AsyncProcessorConverterHelper$ProcessorToAsyncProcessorBridge.process(AsyncProcessorConverterHelper.java:61)[209:org.apache.camel.camel-core:2.16.2]
at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:141)[209:org.apache.camel.camel-core:2.16.2]
at org.apache.camel.management.InstrumentationProcessor.process(InstrumentationProcessor.java:77)[209:org.apache.camel.camel-core:2.16.2]
at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:460)[209:org.apache.camel.camel-core:2.16.2]
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:190)[209:org.apache.camel.camel-core:2.16.2]
at org.apache.camel.processor.Pipeline.process(Pipeline.java:121)[209:org.apache.camel.camel-core:2.16.2]
at org.apache.camel.processor.Pipeline.process(Pipeline.java:83)[209:org.apache.camel.camel-core:2.16.2]
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:190)[209:org.apache.camel.camel-core:2.16.2]
at org.apache.camel.component.seda.SedaConsumer.sendToConsumers(SedaConsumer.java:298)[209:org.apache.camel.camel-core:2.16.2]
at org.apache.camel.component.seda.SedaConsumer.doRun(SedaConsumer.java:207)[209:org.apache.camel.camel-core:2.16.2]
at org.apache.camel.component.seda.SedaConsumer.run(SedaConsumer.java:154)[209:org.apache.camel.camel-core:2.16.2]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)[:1.8.0_144]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)[:1.8.0_144]
at java.lang.Thread.run(Thread.java:748)[:1.8.0_144]
Caused by: org.apache.camel.InvalidPayloadException: No body available of type: java.io.InputStream but has value: Total number of records discovered: 5
What am I doing wrong? Any inputs will help.
PS: I am new to Apache Camel.
I would guess that the error comes from .toD("file://<to the temp file>") trying to write a file but finding the wrong type of body (the String "Total number of records discovered: 5" instead of an InputStream).
I don't understand why you have one file destination inside the splitter and one outside of it.
As @claus-ibsen suggested, try to remove the extra .aggregate(...) in your route. To split and re-aggregate, it is sufficient to reference the aggregation strategy in the splitter. Claus also pointed to an example in the Camel docs:
from(<ROUTE_FETCH_RECORDS_AND_WRITE>)
    .setHeader(Exchange.FILE_PATH, constant("<path to temp folder>"))
    .setHeader(Exchange.FILE_NAME, constant("<filename>.txt"))
    .setBody(constant("<sql to fetch records>&outputType=StreamList"))
    .to("jdbc:<endpoint>")
    .split(body(), <aggregationStrategy>)
        .streaming().parallelProcessing()
        // the processors below get individual parts
        .<some processors>
    .end()
    // The end statement above ends split-and-aggregate. From here
    // you get the re-aggregated result of the splitter.
    // So you can simply write it to a file and also write the done-file
    .to(...);
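The final .to(...) above could itself create the done file, since the file producer supports a doneFileName option with file-name placeholders. A sketch of that last line (path and file name reused from the question):

// Writes <filename>.txt first, then <filename>.txt.done once the file is stored.
.to("file:<path to temp folder>?doneFileName=${file:name}.done");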
However, if you need to control the aggregation sizes, you have to combine the splitter and the aggregator. That would look something like this:
from(<ROUTE_FETCH_RECORDS_AND_WRITE>)
    .setHeader(Exchange.FILE_PATH, constant("<path to temp folder>"))
    .setHeader(Exchange.FILE_NAME, constant("<filename>.txt"))
    .setBody(constant("<sql to fetch records>&outputType=StreamList"))
    .to("jdbc:<endpoint>")
    // No aggregationStrategy here so it is a standard splitter
    .split(body())
        .streaming().parallelProcessing()
        // the processors below get individual parts
        .<some processors>
    .end()
    // The end statement above ends the split. From here
    // you still get individual records from the splitter.
    .to("seda:aggregate");

// new route to do the controlled aggregation
from("seda:aggregate")
    // constant(true) is the correlation predicate => collect all messages in 1 aggregation
    .aggregate(constant(true), new YourAggregationStrategy())
        .completionSize(500)
    .end() // not sure if this 'end' is needed
    // write files with 500 aggregated records here
    .to("...");

Camel: split the message into multiple processing paths

My input message:
<file>
    <node1>
    ...
    </node1>
    ...
    <node10>
    ...
    </node10>
</file>
I want to:
Process the whole file using a stylesheet and output to Dest A
For a few elements in the file (say node1, node3 and node7), extract them and output the content of each individually to Dest B
I know how to process the file using the stylesheet, but I'm at a loss how to do the other, let alone combine the two.
I'm looking for something like:
from("direct:start").magic_split(
    to("xslt:mysheet").to("destA"),
    setBody(xpath("//node1")).to("destB"),
    setBody(xpath("//node3")).to("destB"),
    setBody(xpath("//node7")).to("destB")
).transform(constant(responseOK));
If you can split the XML, then each node is put in its own exchange; if you can identify the node after it is split, you can use a Content Based Router to route the exchange to the appropriate destination. This might require a custom splitter bean, or you might be able to do it with xpath if the nodes are named nodeX, where X is a number.
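If xpath is viable, one sketch is to select only the wanted nodes in the split expression itself, so no separate router step is needed (node names taken from the question; the direct: endpoint name is made up):

from("direct:extract")
    // one exchange per matched node; the union picks out just the wanted elements
    .split(xpath("//node1 | //node3 | //node7"))
        .to("destB")
    .end();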
Use the Wire Tap pattern. This pattern allows you to route messages to a separate location while they are being forwarded to the ultimate destination:
from("cxf:bean:submitOrder")
.wireTap("direct:tap")
.beanRef("customBean2");
from("direct:tap")
.to("xslt:my.xsl")
.beanRef("customBean1);
For what I needed, custom beans did the job well. The only thing I had to figure out was how to restore the message to the original content. I guess there is a more elegant way of doing it, but this works fine:
from("cxf:bean:submitOrder")
.setProperty("originalData", simple("${in.body}")) //save original input msg
.to("xslt:my.xsl").beanRef("customBean1)
.setBody(simple("${property.originalData}")) //restore original message
.beanRef("customBean2");
