How can Apache Camel be used to monitor file changes?

I would like to monitor all of the files in a given directory for changes, i.e. an updated timestamp. This use case seems natural for Camel using the file component, but I can't seem to find a way to configure this behavior.
A uri like:
file:/some/directory
will consume the files in the provided directory but will delete them.
A uri like:
file:/some/directory?noop=true
consumes each file once when it is added or when the route is started.
It's surprising that there isn't an option along the lines of
consumeOnChange=true
Is there a straightforward way to monitor file changes and not delete the file after consuming?

You can do this by setting up the idempotentKey to tell Camel how a file is considered changed, for example if the file size changes, its timestamp changes, etc.
See more details in the Camel file documentation: https://camel.apache.org/components/latest/file-component.html
See the section Avoiding reading the same file more than once (idempotent consumer), and read about the idempotent and idempotentKey options.
So something like:
from("file:/somedir?noop=true&idempotentKey=${file:name}-${file:size}")
Or
from("file:/somedir?noop=true&idempotentKey=${file:name}-${file:modified}")
You can read here about the various ${file:xxx} tokens you can use: http://camel.apache.org/file-language.html
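Putting this together, a minimal route sketch could look like the following (the directory path is illustrative; re-consuming on change relies on the idempotent key including something that changes, such as the modified timestamp):
// Re-consume a file whenever its last-modified timestamp changes,
// without deleting it from the source directory (noop=true).
from("file:/some/directory?noop=true&idempotentKey=${file:name}-${file:modified}")
    .log("Detected new or changed file: ${file:name}");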

Setting noop to true will result in Camel setting idempotent=true as well, despite the fact that idempotent is false by default.
The simplest solution to monitor files would be:
from("file:path?noop=true&idempotent=false&delay=60s")
This will poll all files in the given directory for changes every minute.
This can be found in the Camel documentation at: http://camel.apache.org/file2.html.

I don't think Camel supports that specific feature, but with the existing options you can come up with a similar solution for monitoring a directory.
What you need to do is set a small delay value for checking the directory and maintain a repository of the already read files. Depending on how you configure the repository (by size, by filename, by a mix of them...), this solution can give you information about new files and modified files; a sketch follows below. As a caveat, it would be consuming the files in the directory very often.
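A sketch of that idea, assuming Camel 3.x and its in-memory idempotent repository (the bean name, cache size, delay and directory are illustrative; context is an existing CamelContext):
// Track already-consumed files by name+size in a bounded in-memory repository,
// so a file whose size changes is picked up again on one of the frequent polls.
MemoryIdempotentRepository seenFiles = MemoryIdempotentRepository.memoryIdempotentRepository(1000);
context.getRegistry().bind("seenFiles", seenFiles);

from("file:/some/directory?noop=true&delay=5000"
        + "&idempotentRepository=#seenFiles"
        + "&idempotentKey=${file:name}-${file:size}")
    .log("New or changed file: ${file:name}");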
Maybe you could use other solutions than Camel, like Apache Commons VFS2 (I wrote an explanation about how to use it for this scenario: WatchService locks some files?).

I faced the same problem, i.e. I wanted to copy updated files as well (along with new files). Below is my configuration:
public static void main(String[] a) throws Exception {
    CamelContext cc = new DefaultCamelContext();
    cc.addRoutes(createRouteBuilder());
    cc.start();
    Thread.sleep(10 * 60 * 1000);
    cc.stop();
}

protected static RouteBuilder createRouteBuilder() {
    return new RouteBuilder() {
        public void configure() {
            from("file://D:/Production"
                    + "?idempotent=true"
                    + "&idempotentKey=${file:name}-${file:size}"
                    + "&include=.*.log"
                    + "&noop=true"
                    + "&readLock=changed")
                .to("file://D:/LogRepository");
        }
    };
}
My testing steps:
Run the program: it copies a few .log files from D:/Production to D:/LogRepository and then continues to poll the D:/Production directory.
I opened an already-copied log, say A.log, from D:/Production (since noop=true, nothing is moved), edited it with an editor tool so that the file size doubled, and saved it.
At this point I think Camel is supposed to copy that particular file again, since its size is modified and in my route definition I used "idempotent=true&idempotentKey=${file:name}-${file:size}&readLock=changed". But Camel ignores the file.
When I use TRACE for logging it says "Skipping as file is already in progress...", but I did not find any lock file in the D:/Production directory when I edited and saved the file.
I also checked that Camel still ignores the file if I replace A.log (with the same name but a bigger size) in the D:/Production directory from outside.
But I found, everything is working as expected if I remove noop=true option.
Am I missing something?

If you want to monitor file changes in Camel, use the file-watch component.
Example -> RECURSIVE WATCH ALL EVENTS (FILE CREATION, FILE DELETION, FILE MODIFICATION):
from("file-watch://some-directory")
.log("File event: ${header.CamelFileEventType} occurred on file ${header.CamelFileName} at ${header.CamelFileLastModified}");
You can see the complete documentation here:
Camel file-watch component
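If you only care about modifications, the watch can be narrowed with the component options (a hedged sketch; the directory is illustrative and the events and recursive options are assumed to be available in your Camel version):
from("file-watch://some-directory?events=MODIFY&recursive=true")
    .log("Changed: ${header.CamelFileName}");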

Related

Access Shared Preferences externally / Store a value into a new file and access it externally

I have the following two methods and I am using them to store a special value locally and be able to access it on application restart:
(Store value locally:)
private void SaveSet(string key, string value)
{
    ISharedPreferences prefs = PreferenceManager.GetDefaultSharedPreferences(this);
    ISharedPreferencesEditor prefEditor = prefs.Edit();
    prefEditor.PutString(key, value);
    // prefEditor.Commit(); // applies changes synchronously on older APIs
    prefEditor.Apply(); // applies changes asynchronously on newer APIs
}
(Read it again:)
private string RetrieveSet(string key)
{
    ISharedPreferences prefs = PreferenceManager.GetDefaultSharedPreferences(this);
    return prefs.GetString(key, null);
}
This works perfectly. Now, is it possible to access and edit these Shared Preferences externally? Unfortunately, I cannot find any file when searching in the folder
Phone\Android\data\com.<company_name>.<application_name>\files
nor anywhere else. I want / try to edit this value from my computer, after connecting the phone to it. Is this possible?
Alternatively: Can anyone maybe show me how to create a new file in the given path above, write/read it programmatically, and have it stay there even if the application is closed and started again? Then I could edit this file with my computer somehow.
I tried it with the following code, but unfortunately it doesn't work: no file is created, or at least I cannot see it in the given path above:
//"This code snippet is one example of writing an integer to a UTF-8 text file to the internal storage directory of an application:"
public void SaveValueIntoNewFile(int value)
{
    var backingFile = Path.Combine(System.Environment.GetFolderPath(System.Environment.SpecialFolder.Personal), "newFile.txt");
    using (var writer = System.IO.File.CreateText(backingFile))
    {
        writer.WriteLine(value.ToString());
    }
}
I would be very happy about any answer. Thanks in advance and best regards.
What you're looking for is where Android stores the Shared Preferences file for applications that make use of its default PreferenceManager.
I'd refer to this SO post, which answers your question pretty well:
SharedPreferences are stored in an xml file in the app data folder,
i.e.
/data/data/YOUR_PACKAGE_NAME/shared_prefs/YOUR_PREFS_NAME.xml
or the default preferences at:
/data/data/YOUR_PACKAGE_NAME/shared_prefs/YOUR_PACKAGE_NAME_preferences.xml
SharedPreferences added during runtime are not stored in the Eclipse project.
Note: Accessing /data/data/ requires superuser privileges.
A simple method is to use the Android Device Monitor; you can open it by clicking Tools --> Android --> Android Device Monitor.
For example:
The path in my device is as follows:
/data/data/YOUR_PACKAGE_NAME/shared_prefs/YOUR_PACKAGE_NAME_preferences.xml
And we notice three buttons in the upper right corner of the picture.
The first one is used to pull a file from the device, the second one is used to push a file onto the device, and the last one is used to delete the selected preferences.xml file.
So we can pull the preferences.xml file from the current device to our computer, edit it as we want, and then push the updated preferences.xml back to the folder. Then we will get the new values from the preferences.xml file.

Camel File Consumer when folder is not accessible

Hi all,
when using the file consumer on a folder which the Camel context does not have access to (e.g. after chmod a-rwx test/locked), the context starts as normal (not even warnings) but also doesn't pick up any files in the folder (well, how should it).
Is there a way to have the file component throw an error on startup (or at least on polling the folder) when it can't access the folder?
Otherwise I will have to check it using manual java code before configuring the route, which I would like to avoid.
Thanks and greets
Chris
"Otherwise I will have to check it using manual java code before configuring the route, which I would like to avoid."
You can use a PollingConsumerPollStrategy with the endpoint to check whether the path has access issues, and then raise an error.
public class FileAccessPollingConsumerPollStrategy extends DefaultPollingConsumerPollStrategy {
    @Override
    public boolean begin(Consumer consumer, Endpoint endpoint) {
        File pollingDirectory = ((FileEndpoint) endpoint).getFile();
        // check your access with pollingDirectory and raise any error
        return true;
    }
}
Then configure your Camel endpoint: from("file://inbox/?pollStrategy=#fileAccessPollingConsumerPollStrategy")
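For the #-reference to resolve, the strategy has to be bound in the registry (or declared as a Spring bean). A minimal sketch with a plain CamelContext, assuming Camel 3.x (route and bean names are illustrative):
CamelContext context = new DefaultCamelContext();
// make "#fileAccessPollingConsumerPollStrategy" resolvable from the endpoint URI
context.getRegistry().bind("fileAccessPollingConsumerPollStrategy",
        new FileAccessPollingConsumerPollStrategy());
context.addRoutes(new RouteBuilder() {
    @Override
    public void configure() {
        from("file://inbox/?pollStrategy=#fileAccessPollingConsumerPollStrategy")
            .to("log:polled");
    }
});
context.start();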
I settled on executing the following code before the route is configured and thus making sure it can be accessed:
Path path = Paths.get(folderPath);
// The only case we do not allow is when the folder exists and is not readable
boolean canAccess = !Files.exists(path) || Files.isReadable(path);
if (!canAccess) {
    throw new IllegalArgumentException("Could not initialize route, because folder \""
            + baseFolder + "\" can not be accessed. This might happen when it does not exist "
            + "or there are no read permission for the current user");
}
I also created an issue in the apache jira to add the feature in the future:
https://issues.apache.org/jira/browse/CAMEL-13483
Thanks everybody for the suggestions
Chris

Camel: how to aggregate files based on exchange in pattern

I have a class to run my route; the input comes from a queue (which is filled by a route that does a query and inserts the rows as messages on the queue).
These messages each contain a few headers:
- pdu_id, basically a prefetch on the filename.
- pad: the path the files reside in
What is to happen: I want the files in the path named by their "pdu_id".* put into a tar; after that a REST call is to be done to remove the document's source.
I know a route has a from; but basically I need a route with a dynamic "from", and as the code example below shows, chaining froms doesn't do the trick.
The question is what to use instead; I could not find a similar thing, but it could be that I didn't use the right Google search, in which case I'm deeply sorry.
public class ToDeleteTarAndDeleteRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("broker1:todelete.message_ids.queue")
            .from("file:///?fileName=${in.header.pad}${in.header.pdu_id}.*")
            .aggregate(new TarAggregationStrategy())
                .constant(true)
                .completionFromBatchConsumer()
                .eagerCheckCompletion()
            .to("file:///?fileName=${in.header.pad}${in.header.pdu_id}.tar")
            .log("${header.pdu_id} tarred")
            .setHeader(Exchange.HTTP_METHOD, constant("DELETE"))
            .setHeader("Connection", constant("Close"))
            .enrich()
                .simple("http:127.0.0.1/restfuldb${header.pdu_id}?httpClient.authenticationPreemptive=true")
            .log("${header.pdu_id} tarred and deleted.");
    }
}
Yes, pollEnrich can help you do this. You could use it something like this:
from("broker1:todelete.message_ids.queue")
.aggregationStrategy(new TarAggregationStrategy())
.pollEnrich()
.simple("file:///?fileName=${in.header.pad}/${in.header.pdu_id}.*")
.unmarshal().string()
.to("file:///?fileName=${in.header.pad}/${in.header.pdu_id}.tar")
.log("${header.pdu_id} tarred")
.setHeader(Exchange.HTTP_METHOD, constant("DELETE"))
.setHeader("Connection", constant("Close"))
.enrich()
.simple("http:127.0.0.1/restfuldb${header.pdu_id}?httpClient.authenticationPreemptive=true")
.log("${header.pdu_id} tarred and deleted.");
Currently the solution to the problem consists of a few changes based on what @daBigBug answered:
pollEnrich's simple expression uses antInclude rather than fileName;
aggregate is put after pollEnrich, as each batch is the set of files rather than the input from the queue; the input from the queue only provides meta information based on which actions are to be taken;
aggregationStrategy() is not possible in a RouteBuilder; I used aggregate() instead;
I removed the unmarshal(); I don't see why this would be needed, since the files can contain binary content.
from("broker1:todelete.message_ids.queue")
.pollEnrich()
.simple("file:${in.header.pad}?antInclude=${in.header.pdu_id}.*")
.aggregate(new TarAggregationStrategy())
.constant(true)
.completionFromBatchConsumer()
.eagerCheckCompletion()
.log("tarring to: ${header.pad}${header.pdu_id}.tar")
.setHeader(Exchange.FILE_NAME, simple("${header.pdu_id}.tar"))
.setHeader(Exchange.FILE_PATH, simple("${header.pad}"))
.to("file://ignored")
...(and the rest of the operations);
I now see the files are getting picked up and even placed in a tar; however, the filename of the tar is unexpected as is the location (it's placed in ./ignored); Also in the rest of the operation, it appears the exchange headers are lost.
If anyone can help figure out how to preserve the headers in a safe way, I'm much obliged. Should I ask a new question for that, or should I rephrase this question?

Spring Batch FlatFileItemWriter does not write data to a file

I am new to Spring Batch. I am trying to use FlatFileItemWriter to write data into a file. The challenge is that the application is creating the file at the given path, but not writing the actual content into it.
Following are details related to code:
List<String> dataFileList : This list contains the data that I want to write to a file
FlatFileItemWriter<String> writer = new FlatFileItemWriter<>();
writer.setResource(new FileSystemResource("C:\\Desktop\\test"));
writer.open(new ExecutionContext());
writer.setLineAggregator(new PassThroughLineAggregator<>());
writer.setAppendAllowed(true);
writer.write(dataFileList);
writer.close();
This is just generating the file at proper place but contents are not getting written into the file.
Am I missing something? Help is highly appreciated.
Thanks!
This is not the proper way to use the Spring Batch writer and write data. You need to declare a bean for the writer first.
Define Job Bean
Define Step Bean
Use your Writer bean in Step
Have a look at the following examples:
https://github.com/pkainulainen/spring-batch-examples/blob/master/spring-boot/src/main/java/net/petrikainulainen/springbatch/csv/in/CsvFileToDatabaseJobConfig.java
https://spring.io/guides/gs/batch-processing/
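A hedged sketch of that setup, assuming Spring Batch 4.x (bean names, the reader and the output path are illustrative, not from the original post): let the framework open and close the writer inside a step instead of calling open()/write()/close() by hand.
@Configuration
@EnableBatchProcessing
public class FileWriteJobConfig {

    @Bean
    public FlatFileItemWriter<String> dataFileWriter() {
        return new FlatFileItemWriterBuilder<String>()
                .name("dataFileWriter")
                .resource(new FileSystemResource("C:/Desktop/test.txt"))
                .lineAggregator(new PassThroughLineAggregator<>())
                .append(true)
                .build();
    }

    @Bean
    public Step writeStep(StepBuilderFactory steps,
                          ItemReader<String> dataFileReader,      // hypothetical reader bean
                          FlatFileItemWriter<String> dataFileWriter) {
        return steps.get("writeStep")
                .<String, String>chunk(100)
                .reader(dataFileReader)
                .writer(dataFileWriter)   // the step lifecycle opens and closes the writer
                .build();
    }

    @Bean
    public Job writeJob(JobBuilderFactory jobs, Step writeStep) {
        return jobs.get("writeJob").start(writeStep).build();
    }
}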
You probably need to force a sync to disk. From the docs at https://docs.spring.io/spring-batch/trunk/apidocs/org/springframework/batch/item/file/FlatFileItemWriter.html,
setForceSync
public void setForceSync(boolean forceSync)
Flag to indicate that changes should be force-synced to disk on flush. Defaults to false, which means that even with a local disk changes could be lost if the OS crashes in between a write and a cache flush. Setting to true may result in slower performance for usage patterns involving many frequent writes.
Parameters:
forceSync - the flag value to set
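Applied to the snippet from the question, that could look roughly like this (a hedged sketch; whether a missing sync is really the cause is not confirmed here, and in any case the line aggregator should be set before open() is called):
FlatFileItemWriter<String> writer = new FlatFileItemWriter<>();
writer.setResource(new FileSystemResource("C:\\Desktop\\test"));
writer.setLineAggregator(new PassThroughLineAggregator<>());
writer.setAppendAllowed(true);
writer.setForceSync(true);           // force-sync changes to disk on flush
writer.afterPropertiesSet();         // validate the configuration
writer.open(new ExecutionContext());
writer.write(dataFileList);
writer.close();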

Hadoop Map Whole File in Java

I am trying to use Hadoop in Java with multiple input files. At the moment I have two files: a big one to process and a smaller one that serves as a sort of index.
My problem is that I need to keep the whole index file unsplit while the big file is distributed to each mapper. Is there any way provided by the Hadoop API to do such a thing?
In case I have not expressed myself correctly, here is a link to a picture that represents what I am trying to achieve: picture
Update:
Following the instructions provided by Santiago, I am now able to insert a file (or the URI, at least) from Amazon's S3 into the distributed cache like this:
job.addCacheFile(new Path("s3://myBucket/input/index.txt").toUri());
However, when the mapper tries to read it a 'file not found' exception occurs, which seems odd to me. I have checked the S3 location and everything seems to be fine. I have used other S3 locations to introduce the input and output file.
Error (note the single slash after the s3:)
FileNotFoundException: s3:/myBucket/input/index.txt (No such file or directory)
The following is the code I use to read the file from the distributed cache:
URI[] cacheFile = output.getCacheFiles();
BufferedReader br = new BufferedReader(new FileReader(cacheFile[0].toString()));
while ((line = br.readLine()) != null) {
    //Do stuff
}
I am using Amazon's EMR, S3 and the version 2.4.0 of Hadoop.
As mentioned above, add your index file to the Distributed Cache and then access it in your mapper. Behind the scenes, the Hadoop framework will ensure that the index file is sent to all the task trackers before any task is executed and will be available for your processing. In this case, the data is transferred only once and will be available for all the tasks related to your job.
However, instead of adding the index file to the Distributed Cache in your mapper code, make your driver code implement the Tool interface (run through ToolRunner) and override the run method. This provides the flexibility of passing the index file to the Distributed Cache through the command prompt while submitting the job.
If you are using ToolRunner, you can add files to the Distributed Cache directly from the command line when you run the job. There is no need to copy the file to HDFS first. Use the -files option to add files:
hadoop jar yourjarname.jar YourDriverClassName -files cachefile1,cachefile2,cachefile3,...
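A hedged sketch of such a driver (class and job names are illustrative); implementing the Tool interface and launching through ToolRunner is what makes generic options like -files available:
public class MyDriver extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        // getConf() already contains the files passed via -files
        Job job = Job.getInstance(getConf(), "whole-file-index-join");
        job.setJarByClass(MyDriver.class);
        // ... set mapper, reducer, input/output paths from args ...
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new MyDriver(), args));
    }
}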
You can access the files in your Mapper or Reducer code as below:
File f1 = new File("cachefile1");
File f2 = new File("cachefile2");
File f3 = new File("cachefile3");
You could push the index file to the distributed cache, and it will be copied to the nodes before the mapper is executed.
See this SO thread.
Here's what helped me to solve the problem.
Since I am using Amazon's EMR with S3, I needed to change the syntax a bit, as stated on the following site.
It was necessary to add the name that the system is going to use to read the file from the cache, as follows:
job.addCacheFile(new URI("s3://myBucket/input/index.txt" + "#index.txt"));
This way, the program understands that the file introduced into the cache is named just index.txt. I also needed to change the syntax for reading the file from the cache. Instead of reading the entire path stored in the distributed cache, only the filename has to be used, as follows:
URI[] cacheFile = output.getCacheFiles();
BufferedReader br = new BufferedReader(new FileReader(#the filename#));
while ((line = br.readLine()) != null) {
    //Do stuff
}
