Apache Camel File Append Not Working in Windows

I have a simple route where I write a string to an output file and then try to append the contents of the original file. But Camel ignores the append option and overwrites the file instead.
from("file://inputFolder")
.routeId("InputFolderToTestSedaRoute")
.setProperty("myFileConsumedBody", simple("${body}"))
.setBody(constant("FIRST LINE!"))
.to("file://{{outputFolder}}")
.setBody(simple("${exchangeProperty.myFileConsumedBody}"))
.log("*** STEP 100: ${headers} :***")
.delay(10000)
.to("file://outputFolder?fileExist=Append")
;
I added the delay to observe what happens.
If there is an input file named myFile.txt, Camel picks that file up as expected.
It stores the consumed body in a custom exchange property, as shown in the code.
It creates a file named myFile.txt in the output folder, writes the content "FIRST LINE!" to it, and waits for the delay to expire.
I can open the file and verify the contents; everything looks good.
Once the delay expires, Camel overwrites myFile.txt with the original content it picked up from the input folder (even though I have asked Camel to append).
Am I making a mistake here? I am not sure if this is specific to Windows 10. I am using Camel version 2.24.1. Thanks for your time.

This is bug CAMEL-14127, fixed in version 2.24.3. You can upgrade, or use a workaround with the charset option:
.to("file://outputFolder?fileExist=Append&charset=utf-8")

Related

Using camel-smb, SMB picks up (large) files while they are still being written to

While trying to set up cyclic moving of files, I encountered strange behavior with readLock. Create a large file (a few hundred MB) and transfer it using SMB from the out folder to the in folder.
FROM:
smb2://smbuser:****#localhost:4455/user/out?antInclude=FILENAME*&consumer.bridgeErrorHandler=true&delay=10000&inProgressRepository=%23inProgressRepository&readLock=changed&readLockMinLength=1&readLockCheckInterval=1000&readLockTimeout=5000&streamDownload=true&username=smbuser&delete=true
TO:
smb2://smbuser:****#localhost:4455/user/in?username=smbuser
Create another flow to move the file back from the in folder to the out folder. After a few transfers, the file is picked up while still being written to by the other route, and the transfer completes with a much smaller file, resulting in a partial file at the destination.
FROM:
smb2://smbuser:****#localhost:4455/user/in?antInclude=FILENAME*&delete=true&readLock=changed&readLockMinLength=1&readLockCheckInterval=1000&readLockTimeout=5000&streamDownload=false&delay=10000
TO:
smb2://smbuser:****#localhost:4455/user/out
The question is: why is my readLock not working properly? (P.S. streamDownload is required.)
UPDATE: it turns out this only happens on a Windows Samba share, and only with streamDownload=true. So it is something to do with stream chunking. Any advice is welcome.
The solution is to prevent the polling strategy from automatically picking up a file, and to make it aware of the read lock (the in-progress transfer) on the other side. So I lowered the delay to 5 seconds and, in the FROM part on both sides, added readLockMinAge=5s, which inspects the file's modification time.
Since the stream touches the file roughly every second, this is enough to keep the consumer from acquiring the file while it is still being written.
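As a sketch, the out-folder consumer endpoint from the question would then look something like this (same placeholder host and filter as above, with the lowered delay and the added readLockMinAge option):

smb2://smbuser:****#localhost:4455/user/out?antInclude=FILENAME*&delay=5000&readLock=changed&readLockMinLength=1&readLockCheckInterval=1000&readLockTimeout=5000&readLockMinAge=5s&streamDownload=true&username=smbuser&delete=true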
An explanation of why the situation described above happens:
While one route prepares to pick up from the out folder, a large file (1 GB) is still being transferred chunk by chunk to the in folder. At the end of the streaming, the file is marked for removal by camel-smbj and receives the status STATUS_DELETE_PENDING.
Now another part of the process starts to send a newly arrived file to the out folder and finds that a file with that name already exists. Because of the default fileExist=Override strategy, it tries to delete the existing file (which is still not deleted from the previous step) before storing the new one, and receives an exception that causes some InputStream chunks to be lost.

Camel File Consumer - leave file after processing but accept files with the same name

So this is the situation:
I have a workflow that waits for files in a folder, processes them, and then sends them to another system.
For various reasons we use an ActiveMQ broker between the "sub-processes" in the workflow, where each route alters the message in some way before it is sent in the last step. Each sub-process reads and writes only to/from ActiveMQ, except the first and last routes.
It is also part of the workflow that there is a route after sending the message that takes care of the initial file, moving or deleting it. Only this route knows what to do with the file.
This means the file has to stay in the folder after the consumer route has finished, because only the metadata is written to ActiveMQ and the actual workflow is not done yet.
I got this to work using the noop=true parameter on the file consumer.
The problem with this is that after the "after sending" route deletes (or moves) the file, the file consumer will not react to new files with the same name until I restart the route.
It is clear that this is the expected and correct behavior, because the point of the noop parameter is to ignore a file that was consumed before, but this doesn't help me.
The question is now how I get the file consumer to process a file only once while it is present in the folder, but "forget" about it as soon as some other process (in this case a different route) removes the file.
As an alternative I could let the file component move the file into a temp folder, from where it gets processed later, leaving the consuming folder empty. But this introduces new problems that I'd like to avoid (e.g. a file with the same name arriving while the first one is not yet processed completely).
I'd love to hear some ideas on how to handle that case.
Greets Chris
You need to tell Camel not to use only the filename for its idempotency check.
In a similar situation, where I wanted to pick up changes to a file that was otherwise no-oped, I used the option
idempotentKey=${file:name}-${file:modified}
in my URL, which ensures that if you change the file, or a new file is created, Camel treats it as a different file and processes it.
Do be careful to check how many files you might be processing, because the idempotent cache is limited by default (to 1000 records, I think). If you were processing more than 1000 files at a time, it might "forget" it has already processed file 1 when file 1001 arrives, and try to reprocess file 1 again.
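A minimal sketch of such a consumer in the Java DSL (the folder and queue names are hypothetical placeholders):

// noop=true leaves the file in place; the composite idempotent key
// makes Camel re-process a file whenever its modification time changes
from("file://inputFolder?noop=true&idempotentKey=${file:name}-${file:modified}")
    .log("Picked up ${file:name}")
    .to("activemq:queue:workflow.start");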

Where does pure-ftpd call its uploadscript?

I've been looking through the pure-ftpd 1.0.42 source code:
https://github.com/jedisct1/pure-ftpd
Trying to find when it triggers:
https://github.com/jedisct1/pure-ftpd/blob/master/src/pure-uploadscript.c
i.e. when does it run the uploadscript after a file has been uploaded?
If you look in src/ftp_parser.c, the dostor method is how a file starts its upload journey. Then it goes to ul_send, then ul_handle_data, but I get lost at that point. I never see where it says: okay, this file is now uploaded, time to call the uploadscript. Can someone show me the line?
In the pureftpd_start() function in src/ftpd.c, pure-ftpd starts up and parses all of its command-line options. It also opens a pipe to the pure-uploadscript process, if one is configured. Rather than invoking the upload script on each upload (and incurring fork() and exec() overhead per upload), pure-ftpd keeps the upload script process running separately and sends it the uploaded file's path via that pipe.
Knowing this, we then look for where that pipe is written to, which happens in the upload_pipe_push() function. Interestingly, that function is called by the displayrate() function, which is in turn called by both dostor() and doretr() in src/ftpd.c.
Hope this helps!

Camel reads the file from an FTP endpoint before the complete file is copied to the location

Hi, I have a very simple route which reads a file from an FTP location. When I deploy it into ServiceMix (JBoss Fuse), it reads the files as expected.
When I have a large file, it reads the file before the sender has finished copying it to the location.
How can I tackle this?
If the problem is that you read the file before the sender has finished writing it, you need to use the readLock parameter with the rename value. That's the only value for that parameter that works over FTP.
If the problem is that someone else reads the file before you have finished sending it, you need to use the tempPrefix parameter. This prefixes the filename while the content is still being copied (so that consumers ignore it at that stage) and only renames it to the final filename once the file is completely transferred.
The FTP component is an extension of the File component. You'll find more information about the tempPrefix parameter here: http://camel.apache.org/file2.html
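A minimal sketch of both options in the Java DSL (host, credentials, and folder names are hypothetical placeholders):

// consumer side: readLock=rename tests exclusive access by renaming the
// file before pickup, which fails while the sender is still writing it
from("ftp://user@ftphost/inbox?password=secret&readLock=rename")
    .to("file://processed");

// producer side: tempPrefix uploads under a temporary name and renames
// to the final filename only after the transfer is complete
from("file://outbox")
    .to("ftp://user@ftphost/upload?password=secret&tempPrefix=.inprogress-");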

Java ME: How to clear a file

How do I clear a file in J2ME so that it becomes empty (no content)?
All output streams (OutputStream, DataOutputStream, PrintStream, ...) can only write() and append content to the file; I see no way to delete bytes from a file.
I am using NetBeans 7.0.1.
Thanks for any help.
Call your write() method like this:
.write("".getBytes());
This will make your file empty.
You can also delete the file and create a new one. This also results in an empty file with the same file name.
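If the device supports the JSR-75 FileConnection API, truncate() is another option; a minimal sketch, assuming a hypothetical file URL (IOException handling omitted):

import javax.microedition.io.Connector;
import javax.microedition.io.file.FileConnection;

// open the file and discard all of its content
FileConnection fc = (FileConnection) Connector.open(
        "file:///root1/myfile.txt", Connector.READ_WRITE);
try {
    fc.truncate(0); // cut the file down to zero bytes
} finally {
    fc.close();
}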
