I am working on a requirement that involves downloading large files through the camel-ftp component.
The route definition is as below:
from("sftp://host:22?connectTimeout=30000&username=xxx&password=yyyy&localWorkDirectory=D:/templocation")
.to("file:///D:/mylocation");
I am looking for answers to the questions below.
1. Does Camel SFTP support resume functionality in case of a server disconnect? I have observed that the .inprogress file gets deleted once a SocketTimeout/IOException is thrown from the underlying JSch library. My expectation is that Camel should re-establish the connection once it is available and resume downloading from the point where it left off.
2. Parameters such as connectTimeout, timeout and soTimeout have no effect. On the Windows platform (Win 7), if the server stays disconnected for approximately 21 seconds, Camel deletes the .inprogress file. Is there any other parameter in the Camel FTP component that has to be set to control the consumer timeout? This becomes an issue if the file size is very large (1 GB or more) and the server gets disconnected when more than 90% has been downloaded.
Any help in this regard will be highly appreciated.
#ClausIbsen:
Thank you so much for your answer. I would really appreciate your feedback on point 2.
I went through the Camel FTP component source code and found that SftpOperations.retrieveFileToFileInLocalWorkDirectory is the method where retrieving data via the JSch library is implemented.
The code is written such that any exception received from the underlying library, i.e. from channel.get(remoteName, os);, causes the .inprogress file to be deleted. I investigated the JSch library, which offers a resume option:
get(String src, OutputStream dst, SftpProgressMonitor monitor, int mode, long skip)
Downloads a file to an OutputStream.
I incorporated this API into the retrieveFileToFileInLocalWorkDirectory method by checking whether a .inprogress file already exists and, if so, its file size.
if (fileSize > 0) {
    // a partial .inprogress file exists: resume from its current length
    channel.get(remoteName, os, progressMonitor, ChannelSftp.RESUME, fileSize);
} else {
    // no partial file yet: plain download
    channel.get(remoteName, os, progressMonitor);
}
A ProgressMonitor implementation helps me track whether the download is complete:
fileSize = temp.length();
boolean isFileDownloadComplete = (fileSize == progressMonitor.getMax());
if (isFileDownloadComplete) {
    // rename and move the file
}
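For reference, here is a minimal sketch of such a progress monitor (the class and field names are only illustrative), assuming JSch's SftpProgressMonitor interface:

import com.jcraft.jsch.SftpProgressMonitor;

// Illustrative monitor: records the total size reported by the server and the
// bytes counted so far, so the route can check whether the transfer finished.
public class ResumeAwareProgressMonitor implements SftpProgressMonitor {

    private long max;         // total bytes the server reports for this GET
    private long transferred; // bytes counted so far

    @Override
    public void init(int op, String src, String dest, long max) {
        this.max = max;
        this.transferred = 0;
    }

    @Override
    public boolean count(long count) {
        transferred += count;
        return true; // keep transferring
    }

    @Override
    public void end() {
        // no-op; completion is checked against the local file length instead
    }

    public long getMax() {
        return max;
    }
}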
With the above implementation, and with the original file-deletion behaviour commented out, download resume is working. I am able to resume the file download even after a server disconnect.
I have one question here:
Do you foresee any implementation flaw in the above solution?
Is there any functionality that would be impacted which I have missed?
I would really appreciate your feedback.
Fuse/Camel newbie here. I'm trying to automate a manual process where .done files are downloaded from an FTP host, then renamed "fileout.txt", and finally an AS/400 program is executed on that file.
However, the department hosting the AS/400 program doesn't have the resources to update their programming. The solution I'm working toward is to have Camel download one file at a time, save it as "fileout.txt", then execute a JT400 program to process it. Individually those steps work, but I'm left with one problem.
What I pray you, dear reader, can help me with is: how can I stop Camel after downloading just one file (since overwriting, appending, or downloading multiple files won't work for the following step)?
How can I stop Camel after downloading just one file?
You can set the following parameters on the FTP consumer:
maxMessagesPerPoll=1 (limits the number of messages downloaded in a single batch)
delay=500000 (increases the interval between polls, so you have time to stop the route)
Then your FTP route can send an asynchronous message (for example via the wireTap component) to another route, which uses the controlBus component to stop the FTP route by its route id, as sketched below.
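A rough sketch of that wiring (the route id "ftpRoute", the host, and the endpoint "direct:stopFtpRoute" are placeholder names; the snippet belongs inside a RouteBuilder's configure()):

from("ftp://host/inbox?username=xxx&password=yyy&maxMessagesPerPoll=1&delay=500000")
    .routeId("ftpRoute")
    .setHeader(Exchange.FILE_NAME, constant("fileout.txt"))
    .to("file://target/outbox")
    // fire-and-forget, so stopping the route does not block this exchange
    .wireTap("direct:stopFtpRoute");

from("direct:stopFtpRoute")
    // ControlBus stops the FTP route by its id; async=true avoids stopping
    // the route from within its own consumer thread
    .to("controlbus:route?routeId=ftpRoute&action=stop&async=true");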
I'm trying to automate a manual process where .done files are downloaded from an FTP host, then renamed fileout.txt, and finally an AS/400 program is executed on that file
Other than stopping/starting your route, you may try the pollEnrich component with an FTP endpoint. Using pollEnrich, you can trigger the FTP consumer once, on demand, if you already know the target file name.
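For example (the timer trigger, host, and file names below are only placeholders), assuming the target file name is known up front:

from("timer:pollOnce?repeatCount=1")
    // pull exactly one named file from the FTP server on demand;
    // the second argument is the poll timeout in milliseconds
    .pollEnrich("ftp://host/inbox?username=xxx&password=yyy&fileName=source.done", 30000)
    .setHeader(Exchange.FILE_NAME, constant("fileout.txt"))
    .to("file://target/outbox");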
We're experiencing a strange problem.
We have a file component monitoring a folder. This works perfectly if the path is either
a) myrelativepath - which is relative to the Karaf installation where the camel route is run; or
b) /tst/mypath - which reads from a folder under the root
If I set log level to DEBUG I see the logs of it polling based on my interval.
However, if I set the path to:
/mnt/windowsshare - which is a mounted Windows share -
I get nothing in the logs, I don't see the poll, and it doesn't pick up any files. Apparently the route is started, though.
Interestingly, I have another Camel route which writes a file to that location (a subfolder called inbound), and it writes files with no problem.
Any ideas?
I can get perhaps more logs tomorrow, but this is only happening in this environment where we have a windows share. And the share seems to be fine.
For testing we have run Camel as root, and as root on the command line we have tested reading the files (via vi); all is OK.
Any suggestions for things to look at?
Basically, make sure you don't have too many files matched by the antExclude pattern... polling is logarithmic, and a fraction more makes polling very slow.
It would need more code analysis and VM introspection to understand why.
We have an integration system based on Camel v2.16.1 that runs on a Jboss v6 Linux platform. There are multiple interfaces running simultaneously each with a different polling rate.
We are intermittently experiencing a 'Cannot rename file' issue, with Camel failing to back up successfully processed and transmitted files from the FTP source to the 'done' folder. Restarting the Camel application fixes the issue.
Basically, at regular intervals triggered by a quartz scheduler, the route:
picks up files from a source via FTP,
processes them (Smooks + XSL transformations),
delivers the generated flat file to an endpoint via FTP.
If multiple files are read from the source directory, then all the files are appended together in a temporary file before being processed.
The Camel FTP configuration uses the following URL:
ftp://xxxx/export?antInclude=dsciord_*.dat&inProgressRepository=#warehouseIntegrationIdempotentRepository&preMove=in_progress_bpo/$simple{date:now:yyyyMMddHHmm}/$simple{file:name}&move=done&consumer.bridgeErrorHandler=true
read files dsciord_*.dat from /export directory
use a custom inProgressRepository to store the read file name in a local db (this was done to prevent a contention issue with a second cluster node; however, currently only a single node is live, so this option is unnecessary and could be removed to speed up the process).
move files to an in_progress_bpo/201609061522 directory, where the subdirectory is created based on the date_timestamp.
move them to the in_progress_bpo/201609061522/done subdirectory once successfully processed.
In the vast majority of cases the route works with no issues; however, sometimes the file(s) cannot be moved to the done folder (see the error below). Even in this case, the route can sometimes continue successfully at the next polling cycle; in other cases, though, the route enters a state where, even though the quartz scheduler triggers the poll, the route fails to detect any files in the source /export directory even when there ARE files there.
org.apache.camel.component.file.GenericFileOperationFailedException: Cannot rename file: RemoteFile[in_progress_bpo/201609060502/dsciord_3605752.dat] to: RemoteFile[in_progress_bpo/201609060502/done/dsciord_3605752.dat]
Notes: We are using
a single instance of a ConsumerTemplate to handle our interfaces.
a custom inProgressRepository to store the file names read.
Obviously, there must be something locking the source files, and this is causing the Camel route to stop processing further files.
Any ideas/suggestions on debugging/resolving this issue would be greatly appreciated. The issues I have read about on the camel-users forum seem to deal with Windows-related deployments, or sometimes with Smooks failing to close the input stream. I've checked, and we don't use the
org.milyn.templating.xslt.XslTemplateProcessor#bypass method where Smooks fails to close the underlying input stream.
Finally I have been able to reproduce/identify the issue.
We are using a relative path to move the processed files into once they have been successfully FTP-ed to the destination servers:
../../../u/4gl_upload/warehouse_integration_2/trs-server/export/in_progress_bpo/201609081030/done
However, for some reason, instead of traversing the correct path to move the processed files, the Camel consumer creates a new subdirectory tree starting from the current working directory, which can get quite long, as shown below. Hence the problem: it doesn't know where it is and it doesn't reset itself.
/u/4gl_upload/warehouse_integration_2/trs-server/u/4gl_upload/warehouse_integration_2/trs-server/export/in_progress_bpo/201609081030
This was reproduced with the option stepwise=false, which means the consumer traverses the subdirectories in a single step instead of stepwise (one level at a time).
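For illustration, the behaviour is controlled by a single option on the consumer endpoint (most other options omitted here; whether forcing stepwise back to its default of true avoids the problem is still open):

ftp://xxxx/export?antInclude=dsciord_*.dat&preMove=in_progress_bpo/$simple{date:now:yyyyMMddHHmm}/$simple{file:name}&move=done&stepwise=false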
I still don't know what the best solution is.
I am using Camel for my file operations. My system is a clustered environment.
Let's say I have 4 instances:
Instance A
Instance B
Instance C
Instance D
Folder structure:
Input Folder: C:/app/input
Output Folder: C:/app/output
All four instances point to the input folder location. As per my business requirement, 8 files will be placed in the input folder and the output will be a single consolidated file. Here Camel is losing data when writing concurrently to the output file.
Route:
from("file://C:/app/input")
.setHeader(Exchange.File_Name,simple("output.txt"))
.to("file://C:/app/output?fileExist=Append")
.end();
Kindly help me to resolve this issue. Is there anything like a write lock in Camel to avoid concurrent file writers? Thanks in advance.
You can use the doneFile option of the file component, see http://camel.apache.org/file2.html for more information.
Avoid reading files currently being written by another application
Beware that the JDK File IO API is a bit limited in detecting whether another application is currently writing/copying a file, and the behaviour can also differ between OS platforms. This can lead to Camel thinking the file is not locked by another process and starting to consume it. Therefore you have to do your own investigation of what suits your environment. To help with this, Camel provides different readLock options and the doneFileName option that you can use. See also the section on consuming files from folders where others drop files directly.
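As a hedged sketch of how those options could be wired into the route from the question (the readLock settings and the done-file name are assumptions, not a verified fix for concurrent appends from multiple instances):

// Consumer side: readLock=changed waits until an input file stops growing
// before it is picked up.
// Producer side: after appending, write output.done so that whatever consumes
// C:/app/output knows the consolidated file has been written.
from("file://C:/app/input?readLock=changed&readLockCheckInterval=1000")
    .setHeader(Exchange.FILE_NAME, simple("output.txt"))
    .to("file://C:/app/output?fileExist=Append&doneFileName=output.done");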
I am currently working on a file processing service that looks at a fileshare, where files are uploaded to via FTP.
For scalability I've been asked to make this service to be able to be load balanced, so the service has to expect that other services on different machines may also be trying to process these files.
OK, so I thought I should be able to achieve this by obtaining an exclusive lock for my process before processing a file, and skipping any files that may already be locked by another process.
The crux of this approach is shown below (I've left out the error handling for simplicity):
using (FileStream fs = File.Open(myFile, FileMode.Open, FileAccess.ReadWrite, FileShare.Read | FileShare.Delete))
{
    // Do work
}
Q1: My process now has a lock on this file. I thought this would mean I could then access the same file (without using the stream) and still have the correct access to it, but based on testing it seems I only have the benefits of the lock through the stream. Is this correct?
(For example, before I included FileShare.Delete, File.Delete(myFile) failed)
The above lock ultimately uses the 'Write' permission to determine which service has the file, but is intended to allow other processes to still read the file. This is because the process that has the lock attempts to verify whether the file is a valid zip file, which uses a third-party library (Xceed.Zip). However, this fails, saying the file "is being used by another process". Using Reflector, I ultimately found the problematic call is:
stream = this.m_info.Open(FileMode.Open, FileAccess.Read, FileShare.Read);
Now I would have expected this to work, as it only wants to read the file, but it fails. The reason appears to be outlined in a similar question. However, as this is a third-party API, I can't change their code to use ReadWrite.
Q2: Is there a way I can correctly lock the file so it will not be picked up by the other services, but it can still be verified as a zip file using the external API?
I feel like there should be a 'correct' way to do this, but at the moment the best I can come up with is to lock the file, move it away from the shared directory, and then verify it at the new location.
If you're planning to reactively handle this situation by handling UnauthorizedAccessException I think you're making a serious mistake.
This can be handled by proactively renaming files. For example you can configure your service to only read files whose name is in the format 'Filename.YYYYMMDD.txt'. Prior to processing the file, you can rename it to 'Filename.YYYYMMDD.processing'. Then after processing the file you rename it to 'Filename.YYYYMMDD.done'.
You can even take it a step further by creating another service that enqueues the filenames. This service would be a FileSystemWatcher that listens for file-creation events. Once it receives such an event, it queues the filename to a global message queue. Then each of your services will just dequeue filenames and no longer have to worry about concurrent access.
HTH