Foreach Loop Container with Foreach File Enumerator option iterates all files twice - sql-server

I am using the SSIS Foreach Loop Container to iterate through files with a certain pattern on a network share.
I am encountering a hard-to-reproduce malfunction of the Loop Container:
Sometimes the loop is executed twice. After all files were processed it starts over with the first file.
Has anyone encountered a similar bug?
Maybe not directly in SSIS, but when accessing files on a Windows share with some other technology?
Could this error be related to network issues?
Thanks.

I found this to be the case whilst working with Excel files and using the *.xlsx wildcard to drive the foreach.
Once I put logging in place, I noticed that when the Excel file was opened, Excel produced a lock file prefixed with ~$, which was then picked up by the foreach loop.
So I used a trick similar to http://geekswithblogs.net/Compudicted/archive/2012/01/11/the-ssis-expression-wayndashskipping-an-unwanted-file.aspx to exclude files with a ~$ in the filename.
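The same exclusion can also be done in a Script Task placed first inside the loop. A minimal sketch, assuming the enumerator maps the current path to a variable named User::FileName and a boolean User::SkipFile drives a precedence constraint (both variable names are hypothetical):

public void Main()
{
    // Current file path, as mapped by the Foreach File Enumerator.
    string path = (string)Dts.Variables["User::FileName"].Value;
    string name = System.IO.Path.GetFileName(path);

    // Excel lock files are prefixed with "~$"; flag them to be skipped.
    Dts.Variables["User::SkipFile"].Value = name.StartsWith("~$");

    Dts.TaskResult = (int)ScriptResults.Success;
}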

What error message (SSIS log / Eventvwr messages) do you get?
Similar to @Siva, I've not come across this, but here are some ideas you could use to try to diagnose it. You may be doing some of these already; I've just written them down for completeness from my own thought process...
Log all files processed: write a line to a log file/table before processing each file and another after it (see the sketch after this list), keeping the full path of each file. This is actually something we do as standard in our ETL implementations, as users often come back to us with questions about when/what was loaded. It will let you see whether files really are being processed twice.
Perhaps try moving each file to a different directory after it is processed. That makes it harder for a file to be processed a second time, and the problem may disappear. (If you are processing them from a "master" area and so can't move them, consider copying the files to a "waiting" folder, then processing them and moving them to a "processed" folder.)
@Siva's comment is interesting - look at the "traverse subfolders" check box.
Check your Event Viewer for odd network events, or application events (SQL Server restarting?).
Use Performance Monitor to see if anything odd is happening in terms of network load on your server (a bit of a random idea!).
Try running your whole process with files on a local disk instead of a network disk. If your mean time between failures is about ten runs, you could run the load locally 20-30 times; if you don't get an error, it may well be a network problem.
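Regarding the logging suggestion above, a minimal sketch of a pre/post logging helper callable from a Script Task (the log path is an assumption; an insert into a log table works just as well):

using System;
using System.IO;

static class EtlFileLog
{
    // Assumed location; any shared file or a database table will do.
    const string LogPath = @"\\server\logs\etl_file_log.txt";

    // One line per event: a duplicated loop shows up as a repeated PRE
    // entry for the same full path.
    public static void Write(string stage, string fullPath)
    {
        File.AppendAllText(LogPath, string.Format("{0:O}\t{1}\t{2}{3}",
            DateTime.UtcNow, stage, fullPath, Environment.NewLine));
    }
}

Call EtlFileLog.Write("PRE", path) before each file is processed and EtlFileLog.Write("POST", path) afterwards; a run that iterates twice will show two PRE entries per path.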

Nothing helped, so I implemented the following workaround: a Script Task in the Foreach iterator which tracks all files. If a file was already loaded, a warning is fired and the file is not processed again. Anyway, it seems to be some network-related problem...
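A minimal sketch of that workaround as a Script Task (the variable names are assumptions; the set of processed paths lives in an Object-typed package variable so it survives across loop iterations):

using System;
using System.Collections.Generic;

public void Main()
{
    string path = (string)Dts.Variables["User::FileName"].Value;

    // Lazily create the set of seen paths on the first iteration.
    var seen = Dts.Variables["User::ProcessedFiles"].Value as HashSet<string>;
    if (seen == null)
    {
        seen = new HashSet<string>(StringComparer.OrdinalIgnoreCase);
        Dts.Variables["User::ProcessedFiles"].Value = seen;
    }

    // Add returns false if the path was already processed: warn and skip.
    bool isNew = seen.Add(path);
    if (!isNew)
        Dts.Events.FireWarning(0, "Foreach guard",
            "File already loaded, skipping: " + path, string.Empty, 0);
    Dts.Variables["User::SkipFile"].Value = !isNew;

    Dts.TaskResult = (int)ScriptResults.Success;
}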

Related

Moving file to another directory in ABAP

I have a service that runs in 5-second intervals against a specific directory, picking up an XML file created in that directory, sending it to another client for some necessary authorization checks, and then requesting a response file.
My issue is that my Z_PROGRAM creating the XML file might take longer than 5 seconds because of the file's size, so creating the file directly in that directory is not preferable. I thought about creating a folder in that directory called "temporary", creating the file inside that folder, and then, once I'm done with it, moving it back outside for the service to pick up.
Is there any way to move files from one directory to another via ABAP code only?
Copying the file manually is not an option, since the problem I have during file creation would still persist. I need two alternatives: one for local directories and one for application server directories. Any ideas?
Generally, we create a second, empty file after the file creation process ends. Third parties must first check that this empty file is there before touching the data file. Example:
data file.csv
data file.ok
If you have already completed your integration and it is not easy to change anything with the third parties, I prefer using OS-level file move commands (sample document here). You can use mv on a Linux server and move on Windows. If your file is big, you will hit the same problem with the OPEN DATASET approach. There is also the ARCHIVFILE_SERVER_TO_SERVER function module for moving files, but it uses OPEN DATASET as well.
There is no explicit move command in ABAP that moves or copies files between directories on the application server.
Two tips can be helpful in your case. If you are writing a big file, separate the logic that collects the data from the logic that writes the file: don't execute TRANSFER inside your data-collection loop. Instead, collect your data into an internal table first; once you're done, loop over this internal table and write the strings directly, without any delay between writes. That way you should be able to write a big file of up to several hundred MB in under a second.
The next tip, if you don't want to modify your program (or you use function modules to construct the XML), is to write to a temporary directory first. After finishing, have another program open the file in the source directory with READ DATASET and write the data directly to the new directory - again, just strings, without interruptions.
You should be OK as long as you just write strings.
You can simply use a system call command to perform actions in the application server's directories:
* Move the file at OS level (mv on Unix hosts; use "move" on Windows)
CALL 'SYSTEM'
  ID 'COMMAND'
  FIELD 'mv /usr/sap/temporary/File.xml /usr/sap/final/file.xml'.

Apache Camel GenericFileOperationFailedException: 'Cannot rename file' locks exchange

We have an integration system based on Camel v2.16.1 that runs on a JBoss v6 Linux platform. There are multiple interfaces running simultaneously, each with a different polling rate.
We are intermittently experiencing a 'Cannot rename file' issue, with Camel failing to back up to the 'done' folder files that were successfully processed and transmitted from the FTP source. Restarting the Camel application fixes the issue.
Basically, at regular intervals triggered by a quartz scheduler, the route:
picks up files from a source via FTP,
processes them (Smooks + XSL transformations), and
delivers the generated flat file to an endpoint via FTP.
If multiple files are read from the source directory, then all the files are appended together in a temporary file before being processed.
The Camel FTP configuration uses the following URL:
ftp://xxxx/export?antInclude=dsciord_*.dat&inProgressRepository=#warehouseIntegrationIdempotentRepository&preMove=in_progress_bpo/$simple{date:now:yyyyMMddHHmm}/$simple{file:name}&move=done&consumer.bridgeErrorHandler=true
read files dsciord_*.dat from the /export directory
use a custom inProgressRepository to store the read filenames in a local DB (this was done to prevent a contention issue with a second cluster node; however, currently only a single node is live, so this option is unnecessary and could be removed to speed up the process)
move files to an in_progress_bpo/201609061522 directory, where the subdirectory is created based on the date_timestamp.
move them to the in_progress_bpo/201609061522/done subdirectory once successfully processed.
In the vast majority of cases the route works with no issues. Sometimes, however, the file(s) cannot be moved to the done folder (see the error below). Even then, the route can sometimes continue successfully at the next polling cycle; in other cases, though, the route enters a state where, even when the Quartz scheduler triggers the poll, it fails to detect any files in the source /export directory even when there ARE files there.
org.apache.camel.component.file.GenericFileOperationFailedException: Cannot rename file: RemoteFile[in_progress_bpo/201609060502/dsciord_3605752.dat] to: RemoteFile[in_progress_bpo/201609060502/done/dsciord_3605752.dat]
Notes: We are using
a single instance of a ConsumerTemplate to handle our interfaces.
a custom inprogressRepository to store the file names read.
Obviously, something must be locking the source files, and this is causing the Camel route to stop processing further files.
Any ideas/suggestions on debugging/resolving this issue would be greatly appreciated. The issues I read through on the camel-users forum seem to deal with Windows-related deployments, or with Smooks failing to close the input stream. I've checked, and we don't use the org.milyn.templating.xslt.XslTemplateProcessor#bypass method where Smooks fails to close the underlying input stream.
Finally, I have been able to reproduce and identify the issue.
We are using a relative path for the directory into which processed files are moved once they have been successfully FTP-ed to the destination servers:
../../../u/4gl_upload/warehouse_integration_2/trs-server/export/in_progress_bpo/201609081030/done
However, for some reason, instead of traversing the correct path to move the processed files, the Camel consumer creates a new subdirectory tree starting from the current working directory, which can get quite long, as follows. Hence the problem: it doesn't know where it is, and it doesn't reset itself.
/u/4gl_upload/warehouse_integration_2/trs-server/u/4gl_upload/warehouse_integration_2/trs-server/export/in_progress_bpo/201609081030
This was reproduced with the option stepwise=false, which makes the consumer traverse to the target directory in a single step instead of stepwise.
I still don't know what the best solution is.
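Given that the behaviour was reproduced with stepwise=false, one experiment (my assumption, not a confirmed fix) would be to switch back to stepwise traversal, so the consumer changes directory level by level instead of addressing the move target in one long path:

ftp://xxxx/export?antInclude=dsciord_*.dat&stepwise=true&preMove=in_progress_bpo/$simple{date:now:yyyyMMddHHmm}/$simple{file:name}&move=done&consumer.bridgeErrorHandler=true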

Trying to find information on how to build a simple file version control system

I want to build a file system for non-techies (they don't care about old versions of the files, so no merging or svn/git). The thought is that a user should be able to download a file, and in the same instant the file should be locked for other users. When the first user is done editing it, the file should automatically upload to the server. When he closes the file, the lock should then be released.
Is this even possible? I'm thinking of some sort of browser plugin, but I can't find anyone who has done the same thing. (Besides Microsoft, but who wants to go down that road?)
That would be: SharePoint, Alfresco, (almost every wiki), ...
Actually that is a basic feature of most document management systems. Even SVN has that already and IIRC you can set that up with mod_dav_svn without a line of code (considering configuration is not code).
Also, the interesting question is, IMHO, not the happy case where the described unit of work goes well, but what about this*:
1. I check out the 50 random documents you need
2. (get some popcorn and wait for your stress level to go up)
3. ?????
4. I get bored and forget about it (everything still being checked out)
*: points (1) and (2) may change order

Why doesn't checking out in TFS 2010 give me write permissions? It causes an exception in my project at the target of invocation

I just moved my code from subversion to TFS.
When I get latest version, I understand that I can't get write permissions.
However, when I check out and choose the option to take the exclusive lock, a check mark appears next to my files and I am able to edit them.
When I look in Windows explorer, however, some files are still marked "read only."
This becomes a problem when I try to run my application. For some reason, not having write permissions to everything gives me an "exception at the target of invocation" message (it's a WPF project).
When I run the files out of version control, everything is fine. When I run the version under TFS, I get that exception--even when I've exclusively checked the files out.
Any idea what is going on here?
Sounds like quite a bit of confusion here, so I have a number of questions:
Did you specifically check out the files that are still marked "read only"? Or did you just check out other files which may be related to the ones marked "read only"?
Did you use the Source Control window or the Solution Explorer when performing the checkout command? Did you select specific files or just the top-level project file?
Are the files actually part of the project? Or are they simply in the same folder but still under source control?
What exact error message are you getting?
Which files are the problem? In other words, have you checked in the compiled binaries from the BIN or OBJ folders?
TFS terminology is a little different than SVN.
"Get" represents an update in SVN terms.
"Checkin" commits your changes and both "Checkin" and "Checkout" are responsible for managing file locks.
"Checkin" releases the locks.
"Checkout" requests the locks.
There are three types of locking you can use, or none at all. I would opt for None, as it's likely to cause the fewest issues, many of which can be resolved during a merge and with a little bit of communication.
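For reference, the same lock levels can also be applied from the command line with the tf utility (the server path below is a placeholder):

tf lock /lock:none $/MyProject/src/App.config
tf lock /lock:checkout $/MyProject/src/App.config
tf lock /lock:checkin $/MyProject/src/App.config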

Sharing file locks

I am currently working on a file processing service that looks at a fileshare, where files are uploaded to via FTP.
For scalability, I've been asked to make this service load-balanceable, so the service has to expect that other services on different machines may also be trying to process these files.
OK, so I thought I should be able to achieve this by obtaining an exclusive lock for my process before processing a file, and skipping any files that may already be locked by another process.
The crux of this approach is shown below (I've left out the error handling for simplicity):
using (FileStream fs = File.Open(myFile, FileMode.Open, FileAccess.ReadWrite, FileShare.Read | FileShare.Delete))
{
    // Do work while holding the handle: others may read or delete the
    // file, but no other process can open it for writing.
}
Q1: My process now has a lock on this file. I thought this would mean I could then access the same file (without using the stream) and still have the correct access to it, but based on testing it seems I only have the benefits of the lock through the stream. Is this correct?
(For example, before I included FileShare.Delete, File.Delete(myFile) failed)
The above lock ultimately uses the 'Write' permission to determine which service has the file, but is intended to still allow other processes to read it. This matters because the process holding the lock attempts to verify that the file is a valid zip file, using a third-party library (Xceed.Zip). However, this fails, saying the file "is being used by another process". Using Reflector, I ultimately found the problematic call to be:
stream = this.m_info.Open(FileMode.Open, FileAccess.Read, FileShare.Read);
Now I would have expected this to work, as it only wants to read the file, but it fails. The reason appears to be outlined in a similar question: the third-party open specifies FileShare.Read, which refuses to coexist with any already-open handle that has write access, and my own handle holds FileAccess.ReadWrite, so the open is rejected. However, as this is a third-party API, I can't change their code to use FileShare.ReadWrite.
Q2: Is there a way I can correctly lock the file so it will not be picked up by the other services, but it can still be verified as a zip file using the external API?
I feel like there should be a 'correct' way to do this, but at the moment the best I can come up with is to lock the file, move it away from the shared directory, and then verify it at the new location, as sketched below.
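A sketch of that lock-move-verify approach, assuming a per-node 'processing' folder and a hypothetical IsValidZip helper wrapping the Xceed call:

using System.IO;

// The move itself is the claim: it fails with an IOException if another
// service still has the file open or has already moved it.
string claimed = Path.Combine(@"\\server\share\processing", // assumed folder
    Path.GetFileName(myFile));
try
{
    File.Move(myFile, claimed);
}
catch (IOException)
{
    return; // another instance got there first; skip this file
}

// No other instance can see the file any more, so the third-party
// read-only open no longer conflicts with any writer handle.
bool ok = IsValidZip(claimed); // hypothetical wrapper around Xceed.Zip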
If you're planning to reactively handle this situation by handling UnauthorizedAccessException I think you're making a serious mistake.
This can be handled by proactively renaming files. For example, you can configure your service to only read files whose names match the format 'Filename.YYYYMMDD.txt'. Prior to processing a file, you rename it to 'Filename.YYYYMMDD.processing'; then, after processing, you rename it to 'Filename.YYYYMMDD.done'.
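A minimal sketch of that rename protocol, assumed to run inside each service's polling loop (inbox and ProcessFile are placeholders):

using System.IO;

foreach (string file in Directory.GetFiles(inbox, "*.txt"))
{
    // The rename is an atomic claim: exactly one service wins the race.
    string processing = Path.ChangeExtension(file, ".processing");
    try
    {
        File.Move(file, processing);
    }
    catch (IOException)
    {
        continue; // another instance claimed this file first
    }

    ProcessFile(processing); // placeholder for the real work
    File.Move(processing, Path.ChangeExtension(file, ".done"));
}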
You can even take it a step further by making another service that enqueues the filenames. This service would be a FileSystemWatcher that listens for file-creation events; once it receives one, it queues the filename on a global message queue. Each of your services then just dequeues filenames and no longer has to worry about concurrent access, as sketched below.
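A sketch of the watcher half, with an in-process queue standing in for the global message queue (the path and ProcessFile are placeholders):

using System.Collections.Concurrent;
using System.IO;

var queue = new BlockingCollection<string>();

var watcher = new FileSystemWatcher(@"\\server\share\incoming", "*.txt");
watcher.Created += (s, e) => queue.Add(e.FullPath); // enqueue on creation
watcher.EnableRaisingEvents = true;

// Workers simply dequeue filenames, so no two of them ever touch the
// same file concurrently.
foreach (string path in queue.GetConsumingEnumerable())
    ProcessFile(path); // placeholder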
HTH
