In my scenario I am checking for a file in a particular folder and checking whether it exists or not. If it exists, I do some operations on it.
My issue is that these files are generated by another process (which usually takes 20-30 minutes to produce a file). So at any instant when I check the folder and find the file, I need to make sure it is not still being written before I continue my operation on it.
EDIT
My question is: how can I identify that the file is not in use (or still being generated) by another process? Below is the code snippet I am using:
$Filesitems = Get-ChildItem -Path "D:\SomeFolder\Sub_Folder"
foreach ($objItem in $Filesitems)
{
    if ($objItem.Name.Contains("Process"))
    {
        Write-Host $objItem.Name
    }
}
I would probably try to use a 'lock' file to avoid this behavior.
Ensure that whenever the generating process works on a file, it also creates an (empty) file.lock file and removes it when it has finished working on the file.
In this scenario you can check whether a lock file exists, and if it does, you know that another process is still working on that particular file.
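If you also control the generating process, the producer side of that convention could look roughly like this (a minimal C sketch with made-up names; the PowerShell loop above would then simply skip any file whose .lock sibling still exists):
#include <stdio.h>

/* Hypothetical producer: create "<name>.lock" before writing the real file,
   and delete it only after the real file is completely written and closed. */
int write_with_lock(const char *data_path, const char *lock_path)
{
    FILE *lock = fopen(lock_path, "wb");   /* empty marker file */
    if (lock == NULL)
        return -1;
    fclose(lock);

    FILE *out = fopen(data_path, "wb");
    if (out == NULL) {
        remove(lock_path);
        return -1;
    }
    /* ... the 20-30 minute generation writes into 'out' here ... */
    fclose(out);

    remove(lock_path);                     /* signal: the file is ready */
    return 0;
}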
I want to create a temporary copy of a DLL file in the temp folder, and have it deleted on application exit. Due to unrelated reasons too long to explain, I cannot simply remove the file at the end of the function/script that creates it.
I tried using CreateFile() with FILE_FLAG_DELETE_ON_CLOSE, but when I try to copy the original file to this file, I get ERROR_SHARING_VIOLATION.
Here's my code:
BOOL CopySuccess = 0;
if ((_waccess(TempFilePath, 0)) == -1) {
    printf("Temp copy \'%ls\' not found, creating copy now\n", TempFilePath);
    /* The returned handle is deliberately kept open: closing every handle to
       the file would trigger FILE_FLAG_DELETE_ON_CLOSE and delete it. */
    HANDLE hTemp = CreateFileW(TempFilePath, (GENERIC_READ | GENERIC_WRITE),
                               (FILE_SHARE_DELETE | FILE_SHARE_READ | FILE_SHARE_WRITE),
                               NULL, CREATE_NEW, FILE_FLAG_DELETE_ON_CLOSE, NULL);
    CopySuccess = CopyFileExW(OriFilePath, TempFilePath, NULL, NULL, FALSE, NULL);
    if (!CopySuccess)
    {
        ErrorExit(TEXT("Copy dll to temp file failed"));
    }
}
AFAIK, I used the correct flags in the CreateFile() call to enable shared access to the file.
What am I doing wrong/what is an alternative approach?
I need the logic to maintain this structure. Without going into details, for reasons beyond my control this script will be run around 10 times per second, so I need a way to copy the file just once and then have it deleted when the application exits, whether due to an error, a Ctrl-C event, or a normal exit.
As a reply to the comments:
I tried writing the contents of the original file to the temp file created with CreateFile(). This didn't work because the handle returned by CreateFile() is not valid to use as a library handle (library handles are of type HMODULE). Closing the handle and then re-opening it is not a possibility, as closing all handles to the file causes it to be deleted, per the FILE_FLAG_DELETE_ON_CLOSE flag.
I figured the issue was on CopyFile()'s side. I didn't think of writing my own copy function, so instead I tackled the problem the following way:
There's one specific variable that increases by a fixed amount in every iteration of the main script, so I wrote an if statement that checks:
whether a copy of the DLL already exists, and
whether the current value of the variable is below its second-iteration value.
If both conditions are met, the already existing copies of the DLLs are deleted. Similarly, the DLLs are only copied in the first iteration of the main script.
This is not an actual answer to the question, but a way to circumvent it. I'll have a go at writing my own version of CopyFile(). If I succeed and it behaves as I intend, I'll post the code and an explanation as an answer here. Thanks all!
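For reference, a hand-rolled copy along those lines might look roughly like this untested sketch (it keeps the destination handle open so FILE_FLAG_DELETE_ON_CLOSE does not kick in, and reuses the share flags from the snippet above; whether LoadLibrary can map the file afterwards is an open question):
#include <windows.h>

/* Sketch only: copy 'src' into a temp file opened with FILE_FLAG_DELETE_ON_CLOSE,
   writing through the handle ourselves instead of calling CopyFileExW. The
   returned handle must stay open for as long as the temp copy is needed;
   closing it deletes the file. */
HANDLE CopyToDeleteOnCloseFile(LPCWSTR src, LPCWSTR dst)
{
    HANDLE hSrc = CreateFileW(src, GENERIC_READ, FILE_SHARE_READ, NULL,
                              OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (hSrc == INVALID_HANDLE_VALUE)
        return INVALID_HANDLE_VALUE;

    HANDLE hDst = CreateFileW(dst, GENERIC_WRITE,
                              FILE_SHARE_DELETE | FILE_SHARE_READ | FILE_SHARE_WRITE,
                              NULL, CREATE_NEW, FILE_FLAG_DELETE_ON_CLOSE, NULL);
    if (hDst == INVALID_HANDLE_VALUE) {
        CloseHandle(hSrc);
        return INVALID_HANDLE_VALUE;
    }

    BYTE buf[64 * 1024];
    DWORD nRead, nWritten;
    while (ReadFile(hSrc, buf, sizeof(buf), &nRead, NULL) && nRead > 0) {
        if (!WriteFile(hDst, buf, nRead, &nWritten, NULL) || nWritten != nRead) {
            CloseHandle(hSrc);
            CloseHandle(hDst);            /* error: let the temp file be deleted */
            return INVALID_HANDLE_VALUE;
        }
    }

    CloseHandle(hSrc);
    return hDst;   /* caller keeps this handle open until application exit */
}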
I have a situation where I submitted jobs that have been running for five days, but due to a bug I introduced, all the work could be lost. I make a system() call to compress the data file and then remove the original uncompressed file, which can be as big as 4 GB. So I have this in the C code:
strcpy(command, "data");   // I should have added a forward slash here: "data/"
sprintf(command, "%scompress -c -i %s -o %s", command, name, out_name);
system(command);
remove(name);              // This is the problem
The bug is in the sprintf line: what I wanted to do was call a program at data/compress, but because of the missing '/' the system command fails. So the data produced is not compressed, and then the original file is immediately DELETED, leaving me with nothing! If it had been compressed it would have been OK.
There are currently five running jobs in this state. I need to divert this behavior somehow so that I don't lose five days of work. I am thinking of creating a fake script named 'datacompress' in the current directory to change the behavior of the running program. Can I do this, or are there better options, if any?
You can make datacompress a symbolic link to data/compress. Oops, this won't work unless the process's $PATH includes the current directory (.).
Another option: remove the user's write permission to the directory containing name. This will cause the remove() function to fail.
If your system has Access Control Lists, remove the process's delete permission on the uncompressed file.
While you're trying to come up with a solution, you can suspend the process with:
kill -STOP <pid>
Create hard links (not symbolic links) to the data files:
ln datafile datafile.bkp
When the program removes the original datafile, the file's contents will remain under the .bkp filename.
And then fix the program to check error status of important things like the compress command.
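A minimal sketch of that fix, reusing the variable names from the question and assuming a zero status from system() means the compress command succeeded:
int status = system(command);
if (status == 0) {
    remove(name);   /* the compressed copy exists, so it is safe to delete the original */
} else {
    fprintf(stderr, "compress failed (status %d); keeping %s\n", status, name);
}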
I'm trying to create a robust recursive folder deleter function.
It works pretty well with normal directories.
The problem appears when I create a "hardcore" directory, like:
C:\test\x\x\x\x\x\x\x\x\x\x\x\x\x\x\x\x\x\x\x\x\ ... \x\x\x
The length of this path is around 25,000 characters (less than the 32,767-character limit documented on MSDN). Basically I created this directory recursively until the CreateDirectory function failed.
Now, the strangest thing is that my function is able to delete two directories, and then FindFirstFile fails with 0x5 (access denied):
\\?\C:\test\x\ ... \x\x\x\*.* < no error
\\?\C:\test\x\ ... \x\x\*.* < no error
\\?\C:\test\x\ ... \x\*.* < access denied
(I can rerun it; the app slowly chews up the folder, two directories at a time, probably until the path gets fairly short.)
I'm running FindFirstFile to check if the folder is empty.
Is there some sort of limitation that is less well documented?
Does FindFirstFile simply not work here (is it buggy)?
Am I missing some sort of NTFS permission issue?
Something else ...
EDIT:
IMPORTANT NOTE: If I run the program step by step, slowly, then nothing fails.
You are probably experiencing something like a virus scanner, indexer, or continuous-backup solution holding a handle to the directory (for example, if the Indexing Service is configured to index that folder).
Trying to delete a folder or file that another process has open without the FILE_SHARE_DELETE flag will cause ACCESS_DENIED.
To confirm this, use Process Monitor to see opens and closes on anything matching your path.
(Of course also confirm you called FindClose).
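For illustration (a hedged sketch, not the asker's actual code), an "is this directory empty?" check that always releases the search handle might look like this; the pattern string is assumed to already carry the \\?\ prefix and end in \*.*:
#include <windows.h>
#include <wchar.h>

/* Sketch: returns TRUE only if the directory matched by 'pattern' contains
   nothing but "." and "..". FindClose() is called on every path out. */
BOOL DirectoryIsEmpty(const wchar_t *pattern)
{
    WIN32_FIND_DATAW fd;
    HANDLE hFind = FindFirstFileW(pattern, &fd);
    if (hFind == INVALID_HANDLE_VALUE)
        return FALSE;                      /* treat errors as "not empty" */

    BOOL empty = TRUE;
    do {
        if (wcscmp(fd.cFileName, L".") != 0 &&
            wcscmp(fd.cFileName, L"..") != 0) {
            empty = FALSE;                 /* found a real entry */
            break;
        }
    } while (FindNextFileW(hFind, &fd));

    FindClose(hFind);                      /* always release the search handle */
    return empty;
}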
After doing tons of research and not being able to find a solution to my problem, I decided to post here on Stack Overflow.
Well, my problem is kind of unusual, so I guess that's why I wasn't able to find any answer:
I have a program that is recording stuff to a file. Then I have another one that is responsible for transferring that file. Finally I have a third one that gets the file and processes it.
My problem is:
The file transfer program needs to send the file while it's still being recorded. The problem is that reaching end of file doesn't mean the file is actually complete, since it is still being recorded.
It would be nice to have a way to check whether the recorder still has the file open or has already closed it, so I can judge whether end of file is really the end or there simply isn't any more data to read yet.
Hope you can help me out with this one. Maybe you have another idea on how to solve this problem.
Thank you in advance.
GeKod
Simply put: you can't without using filesystem notification mechanisms; Windows, Linux and OS X all have flavors of this. I forget how Windows does it off the top of my head, but Linux has 'inotify' and OS X has kqueue/FSEvents.
The easy way to handle this is to record to a tmp file, and when the recording is done, move the file into the 'ready-to-transfer' directory. If you do this so that the files are on the same filesystem, the move will be atomic and instant, ensuring that any time your transfer utility 'sees' a new file, it is wholly formed and ready to go.
Or just give your tmp files no extension, then rename the file to the extension the transfer agent is polling for once it's done.
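A minimal C sketch of that convention, with made-up paths: the recorder writes to a temporary name on the same filesystem and publishes it with rename(), which is atomic there, so the transfer program never sees a half-written file.
#include <stdio.h>

/* Called by the recorder once it has finished writing and closed tmp_path. */
int publish_recording(const char *tmp_path, const char *ready_path)
{
    if (rename(tmp_path, ready_path) != 0) {
        perror("rename");
        return -1;
    }
    return 0;
}

/* e.g. publish_recording("/data/tmp/clip.rec", "/data/ready/clip.dat"); */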
Have you considered using a stream interface between the recorder program and the one that grabs the recorded data/file? If you have access to a stream interface (say, an OS/stack service) that also provides a reliable end-of-stream signal/primitive, you could consider using that to replace the file interface.
There are no functions/libraries available in C to do this. A simple alternative is to rename the file once an activity is over. For example, the recorder can open the file with the name file.record and, once done recording, rename file.record to file.transfer. The transfer program looks for file.transfer, and once the transfer is done it renames the file to file.read; the reader reads that and finally renames it to file.done!
You can check whether the file is open or not as follows:
FILE_NAME="filename"
FILE_OPEN=$(lsof | grep "$FILE_NAME")
if [ -z "$FILE_OPEN" ]; then
    echo "File NOT open"
else
    echo "File open"
fi
Refer to http://linux.about.com/library/cmd/blcmdl8_lsof.htm for details on lsof.
I think an advisory lock will help. If one process tries to use a file that another program is working on, it will either block or get an error; if it forces access anyway, the operation succeeds but the result is unpredictable. To maintain consistency, all processes that want to access the file should obey the advisory-lock rule. I think that will work.
When the file is closed, the lock is released too, and other processes can then try to acquire it.
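For example, a POSIX-only sketch using flock(), assuming every process involved agrees to use it: the writer takes an exclusive lock while producing the file, and a reader that cannot obtain a shared lock knows the file is still in use.
#include <sys/file.h>
#include <fcntl.h>
#include <unistd.h>

/* Returns 1 if another process still holds the advisory lock, 0 if the file
   is free, and -1 if it cannot be opened at all. */
int file_is_in_use(const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;

    if (flock(fd, LOCK_SH | LOCK_NB) != 0) {   /* a writer holds LOCK_EX */
        close(fd);
        return 1;
    }

    flock(fd, LOCK_UN);                        /* got the lock: the writer is done */
    close(fd);
    return 0;
}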
For a particular piece of homework, I'm implementing a basic data storage system using sequential files in standard C, and it cannot load more than one record at a time. The basic part is creating a new file where the results of whatever we do with the original records are stored; the previous file is renamed, and a new one under the working name is created. The code is compiled with MinGW 5.1.6 on Windows 7.
The problem is that this particular version of the code (I've got nearly identical versions of it floating around my functions) doesn't always remove the old file, so the rename fails and hence the stored data gets wiped by the fopen().
FILE *archivo, *antiguo;
remove("IndiceNecesidades.old"); // This randomly fails to work in time.
rename("IndiceNecesidades.dat", "IndiceNecesidades.old"); // So rename() fails.
antiguo = fopen("IndiceNecesidades.old", "rb");
// But apparently it still gets deleted, since this turns out null (and I never find the .old in my working folder after the program's done).
archivo = fopen("IndiceNecesidades.dat", "wb"); // And here the data gets wiped.
Basically, any time the .old file already exists, there's a chance it's not removed in time for the rename() to take effect. There are no possible name conflicts, either internally or externally.
The weird thing is that it only happens with this particular file. Identical snippets, except with the name changed to Necesidades.dat (they appear in three different functions), work perfectly fine.
// I'm yet to see this snippet fail.
FILE *antiguo, *archivo;
remove("Necesidades.old");
rename("Necesidades.dat", "Necesidades.old");
antiguo = fopen("Necesidades.old", "rb");
archivo = fopen("Necesidades.dat", "wb");
Any ideas on why this would happen, and/or how I can ensure the remove() call has taken effect by the time rename() is executed? (I thought of just using a while loop to keep calling remove() as long as fopen() returns a non-null pointer, but that sounds like begging for a crash from flooding the OS with delete requests or something.)
So suddenly, after reading Scott's mention of permissions, I thought about "Permission denied" and applied some Google. It turned out to be a pretty common, if obscure, error.
caf was right, it was in another piece of code. Namely, I had forgotten to fclose that same file in the function meant to show the contents. Since I wasn't tracking that particular detail, it appeared to be random.
Disclaimer: Weekly math assignments make for very little sleep time. ¬¬
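In code terms, the fix boiled down to making sure that handle is closed before the remove()/rename() sequence runs, roughly like this (the display function's name is hypothetical):
#include <stdio.h>

/* Hypothetical function that shows the file's contents. */
void MostrarContenido(void)
{
    FILE *f = fopen("IndiceNecesidades.dat", "rb");
    if (f == NULL)
        return;
    /* ... read and print the records one at a time ... */
    fclose(f);   /* the forgotten call that made remove() fail with "Permission denied" */
}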
That sounds quite strange, and even more so when you say that the same code works OK with a different filename - I would strongly suspect a bug elsewhere in your code. However, you should be able to work around it by renaming the file you want to remove:
rename("IndiceNecesidades.old", "IndiceNecesidades.older");
remove("IndiceNecesidades.older");
rename("IndiceNecesidades.dat", "IndiceNecesidades.old");
It would probably be a good idea to check the remove() function for errors. man remove says that the function returns 0 on success and -1 on failure, setting errno to record the error. Try replacing the call with
if (remove("IndiceNecesidades.old") != 0){
perror("remove(\"IndiceNecesidades.old\") failed");
}
which should give an error message saying what failed.
Further, it doesn't appear that the remove is necessary:
man rename()
The rename() system call causes the link named old to be renamed as new. If new exists, it is first removed. Both old and new must be of the same type (that is, both must be either directories or non-directories) and must reside on the same file system.
The rename() system call guarantees that an instance of new will always exist, even if the system should crash in the middle of the operation.
If the final component of old is a symbolic link, the symbolic link is renamed, not the file or directory to which it points.
EPERM will be returned if:
[EPERM] The directory containing old is marked sticky, and neither the containing directory nor old are owned by the effective user ID.
[EPERM] The new file exists, the directory containing new is marked sticky, and neither the containing directory nor new are owned by the effective user ID.
So the next step would be to check that you have permissions on the containing directory.