In my app I'd like to open a temp file with FILE_FLAG_DELETE_ON_CLOSE. However, there are some cases where the temp file needs to be kept, and it is quite large.
Is it possible to remove the FILE_FLAG_DELETE_ON_CLOSE attribute from an already-opened handle? Copying the contents of the file or renaming it isn't quite what I want; I'd like to remove the attribute itself. This is due to how I'm transacting some writes: in my app, closing the handle would open me up to a race condition.
You could do it the other way around.
Open the file first, specifying FILE_SHARE_DELETE.
Then, when you do want the file deleted, open it again with FILE_FLAG_DELETE_ON_CLOSE and close both handles. Or, if you want to keep it, just close the original handle and it won't be deleted.
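A minimal sketch of that order of operations, assuming Win32 (the path and the keep/delete decision are made up for illustration; not tested here):

```c
/* Sketch: keep a long-lived handle opened with FILE_SHARE_DELETE; only if
 * the file should die, open a second handle with FILE_FLAG_DELETE_ON_CLOSE
 * and close both. Path is hypothetical. */
#include <windows.h>

int main(void) {
    HANDLE h = CreateFileA("C:\\temp\\big.tmp",
                           GENERIC_READ | GENERIC_WRITE,
                           FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                           NULL, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) return 1;

    /* ... write the temp data through h ... */

    BOOL keep_file = FALSE;  /* decided at runtime */
    if (!keep_file) {
        /* Second open marks the file delete-on-close; it needs DELETE
         * access, and the first open must have granted FILE_SHARE_DELETE. */
        HANDLE del = CreateFileA("C:\\temp\\big.tmp", DELETE,
                                 FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                                 NULL, OPEN_EXISTING, FILE_FLAG_DELETE_ON_CLOSE, NULL);
        if (del != INVALID_HANDLE_VALUE) CloseHandle(del);
    }
    CloseHandle(h);  /* file disappears only if the second open happened */
    return 0;
}
```

Because the delete-on-close flag lives on the second handle, skipping that second open keeps the file, which is exactly the "do it the other way around" idea.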
No, it's not possible once you've done the create. A dangerous idea might start with the statement, "That flag is only relevant to the handle the link was opened on." So creating a new link might "fix" it. I haven't thought about that more than five seconds, but it's the only slightly clever thing coming to me at midnight.
I have a VBA program that should delete a file and rewrite it under the same name. A code sample is below. Everything worked fine until maybe several months ago, when the overwrite started behaving strangely despite no apparent change anywhere. At times the file was not overwritten at all. I then tried deleting the file manually before doing the write from VBA; that also didn't work. Instead, it looked like the previous version from the recycle bin was being restored rather than my new version being written.
When I manually delete the file in the target folder, and also delete all previous revs from the recycle bin, the write works fine.
Posting this following a lot of googling and other searching, however nothing describes my specific problem. Any suggestions on what's happening here / how to fix?
Excel 2013 under Win10. Nothing unusual about the platform, the excel setup or the file being written, plenty of disk space available.
fname = "C:\DatFolder\Incrdat.bin"
Kill fname ' remove previous file
fnum = FreeFile
Open fname For Binary Access Write As #fnum
' ... do write
Close #fnum
Edit 14 May, following many trials.
OMG, how can this be so difficult? :-/ I've put error checks on all the file ops; none occur. I've also added a 1-second wait between the Kill and the file create. What happens now every time is that following the delete and rewrite, there is a file there again, but it's the previous file with the old timestamp. I.e., somehow the file that was deleted is just put back as it was previously.
When I step through the code in debug, it works fine.
Many thanks for all the suggestions, but none of them have worked. I'm giving up and redoing the whole damn thing in C++, I know that'll work.
One more thing: unfortunately, the suggestions to rewrite with a different file name or to a different folder can't be used.
Because of my slightly obsessive personality, I've been losing most of my productive time to a single little problem.
I recently switched from Mac OS X Tiger to Yosemite (yes, it's a fairly large leap). I didn't think AppleScript had changed that much, but I encountered a problem I don't remember having in the old days. I had the following code, but with a valid filepath:
set my_filepath to (* replace with string of POSIX filepath, because typing
colons was too much work *)
set my_file to open for access POSIX file my_filepath with write permission
The rest of the code had an error which I resolved fairly easily, but because the error stopped the script before the close access command, AppleScript of course left the file reference open. So when I tried to run the script again, I was informed of a syntax error: the file is already open. This was to be expected.
I ran into a problem trying to close the reference: no matter what I did, I received an error message stating that the file wasn't open. I tried close access POSIX file (* filepath string again *), close access file (* whatever that AppleScript filepath format is called *), et cetera. Eventually I solved the problem by restarting my computer, but that's not exactly an elegant solution. If no other solution presents itself, then so be it; however, for intellectual and practical reasons, I am not satisfied with rebooting to close access. Does anyone have insights regarding this issue?
I suspect I've overlooked something glaringly obvious.
Edit: Wait, no, my switch wasn't directly from Tiger; I had an intermediate stage in Snow Leopard, but I didn't do much scripting then. I have no idea if this is relevant.
Agreed that restarting is probably the easiest solution. One other idea, though, is the Unix utility lsof, which gets a list of all open files. It returns a rather large list, so you can combine it with grep to filter. So next time, try this from the Terminal and see if you get a result:
lsof +fg | grep -i 'filename'
If you get a result, it will include a process ID (PID), and you could potentially kill/quit the process which is holding the file open, and thus close the file. I've never tried it for this exact situation, but it might work.
Have you ever had the Trash refuse to empty because it says a file is open? That's when I use this approach and it works most of the time. I actually made an application called What's Keeping Me (found here) to help people with this one problem and it uses this code as the basis for the app. Maybe it will work in this situation too.
Good luck.
When I've had this problem, it's generally sufficient to quit Script Editor and reopen it; a full restart of the machine is likely excessive. If you're running this from the Script Menu rather than Script Editor, you might try turning off the Script Menu (from Script Editor) and turning it back on again. The point is that files are held by processes, and if you quit the process it should release any lingering file pointers.
I've gotten into the habit, when I use open for access, of using try blocks to catch file errors. e.g.:
set filepath to "/some/posix/path"
try
	set fp to open for access filepath
on error errstr number errnom
	try
		close access filepath
		set fp to open for access filepath
	on error errstr number errnom
		display dialog errnom & ": " & errstr
	end try
end try
This will try to open the file, try to close it and reopen it if it encounters an error, and report the error if it runs into more problems.
An alternative (and what I usually do) is to comment out the open for access line and just add a close access my_file line to fix it.
After doing tons of research and not being able to find a solution to my problem, I decided to post here on Stack Overflow.
Well, my problem is kind of unusual, so I guess that's why I wasn't able to find any answer:
I have a program that is recording stuff to a file. Then I have another one that is responsible for transferring that file. Finally I have a third one that gets the file and processes it.
My problem is:
The file transfer program needs to send the file while it's still being recorded. The problem is that when the file transfer program reaches end of file, that doesn't mean the file is actually complete, since it is still being recorded.
It would be nice to have some way to check whether the recorder still has the file open or has already closed it, so I could judge whether the end of file is a real end of file or there simply isn't further data to read yet.
Hope you can help me out with this one. Maybe you have another idea on how to solve this problem.
Thank you in advance.
GeKod
Simply put: you can't, without using filesystem notification mechanisms; Windows, Linux, and OS X all have flavors of this. Windows has ReadDirectoryChangesW, Linux has inotify, and OS X has FSEvents (and kqueue).
The easy way to handle this is to record to a temp file, and when the recording is done, move the file into the ready-to-transfer directory. If you do this so that both locations are on the same filesystem, the move will be atomic and instant, ensuring that any time your transfer utility sees a new file, it is wholly formed and ready to go.
Or, just have your tmp files have no extension, then when it's done rename the file to an extension that the transfer agent is polling for.
Have you considered using a stream interface between the recorder program and the one that grabs the recorded data/file? If you have access to a stream interface (say, an OS/stack service) which also provides a reliable end-of-stream signal/primitive, you could use that to replace the file interface.
There are no standard C functions/libraries available to do this. But a simple alternative is to rename the file once an activity is over. For example, the recorder can open the file with the name file.record and, once done recording, rename it to file.transfer. The transfer program looks for file.transfer, and once the transfer is done, renames it to file.read; the reader reads that and finally renames it to file.done!
You can check whether a file is open as follows:
FILE_NAME="filename"
FILE_OPEN=$(lsof | grep "$FILE_NAME")
if [ -z "$FILE_OPEN" ]; then
    echo "File NOT open"
else
    echo "File open"
fi
refer http://linux.about.com/library/cmd/blcmdl8_lsof.htm
I think an advisory lock will help, since a process trying to use a file that another process is working on will block or get an error. If you access the file by force anyway, the action succeeds but the result is unpredictable. In order to maintain consistency, all processes that want to access the file must obey the advisory locking protocol; with that, I think it will work.
When the file is closed, the lock is released too, and other processes can then try to acquire it.
I want to append data to a file in /tmp.
If the file doesn't exist I want to create it
I don't care if someone else owns the file. The data is not secret.
I do not want someone to be able to race-condition this into writing somewhere else, or to another file.
What is the best way to do this?
Here's my thought:
fd = open("/tmp/some-benchmark-data.txt", O_APPEND | O_CREAT | O_NOFOLLOW | O_WRONLY, 0644);
fstat(fd, &st);
if (st.st_nlink != 1) {
    /* HARD LINK ATTACK! */
}
Problem with this: Someone can link the file to some short-lived file of mine, so that /tmp/some-benchmark-data.txt is the same as /tmp/tmpfileXXXXXX which another script of mine is using (and opened properly using O_EXCL and all that). My benchmark data is then appended to this /tmp/tmpfileXXXXXX file, while it's still being used.
If my other script happened to open its tempfile, then delete it, then use it; then the contents of that file would be corrupted by my benchmark data. This other script would then have to delete its file between the open() and the fstat() of the above code.
So in other words:
The three parties ("this script", Dr.Evil, and "my other script or program") interleave like this:
1. My other script: open(fn2, O_EXCL | O_CREAT | O_RDWR)
2. Dr.Evil: link(fn1, fn2)
3. This script: open(fn1, ...)
4. My other script: unlink(fn2)
5. This script: fstat(..) => nlink is 1
6. This script: write(...)
7. This script: close(...)
8. My other script: write(...)
9. My other script: seek(0, ...)
10. My other script: read(...) => (maybe) WRONG DATA!
And therefore the above solution does not work. There are quite possibly other attacks.
What's the right way? Besides not using a world-writable directory.
Edit:
In order to protect against the evil user creating the file with his/her ownership and permissions, or just wrong permissions (by hard-linking your file and then removing the original, or by hard-linking a short-lived file of yours), I can check the ownership and permission bits after the nlink check.
There would be no security issue then, and it would also prevent surprises. The worst case is that the beginning of the file contains some of my own data, copied from another file of mine.
Edit 2:
I think it's almost impossible to protect against someone hard-linking the name to a file that's opened, deleted, and then used. Examples of this are EXE packers, which sometimes even execute the deleted file via /proc/pid/fd-num. Racing with this would cause execution of the packed program to fail. lsof could probably tell whether someone else has the inode open, but it seems to be more trouble than it's worth.
Whatever you do, you'll generally get a race condition where someone else creates a link, then removes it by the time your fstat() system call executes.
You've not said exactly what you're trying to prevent. There are certainly kernel patches which prevent making (hard or symbolic) links to files you don't own in world-writable directories (or sticky directories).
Putting it in a non world-writable directory seems to be the right thing to do.
SELinux, which seems to be the standard enhanced security linux, may be able to configure policy to forbid users to do bad things which break your app.
In general, if you're running as root, don't create files in /tmp. Another possibility is to use setfsuid() to set your filesystem uid to someone else; then, if the file isn't writable by that user, the operation will simply fail.
Short of what you just illustrated, the only other thing I've tried ended up almost equally racy and more expensive: establishing inotify watches on /tmp prior to creating the file, which allows catching the event of a hard link in some instances.
However, it's still really racy and inefficient, as you would also need to complete a breadth-first search of /tmp, at least down to the level where you want to create the file.
There is (to my knowledge) no sure way to avoid this kind of race other than not using world-writable directories. What are the consequences of someone intercepting your I/O via hard link: would they get anything useful, or just make your application exhibit undefined behavior?
In what situations does Windows allow you to overwrite an open file? Is that ever allowed? This includes renaming a different file to the same name as an open file.
If you look at the documentation for CreateFile(), there is this dwShareMode parameter. This can determine what another process can do with that file while it's open.
Specifying FILE_SHARE_READ lets another process open the file for reading. There's FILE_SHARE_WRITE, which means that another process can write to it. There's also FILE_SHARE_DELETE, which allows delete and (IIRC) also rename.
If someone opened the file without FILE_SHARE_WRITE and you open the file for write access, you'll get ERROR_SHARING_VIOLATION. Otherwise you should be able to write to it.
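To make the sharing rules concrete, here is a hypothetical Win32 sketch (path made up; not tested here). The first open allows only readers, so a second open for write access fails with ERROR_SHARING_VIOLATION, while an open for read access succeeds:

```c
#include <windows.h>
#include <stdio.h>

int main(void) {
    /* First opener: writes the file, but shares it for reading only. */
    HANDLE a = CreateFileA("C:\\temp\\shared.txt", GENERIC_WRITE,
                           FILE_SHARE_READ, NULL, CREATE_ALWAYS,
                           FILE_ATTRIBUTE_NORMAL, NULL);
    if (a == INVALID_HANDLE_VALUE) return 1;

    /* Second opener asks for write access: denied, because the first
     * open did not include FILE_SHARE_WRITE. */
    HANDLE b = CreateFileA("C:\\temp\\shared.txt", GENERIC_WRITE,
                           FILE_SHARE_READ, NULL, OPEN_EXISTING,
                           FILE_ATTRIBUTE_NORMAL, NULL);
    if (b == INVALID_HANDLE_VALUE && GetLastError() == ERROR_SHARING_VIOLATION)
        printf("sharing violation, as expected\n");

    /* Read access succeeds: FILE_SHARE_READ was granted, and this open's
     * own share mode tolerates the existing writer. */
    HANDLE c = CreateFileA("C:\\temp\\shared.txt", GENERIC_READ,
                           FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                           OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (c != INVALID_HANDLE_VALUE) CloseHandle(c);
    CloseHandle(a);
    return 0;
}
```

Note that sharing checks run both ways: a new open must both be permitted by the existing handles' share modes and itself grant sharing compatible with the access those handles already hold.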
Unfortunately, if one process comes along and doesn't set sharing flags to allow something you need, you're pretty much out of luck. Although you might be able to use MoveFileEx() with the MOVEFILE_DELAY_UNTIL_REBOOT option. But I'm not sure if that works; I don't know much about that call except that it exists. :-)