stdio's remove() not always deleting in time - C

For a particular piece of homework, I'm implementing a basic data storage system using sequential files in standard C, under the constraint that no more than one record may be loaded in memory at a time. So the basic step is creating a new file where the results of whatever we do with the original records are stored; the previous file is renamed, and a new one is created under the working name. The code is compiled with MinGW 5.1.6 on Windows 7.
The problem is that this particular version of the code (I've got nearly-identical versions of it floating around my functions) doesn't always remove the old file, so the rename fails, and the stored data then gets wiped by the fopen().
FILE *archivo, *antiguo;
remove("IndiceNecesidades.old"); // This randomly fails to work in time.
rename("IndiceNecesidades.dat", "IndiceNecesidades.old"); // So rename() fails.
antiguo = fopen("IndiceNecesidades.old", "rb");
// But apparently it still gets deleted, since this turns out null (and I never find the .old in my working folder after the program's done).
archivo = fopen("IndiceNecesidades.dat", "wb"); // And here the data gets wiped.
Basically, any time the .old file already exists, there's a chance it isn't removed in time for the rename() to take effect successfully. There are no possible name conflicts, either internally or externally.
The weird thing is that it happens only with this particular file. Identical snippets, except with the name changed to Necesidades.dat (they appear in 3 different functions), work perfectly fine.
// I'm yet to see this snippet fail.
FILE *antiguo, *archivo;
remove("Necesidades.old");
rename("Necesidades.dat", "Necesidades.old");
antiguo = fopen("Necesidades.old", "rb");
archivo = fopen("Necesidades.dat", "wb");
Any ideas on why this would happen, and/or how I can ensure the remove() call has taken effect by the time rename() is executed? (I thought of just using a while loop to call remove() again as long as fopen() returns a non-null pointer, but that sounds like begging for a crash by flooding the OS with delete requests or something.)

So suddenly, after reading Scott's mention of permissions, I thought about "Permission Denied" and applied some Google. It turned out to be a pretty common, if obscure, error.
caf was right, it was in another piece of code. Namely, I had forgotten to fclose that same file in the function meant to show the contents. Since I wasn't tracking that particular detail, it appeared to be random.
Disclaimer: Weekly math assignments make for very little sleep time. ¬¬
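For reference, the fix boiled down to closing the file in the display function before the snippet above runs. A minimal sketch of what that looks like (the function name mostrarIndice is made up; the important line is the fclose()):
void mostrarIndice(void)
{
    FILE *archivo = fopen("IndiceNecesidades.dat", "rb");
    if (archivo == NULL)
        return;

    /* ... read and display the records one at a time ... */

    fclose(archivo);  /* this was the missing call: without it Windows keeps the
                         file locked, and the later remove()/rename() fail */
}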

That sounds quite strange, and even more so when you say that the same code works OK with a different filename - I would strongly suspect a bug elsewhere in your code. However, you should be able to work around it by renaming the file you want to remove:
rename("IndiceNecesidades.old", "IndiceNecesidades.older");
remove("IndiceNecesidades.older");
rename("IndiceNecesidades.dat", "IndiceNecesidades.old");

It would probably be a good idea to check the remove() function for errors. man remove says that the function returns 0 on success and -1 on failure, setting errno to record the error. Try replacing the call with
if (remove("IndiceNecesidades.old") != 0){
perror("remove(\"IndiceNecesidades.old\") failed");
}
which should give an error message saying what failed.
Further, it doesn't appear that the remove() is necessary:
man rename():

The rename() system call causes the link named old to be renamed as new. If new exists, it is first removed. Both old and new must be of the same type (that is, both must be either directories or non-directories) and must reside on the same file system.

The rename() system call guarantees that an instance of new will always exist, even if the system should crash in the middle of the operation.

If the final component of old is a symbolic link, the symbolic link is renamed, not the file or directory to which it points.

EPERM will be returned if:

[EPERM] The directory containing old is marked sticky, and neither the containing directory nor old are owned by the effective user ID.

[EPERM] The new file exists, the directory containing new is marked sticky, and neither the containing directory nor new are owned by the effective user ID.
So the next step would be to check that you have the right permissions on the containing directory.
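For the same reason, it's worth checking rename() itself, so a permission problem shows up as a message instead of silently letting the later fopen(..., "wb") wipe the data:
if (rename("IndiceNecesidades.dat", "IndiceNecesidades.old") != 0) {
    perror("rename(\"IndiceNecesidades.dat\") failed");
    /* bail out here instead of opening the .dat for writing and losing it */
}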

Related

CopyFileEx() fails with ERROR_SHARING_VIOLATION for file created by CreateFile()

I want to create a temporary copy of a DLL file in the temp folder, and have it deleted on application exit. Due to unrelated reasons too long to explain, I cannot simply remove the file at the end of the function/script that creates it.
I tried using CreateFile() with FILE_FLAG_DELETE_ON_CLOSE, but when I try to copy the original file to this file, I get ERROR_SHARING_VIOLATION.
Here's my code:
BOOL CopySuccess = 0;
if ((_waccess(TempFilePath, 0)) == -1) {
    printf("Temp copy \'%ls\' not found, creating copy now\n", TempFilePath);
    CreateFileW(TempFilePath, (GENERIC_READ | GENERIC_WRITE),
                (FILE_SHARE_DELETE | FILE_SHARE_READ | FILE_SHARE_WRITE),
                NULL, CREATE_NEW, FILE_FLAG_DELETE_ON_CLOSE, NULL);
    CopySuccess = CopyFileExW(OriFilePath, TempFilePath, NULL, NULL, FALSE, NULL);
    if (!CopySuccess) {
        ErrorExit(TEXT("Copy dll to temp file failed"));
    }
}
AFAIK, I used the correct flags in the CreateFile() call to enable shared access to the file.
What am I doing wrong/what is an alternative approach?
I need the logic to keep this structure. Without going into details, for reasons beyond my control this script will be run around 10 times per second, so I need a way to copy the file just once and then have it deleted when the application exits, whether due to an error, a Ctrl-C event, or a normal exit.
As a reply to the comments:
I tried writing the contents of the original file to the temp file created with CreateFile(). This didn't work because the handle returned by CreateFile() is not valid for use as a library handle (library handles are of type HMODULE). Closing the handle and then re-opening it is not a possibility either, as closing all handles to the file causes it to be deleted, as per the FILE_FLAG_DELETE_ON_CLOSE flag.
I figured the issue would be on CopyFile()'s side. I didn't think of writing my own function, so instead I tackled the problem the following way:
There's one specific variable that increases by a fixed amount in every iteration of the main script, so I wrote an if statement that checks two things:
- whether a copy of the DLL already exists, and
- whether the current value of the variable is below its 2nd-iteration value.
If both conditions are met, the already existing copies of the DLLs are deleted. Similarly, the DLLs are only copied on the 1st iteration of the main script.
This is not an actual answer to the question, but a way to work around it. I'll have a go at writing my own version of CopyFile(); if I succeed and it behaves as I intend it to, I'll post the code and an explanation as an answer here. Thanks all!
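In case it is useful to anyone attempting the same, here is a rough sketch of what a hand-rolled copy could look like: read the original through an ordinary handle and write it through the handle that was opened with FILE_FLAG_DELETE_ON_CLOSE. The function name CopyIntoHandle and the buffer size are arbitrary, error handling is minimal, and this only covers the copy itself, not the HMODULE issue mentioned above.
#include <windows.h>

static BOOL CopyIntoHandle(const wchar_t *srcPath, HANDLE hDest)
{
    HANDLE hSrc = CreateFileW(srcPath, GENERIC_READ, FILE_SHARE_READ,
                              NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (hSrc == INVALID_HANDLE_VALUE)
        return FALSE;

    BYTE buf[64 * 1024];
    DWORD got = 0, put = 0;
    BOOL ok = TRUE;

    /* Plain ReadFile/WriteFile loop: ReadFile reports 0 bytes at end of file. */
    while (ok && ReadFile(hSrc, buf, sizeof buf, &got, NULL) && got > 0)
        ok = WriteFile(hDest, buf, got, &put, NULL) && put == got;

    CloseHandle(hSrc);
    return ok;
}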

How would I know that a file has been opened, and whether it was saved after some write operation, using C code?

I have a set of configuration files (10 or more), and a user may open any of these files using any editor (e.g. vim, vi, geany, qt, leafpad, ...). How would I find out, from C code, which file has been opened and, if some writing has been done, whether it has been saved or not?
For the 1st part of your question, please refer e.g. to How to check if a file has been opened by another application in C++?
One way described there is to use a system tool like lsof and call this via a system() call.
For the 2nd part, about knowing whether a file has been modified, you will have to create a backup file to check against. Most editors already do that, but their naming schemes differ, so you might want to take care of it yourself. How? Just automatically create a (hidden) file .mylogfile.txt, if it does not exist, by simply copying mylogfile.txt. If .mylogfile.txt exists, has an older timestamp than mylogfile.txt, and differs in size and/or hash value (using e.g. md5sum), your file was modified.
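A minimal sketch of the timestamp/size part of that check, using the filenames from the example (the hash comparison is left out):
#include <sys/stat.h>

/* Returns 1 if mylogfile.txt looks modified compared to the hidden backup,
   0 if not, and -1 if either file cannot be examined. A hash check (e.g. via
   md5sum) would be an extra step on top of this. */
int file_looks_modified(void)
{
    struct stat cur, bak;

    if (stat("mylogfile.txt", &cur) != 0 || stat(".mylogfile.txt", &bak) != 0)
        return -1;

    return (cur.st_mtime > bak.st_mtime && cur.st_size != bak.st_size) ? 1 : 0;
}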
But before re-implementing this, take a look at How do I make my program watch for file modification in C++?

Failure to reliably overwrite file from VBA

I have a VBA program that should delete a file and overwrite it with a new one of the same name. A code sample is below. Everything worked fine until maybe several months ago, when the overwrite started behaving strangely despite no apparent change anywhere. At times the file was not overwritten at all. I then tried deleting the file manually before doing the write from VBA; that also didn't work, and instead it looked like the previous version from the recycle bin was being restored rather than my new version being written.
When I manually delete the file in the target folder, and also delete all previous revs from the recycle bin, the write works fine.
I'm posting this after a lot of googling and other searching; nothing I've found describes my specific problem. Any suggestions on what's happening here and how to fix it?
Excel 2013 under Windows 10. Nothing unusual about the platform, the Excel setup, or the file being written, and there's plenty of disk space available.
fname = "C:\DatFolder\Incrdat.bin"
Kill fname ' remove previous file
fnum = FreeFile
Open fname For Binary Access Write As #fnum
.... do write
Close #fnum
Edit 14 May, following many trials.
OMG, how can this be so difficult??? :-/ I've put error checks on all the file ops; none of them fire. I've also added a 1-second wait between the Kill and the file create. What happens now, every time, is that after the delete and rewrite there is a file there again, but it's the previous file with the old timestamp, i.e. somehow the file that was deleted is simply put back as it was.
When I step through the code in debug, it works fine.
Many thanks for all the suggestions, but none of them have worked. I'm giving up and redoing the whole damn thing in C++; I know that'll work.
One more thing: unfortunately, the suggestions to rewrite under a different file name, or to a different folder, can't be used.

Permission denied for rename function in C on Windows

I have an application developed in C. This application is supported across multiple platforms. One of its features transfers files, via a file transfer protocol, to a different machine or to another directory on the local machine. I want to add functionality where I transfer the file under a temporary name and, once the transfer is complete, rename it to the correct name (the actual file name).
I tried using the plain rename() function. It works fine on Unix and Linux machines, but it does not work on Windows: it gives me an error code of 13 (Permission denied).
First, I checked MSDN to see how rename() works and whether I have to grant some permissions to the file, etc.
I granted full permissions to the file (let's say it is 777).
I read in a few other posts that I should close the file descriptor before renaming the file. I did that too, and it still gives the same error.
A few other posts mentioned the owner of the file and that of the application. The application will run as a SYSTEM user. (But this should not affect the behavior, because I tried the same rename call in my application as follows:
This works fine from my application:
rename("C:/abc/aaa.txt","C:/abc/zzz.txt");
but
rename(My_path,"C:/abc/zzz.txt");
doesn't work, where My_path, when printed, displays C:/abc/test.txt.)
How can I rename the file? I need it to work on multiple platforms.
Are there any other things I should try to make it work?
I had this same problem, but the issue was slightly different. If I did the following sequence of function calls, I got "Permission Denied" when calling the rename function.
fopen
fwrite
rename
fclose
The solution was to close the file first, before doing the rename.
fopen
fwrite
fclose
rename
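In C terms, the working order looks something like this (using the paths from the question; buf and len are placeholders for whatever data is being written):
FILE *fp = fopen("C:/abc/test.txt", "wb");
if (fp != NULL) {
    fwrite(buf, 1, len, fp);   /* buf/len: the data being transferred (placeholders) */
    fclose(fp);                /* release the handle first */
    if (rename("C:/abc/test.txt", "C:/abc/zzz.txt") != 0)
        perror("rename failed");  /* only rename after the close */
}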
If
rename("C:/abc/aaa.txt","C:/abc/zzz.txt");
works but
rename(My_path,"C:/abc/zzz.txt");
does not, in the exact same spot in the program (i.e. replacing one line with another and making no changes), then there might be something wrong with the variable My_path. What is the type of this variable? If it is a char array (since this is C), is it terminated appropriately? And is it exactly equal to "C:/abc/aaa.txt"?
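A quick way to check those last two points is to print the variable and compare it byte for byte with the path it is supposed to hold (a debugging sketch only; assumes <stdio.h> and <string.h> are included):
printf("My_path = \"%s\", length = %lu\n", My_path, (unsigned long)strlen(My_path));
if (strcmp(My_path, "C:/abc/test.txt") != 0)
    printf("My_path is not exactly \"C:/abc/test.txt\"\n");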
(I wish I could post this as a comment/clarification rather than as an answer but my rep isn't good enough :( )

Get `df` to show updated information on FreeBSD

I recently ran out of disk space on a drive on a FreeBSD server. I truncated the file that was causing problems but I'm not seeing the change reflected when running df. When I run du -d0 on the partition it shows the correct value. Is there any way to force this information to be updated? What is causing the output here to be different?
In BSD a directory entry is simply one of many references to the underlying file data (called an inode). When a file is deleted with the rm(1) command, only the reference count is decreased. If the reference count is still positive (e.g. the file has other directory entries because of hard links), then the underlying file data is not removed.
Newer BSD users often don't realize that a program holding a file open is also holding a reference. This prevents the underlying file data from going away while the process is using it. When the process closes the file, if the reference count falls to zero, the file space is marked as available. This scheme is used to avoid the Microsoft Windows type of issue where the system won't let you delete a file because some unspecified program still has it open.
An easy way to observe this is to do the following
cp /bin/cat /tmp/cat-test
/tmp/cat-test &
rm /tmp/cat-test
Until the background process is terminated the file space used by /tmp/cat-test will remain allocated and unavailable as reported by df(1) but the du(1) command will not be able to account for it as it no longer has a filename.
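The same effect can be shown from C: once a file that a process still holds open is unlinked, du can no longer see it, but df keeps counting the space until the descriptor is closed. A minimal POSIX sketch (the path is arbitrary):
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/tmp/demo-file", O_CREAT | O_WRONLY, 0600);
    if (fd < 0)
        return 1;

    write(fd, "some data\n", 10);
    unlink("/tmp/demo-file");   /* directory entry gone: du no longer sees it */
    sleep(60);                  /* ...but df still counts the space meanwhile */
    close(fd);                  /* last reference dropped: the space is freed */
    return 0;
}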
Note that if the system should crash without the process closing the file, the file data will still be present but unreferenced; an fsck(8) run will be needed to recover the filesystem space.
Processes holding files open is one reason why the newsyslog(8) command sends signals to syslogd or other logging programs to inform them they should close and re-open their log files after it has rotated them.
Softupdates can also affect filesystem free space, as the actual inode space recovery can be deferred; the sync(8) command can be used to encourage this to happen sooner.
This probably centres on how you truncated the file. du and df report different things as this post on unix.com explains. Just because space is not used does not necessarily mean that it's free...
Does df --sync work?
