Updating status of a growing file without close/reopen - c

Suppose I open() a file on a *nix machine and read until I hit EOF, but the file was still being written to when I open()ed it.
Is there a way to detect if the file has grown since open()-time? Or do I need to close() and re-open() to check?
Is there a way to read past the old eof without close()ing and reopen()ing?

Although it is not portable, on Linux you can use the inotify interface and watch for close events on your file of interest. See http://www.ibm.com/developerworks/library/l-inotify/.
Note that just because the file was closed by the other process doesn't necessarily mean its size has increased. If you care about file size changes, use stat() to check for size changes before you close it. See How do you determine the size of a file in C?
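To answer the last part of the question directly: with stdio you can keep reading past the old EOF from the same stream, without closing and reopening, by checking the size with fstat() and clearing the EOF indicator with clearerr(). The sketch below is illustrative only (the follow() helper and the one-second polling interval are made up), and it loops forever in the style of tail -f:

#include <stdio.h>
#include <unistd.h>
#include <sys/stat.h>

/* Illustrative helper: keep reading a file that another process is still
 * appending to.  fstat() tells us whether the file has grown past the
 * offset we have already consumed; clearerr() drops the stdio EOF
 * indicator so the same FILE * can be read further without close/reopen. */
static void follow(FILE *fp)
{
    char buf[4096];

    for (;;) {
        size_t n = fread(buf, 1, sizeof buf, fp);
        if (n > 0) {
            fwrite(buf, 1, n, stdout);          /* consume the new data */
            continue;
        }

        /* At EOF: has the writer appended more since we got here? */
        struct stat st;
        if (fstat(fileno(fp), &st) == 0 && st.st_size > ftell(fp)) {
            clearerr(fp);                       /* clear EOF and keep reading */
            continue;
        }

        clearerr(fp);
        sleep(1);                               /* nothing new yet; poll again */
    }
}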

Should fsync be used after each fclose?

On a Linux (Ubuntu Platform) device I use a file to save mission critical data.
From time to time (once in about 10,000 cases), the file gets corrupted for unspecified reasons.
In particular, the file is truncated (instead of some kbyte it has only about 100 bytes).
Now, the sequence in the software is:
1) the file is opened,
2) modified, and
3) closed.
Immediately after that, the file might be opened again (4), and something else is done.
Until now I had not realized that fflush (which is called upon fclose) doesn't write to the file system, but only to an intermediate buffer. Could it be that the time between 3) and 4) is too short and the change from 2) is not yet written to disc, so that when I reopen at 4) I get a truncated file which, when it is closed again, leads to permanent loss of that data?
Should I use fsync() in that case after each file write?
What do I have to consider for power outages? It is not unlikely that the data corruption is related to a power-down.
fwrite writes to an internal buffer first, and then sometimes (at fflush or fclose, or when the buffer is full) calls the OS function write.
The OS also does some buffering, and writes to the device might get delayed.
fsync ensures that the OS writes its buffers to the device.
In your open-write-close case you don't need to fsync. The OS knows which parts of the file are not yet written to the device, so if a second process wants to read the file, the OS knows that it has the file content in memory and will not read the file content from the device.
Of course, when thinking about power outages it might (depending on the circumstances) be a good idea to fsync, to be sure that the file content is written to the device (which, as Andrew points out, does not necessarily mean that the content is written to disc, because the device itself might do buffering).
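As a minimal sketch of the two buffering layers described above (the file name and message are made up for illustration): fflush() moves data from the stdio buffer to the OS, and fsync() asks the OS to push its own buffers to the device.

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    const char msg[] = "mission critical data\n";

    FILE *f = fopen("data.txt", "w");        /* stdio stream with its own buffer  */
    if (f == NULL)
        return 1;

    fwrite(msg, 1, sizeof msg - 1, f);       /* lands in the stdio buffer         */
    fflush(f);                               /* stdio buffer -> OS page cache     */
    fsync(fileno(f));                        /* OS page cache -> storage device   */
    fclose(f);                               /* would also flush the stdio buffer */
    return 0;
}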
Until now I had not realized that fflush (which is called upon fclose) doesn't write to the file system, but only to an intermediate buffer. Could it be that the time between 3) and 4) is too short and the change from 2) is not yet written to disc, so that when I reopen at 4) I get a truncated file which, when it is closed again, leads to permanent loss of that data?
No. A system that behaved that way would be unusable.
Should I use fsync() in that case after each file write?
No, that will just slow things down.
What do I have to consider for power outages? It is not unlikely that the data corruption is related to a power-down.
Use a filesystem that's resistant to such corruption. Possibly even consider using a safer modification algorithm such as writing out a new version of the file with a different name, syncing, and then renaming it on top of the existing file.
If what you're doing is something like this:
FILE *f = fopen("filename", "w");
while (...) {
    fwrite(data, n, m, f);
}
fclose(f);
Then another process can open the file while it's being written (between the open and write system calls that the C library runs behind the scenes, or between separate write calls), and it would see only a partially written file.
The workaround to that is to write the file with another name, and rename() it over the actual filename. The downside is that you need double the amount of space.
If you are sure the opening of the file happens only after the write, then that cannot happen. But then there has to be some synchronization between the writer and reader so that the latter does not start reading too early.
fsync() tells the system to write the changes to the actual storage. It is a bit of an oddball among the POSIX system calls, since I think nothing is specified about a system that crashes, and that's the only situation where it matters whether some data is on the actual storage and not in some cache. Even with fsync() it's still possible for the storage hardware to cache the data, or for an unrelated corruption to trash the file system when the system crashes.
If you're happy to let the OS do its job, and don't need to think about crashes, you can ignore fsync() completely and just let the data be written when the OS sees fit. If you do care about crashes, you have to look more closely into what guarantees the filesystem makes (or doesn't). E.g. at least at some point, the ext* developers pretty much demanded that applications also do an fsync() on the containing directory.
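Here is a sketch of the write-then-rename approach described in this answer, assuming the temporary file lives in the current directory; the function name, file names and reduced error handling are illustrative, not a definitive recipe:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

static int replace_file(const char *path, const char *tmp,
                        const char *data, size_t len)
{
    FILE *f = fopen(tmp, "w");
    if (f == NULL)
        return -1;

    int ok = fwrite(data, 1, len, f) == len
          && fflush(f) == 0                  /* stdio buffer -> OS     */
          && fsync(fileno(f)) == 0;          /* OS buffers   -> device */
    ok = (fclose(f) == 0) && ok;             /* always close the stream */

    if (!ok || rename(tmp, path) != 0) {     /* atomic replacement     */
        remove(tmp);
        return -1;
    }

    /* The extra caution mentioned above: sync the containing directory so
     * the rename itself survives a crash.  "." assumes both names are in
     * the current directory. */
    int dirfd = open(".", O_RDONLY);
    if (dirfd >= 0) {
        fsync(dirfd);
        close(dirfd);
    }
    return 0;
}

A caller might invoke it as replace_file("data.txt", "data.txt.tmp", buf, len); the downside, as noted above, is that both files briefly exist on disk at once.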

mkstemp() - is it safe to close descriptor and reopen it again?

When generating a temporary file name using mkstemp(), is it safe to immediately call close() on the file descriptor returned by mkstemp(), store the file name generated by mkstemp() somewhere and use it (at a much later time) to open the file again for writing a temporary file? Or will this temporary file name become available again as soon as I call close() on it?
The reason why I'm asking is that I'm wondering why mkstemp() returns a file descriptor at all. If it is safe to close() the descriptor immediately, why does it return a descriptor at all? mkstemp() could close it then on its own and just give me a file name.
No. In between the time when you use mkstemp() to create the file and the time when you reopen it, your adversary may have removed the file you created and put a symlink in its place pointing to somewhere else altogether. This is a TOCTOU — Time of Check, Time of Use — vulnerability which the use of mkstemp() largely avoids, provided you keep the file descriptor open.
Once you close the file descriptor, all bets are off in a sufficiently hostile environment.
Note that even if you keep the file descriptor open, an adversary might remove the file, or rename it, and then create their own file (symlink, directory) in its place. The file descriptor remains valid. You could use stat() to get the name information and fstat() to get the file descriptor information, and if the two match (st_dev and st_ino fields), then you're probably still OK. If they differ, someone has got at the file; if you rename it, you may be renaming their file rather than the one you created.
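A small sketch of that check (the function name is illustrative): compare the identity of the path with the identity of the descriptor via st_dev and st_ino.

#include <sys/stat.h>

/* Does `name` still refer to the same file object as the open descriptor? */
static int same_file(const char *name, int fd)
{
    struct stat by_name, by_fd;

    if (stat(name, &by_name) != 0 || fstat(fd, &by_fd) != 0)
        return 0;                   /* can't tell; treat as "not the same" */

    return by_name.st_dev == by_fd.st_dev &&
           by_name.st_ino == by_fd.st_ino;
}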
While the file originally created by mkstemp() still exists, the name will not be regenerated. In general, successive calls to mkstemp() will create distinct names anyway, but the name is guaranteed to be unique at the moment of creation (see the O_EXCL flag for open()).
And just in case you're wondering, no — there isn't a way to associate a name with a file descriptor (there is no hypothetical int flink(int fd, const char *name) system call). There was a question about that on one of the Stack Exchange sites a while ago, and the answer was definitely negative, with references to the Linux Kernel mailing list and so on. One such question is Is it possible to recreate a file from an opened file descriptor?, but I think there was a more thorough version of the question too.
The mkstemp function specifically uses descriptors instead of filenames to avoid race conditions that are commonly associated with its predecessors such as mktemp. In fact, the "s" in "mkstemp" means "secure", because the race condition can be a source of vulnerability (e.g. if you use the temporary file to store JIT code, someone guessing the name and stomping on the file before you open it could cause your application to load and run the provided code rather than the code that your program generates).
Once you close the descriptor, nothing prevents another application from writing a file with the same name, so please don't do that. You should retain the descriptor for as long as the temporary file is needed (and close the descriptor once the temporary file is no longer going to be used by your program).
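A minimal sketch of using mkstemp() as intended, keeping the descriptor rather than reopening by name; the template path and the fdopen() wrapping are just one possible arrangement:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    char tmpl[] = "/tmp/myprog-XXXXXX";
    int fd = mkstemp(tmpl);              /* creates the file exclusively     */
    if (fd == -1)
        return 1;

    FILE *f = fdopen(fd, "w+");          /* wrap the fd if stdio is wanted   */
    if (f == NULL) {
        close(fd);
        return 1;
    }

    fprintf(f, "scratch data\n");
    /* ... keep using f / fd for as long as the temporary file is needed ... */

    fclose(f);                           /* closes the underlying fd as well */
    unlink(tmpl);                        /* remove the name when done        */
    return 0;
}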

What is the need for closing a file?

Why do we need to close a file that we opened? I know about problems such as the file not being accessible to another process if the current process doesn't close it. But why does the OS, at the end of a process's execution, check whether any file is still open and close it if it is? There must be a reason for that.
When you close a file the buffer is flushed and everything you wrote to it is persisted to the file. If you suddenly exit your program without flushing (or closing) your FILE * stream, you will probably lose your data.
Two words: resource exhaustion. File handles, no matter the platform, are a limited resource. If a process just opens files and never closes them, it will soon run out of file handles.
A file can certainly be accessed by another process while one process has it open. Some semantics depend on the operating system: in Unix, for example, two or more processes may open a file concurrently for writing. Almost all systems allow read-only access by multiple processes.
You open a file to connect the byte stream to the process, and close it to disconnect the two. When you write into the file, the file may not get modified right away because of buffering: the memory buffer of the file is modified, but the change is not immediately reflected in the file on disk. The OS writes the changes to disk when it has accumulated enough data, for performance reasons. When you close the file, the OS flushes the changes out to the file on disk.
If you "get" a resource, it is good practice to release it when you have done.
I think it's not a good idea to trust what an OS will do when the process ends: it might free the resources, or it might not. Common OSes do: they close files, free allocated memory, and so on.
But if that is not part of the standard of the language you use (e.g. guaranteed through a garbage collector), then you shouldn't rely on that common behaviour.
Otherwise, the risk is that your application will lock or eat up resources on some systems even after it has ended.
In that sense it is just good practice. You are not strictly obliged to close files at the end.
Imagine you write a brilliant but messy function. You forget about the caveats, and later find out that this function could be used somewhere else. Then you try to use it, maybe in a loop, and find that your program crashes unexpectedly. You have to go deeper into the code and fix it. So why not make the function clean from the beginning?
Do you have something against (or are you afraid of) closing files?
Here is a good explanation: http://www.cs.bu.edu/teaching/c/file-io/intro/
When writing to an output file, the information is held in a buffer, and closing the file is a way to be sure that everything is posted to the file.

How to know if a file is being copied?

I am currently trying to check whether the copy of a file from one directory to another is done.
I would like to know if the target file is still being copied.
So I would like to get the number of file descriptors open on this file.
I am using the C language and can't really find a way to solve this problem.
If you have control of it, I would recommend using the copy-move idiom on the program doing the copying:
cp file1 otherdir/.file1.tmp
mv otherdir/.file1.tmp otherdir/file1
The mv just changes some filesystem entries and is atomic and very fast compared to the copy.
If you're able to open the file for writing, there's a good chance that the OS has finished the copy and has released its lock on it. Different operating systems may behave differently for this, however.
Another approach is to open both the source and destination files for reading and compare their sizes. If they're of identical size, the copy has very likely finished. You can use fseek() and ftell() to determine the size of a file in C:
fseek(fp, 0L, SEEK_END);
long sz = ftell(fp);
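A self-contained sketch of that size comparison (the function name and the "rb" mode are illustrative); it returns 1 only when both files can be opened and report the same length:

#include <stdio.h>

static int same_size(const char *src, const char *dst)
{
    FILE *a = fopen(src, "rb");
    FILE *b = fopen(dst, "rb");
    int equal = 0;

    if (a != NULL && b != NULL) {
        fseek(a, 0L, SEEK_END);
        fseek(b, 0L, SEEK_END);
        equal = (ftell(a) == ftell(b));
    }
    if (a != NULL) fclose(a);
    if (b != NULL) fclose(b);
    return equal;
}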
In Linux, try the lsof command, which lists all of the open files on your system.
edit 1: The only C language feature that comes to mind is the fstat function. You might be able to use that with the struct's st_mtime (last modification time) field: once that value stops changing (for, say, a period of 10 seconds), you could assume that the file copy operation has stopped.
edit 2: also, on Linux, you could traverse /proc/[pid]/fd to see which files are open. The entries in there are symlinks, and C's readlink() function can tell you the path each one points to, so you can see whether your file is among them. Using getpid(), you would know the process ID of your program (if you are doing a file copy from within your program) to know where to look in /proc.
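A Linux-only sketch of that /proc traversal (the helper name is made up; it checks the given pid's descriptors against an absolute path):

#include <dirent.h>
#include <limits.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

static int pid_has_file_open(pid_t pid, const char *path)
{
    char fddir[64];
    snprintf(fddir, sizeof fddir, "/proc/%d/fd", (int)pid);

    DIR *d = opendir(fddir);
    if (d == NULL)
        return 0;

    struct dirent *e;
    int found = 0;
    while (!found && (e = readdir(d)) != NULL) {
        char link[PATH_MAX], target[PATH_MAX];
        snprintf(link, sizeof link, "%s/%s", fddir, e->d_name);

        ssize_t n = readlink(link, target, sizeof target - 1);
        if (n > 0) {            /* "." and ".." fail here and are skipped */
            target[n] = '\0';
            found = (strcmp(target, path) == 0);
        }
    }
    closedir(d);
    return found;
}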
I think your basic mistake is trying to synchronize a C program with a shell tool/external program that's not intended for synchronization. If you have some degree of control over the program/script doing the copying, you should modify it to perform advisory locking of some sort (preferably fcntl-based) on the target file. Then your other program can simply block on acquiring the lock.
If you don't have any control over the program performing the copy, the only solutions depend on non-portable hacks like lsof or the Linux inotify API.
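If both sides can be changed, a sketch of the fcntl-based advisory locking suggested above might look like this; the helper name is made up, the writer is assumed to hold an exclusive F_WRLCK for the duration of the copy, and this reader blocks until it can take a shared lock. Advisory locks only work if everyone involved uses them.

#include <fcntl.h>
#include <unistd.h>

static int wait_until_copy_done(const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd == -1)
        return -1;

    struct flock fl = {0};
    fl.l_type   = F_RDLCK;       /* shared lock for reading       */
    fl.l_whence = SEEK_SET;
    fl.l_start  = 0;
    fl.l_len    = 0;             /* 0 means "lock the whole file" */

    /* F_SETLKW blocks while the writer holds its exclusive lock. */
    if (fcntl(fd, F_SETLKW, &fl) == -1) {
        close(fd);
        return -1;
    }
    return fd;                   /* caller reads, then closes (releasing the lock) */
}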
(This answer makes the big, big assumption that this will be running on Linux.)
The C source code of lsof, a tool that tells which programs currently have an open file descriptor to a specific file, is freely available. However, just to warn you, I couldn't make any sense out of it. There are references to reading kernel memory, so to me it's either voodoo or black magic.
That said, nothing prevents you from running lsof from your own program. Running third-party programs from your own program is normally something you try to avoid, for several reasons such as security (if a rogue user replaces lsof with a malicious program, it will run with your program's privileges, with potentially catastrophic consequences). But after inspecting the lsof source code, I came to the conclusion that there's no public API to determine which program has which file open. If you're not afraid of people changing programs in /usr/sbin, you might consider this approach.
#define _GNU_SOURCE   /* for asprintf() on glibc */
#include <stdio.h>
#include <stdlib.h>

int isOpen(const char* file)
{
    char* command;
    // BE AWARE THAT THIS WILL NOT WORK IF THE FILE NAME CONTAINS A DOUBLE QUOTE
    // OR IF IT CAN SOMEHOW BE ALTERED THROUGH SHELL EXPANSION
    // you should either try to fix it yourself, or use a function of the `exec`
    // family that won't trigger shell expansion.
    // It would be an EXTREMELY BAD idea to call `lsof` without an absolute path
    // since it could result in another program being run. If this is not where
    // `lsof` resides on your system, change it to the appropriate absolute path.
    asprintf(&command, "/usr/sbin/lsof \"%s\"", file);
    int result = system(command);
    free(command);
    // lsof exits with 0 when it found at least one open instance of the file,
    // so a result of 0 here means "open".
    return result;
}
If you also need to know which program has your file open (presumably cp?), you can use popen to read the output of lsof in a similar fashion. popen descriptors behave like fopen descriptors, so all you need to do is fread them and see if you can find your program's name. On my machine, lsof output looks like this:
$ lsof document.pdf
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
SomeApp 873 felix txt REG 14,3 303260 5165763 document.pdf
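A sketch of that popen() variant (the lsof path and the program name to look for are assumptions; the quoting caveats from isOpen() above apply here as well):

#include <stdio.h>
#include <string.h>

static int file_open_by(const char *file, const char *progname)
{
    char command[1024];
    snprintf(command, sizeof command, "/usr/sbin/lsof \"%s\"", file);

    FILE *p = popen(command, "r");
    if (p == NULL)
        return 0;

    char line[1024];
    int found = 0;
    while (fgets(line, sizeof line, p) != NULL) {
        /* The COMMAND column is at the start of each output line. */
        if (strncmp(line, progname, strlen(progname)) == 0)
            found = 1;
    }
    pclose(p);
    return found;
}

For example, file_open_by("document.pdf", "cp") would report whether a cp process still has the file open.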
As poundifdef mentioned, the fstat() function can give you the current modification time. But fstat also gives you the size of the file.
Back in the dim dark ages of C, when I was monitoring files being copied by various programs I had no control over, I always:
1. Waited until the target file size was >= the source size, and
2. Waited until the target modification time was at least N seconds older than the current time, N being a number such as 5, set larger if experience showed that was necessary. Yes, 5 seconds seems extreme, but it is safe.
If you don't know what the source file is, then the only real choice you have is #2, but use a larger N to allow for the worst-case network and local CPU delays, with a healthy safety factor.
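A sketch of that polling heuristic (the function and parameter names are made up; `quiet` is the N seconds of mtime stability suggested above):

#include <sys/stat.h>
#include <time.h>
#include <unistd.h>

/* Block until the target is at least as large as the source and its
 * modification time is more than `quiet` seconds in the past. */
static void wait_for_copy(const char *src, const char *dst, time_t quiet)
{
    struct stat s, d;

    for (;;) {
        if (stat(src, &s) == 0 && stat(dst, &d) == 0 &&
            d.st_size >= s.st_size &&
            time(NULL) - d.st_mtime > quiet)
            return;
        sleep(1);
    }
}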
Using the Boost libraries can also solve the issue:
boost::filesystem::fstream fileStream(filePath, std::ios_base::in | std::ios_base::binary);
if (fileStream.is_open()) {
    // not being copied (we were able to open it)
} else {
    // wait, the file is still being copied
}

Is it ‘safe’ to remove() open file?

I am thinking about adding the possibility of using the same filename for both the input and output file to my program, so that it will replace the input file.
As the processed file may be quite large, I think the best solution would be to first open the file, then remove it and create a new one, i.e. like this:
/* input == output in this case */
FILE *inf = fopen(input, "r");
remove(output);
FILE *outf = fopen(output, "w");
(of course, with error handling added)
I am aware that not all systems are going to allow me to remove an open file, and that's acceptable as long as remove() fails in that case.
What worries me is whether there is any system that will allow me to remove the open file and then fail when reading its contents.
The C99 standard specifies the behavior in that case as ‘implementation-defined’; SUS doesn't even mention the case.
What is your opinion/experience? Do I have to worry? Should I avoid such solutions?
EDIT: Please note this isn't supposed to be a mainline feature but rather a ‘last resort’ in case the user specifies the same filename as both the input and output file.
EDIT: OK, one more question then: is it possible that, in this particular case, my proposed solution can do more harm than simply opening the output file write-only (i.e. as above but without the remove() call)?
No, it's not safe. It may work on your file system, but fail on others. Or it may intermittently fail. It really depends on your operating system AND file system. For an in depth look at Solaris, see this article on file rotation.
Take a look at GNU sed's '--in-place' option. This option works by writing the output to a temporary file, and then copying over the original. This is the only safe, compatible method.
You should also consider that your program could fail at any time, due to a power outage or the process being killed. If this occurs, your original file will be lost. Additionally, for file systems which do have reference counting, you're not saving any space over the temp-file solution, as both files have to exist on disk until the input file is closed.
If the files are huge, and space is at a premium, and developer time is cheap, you may be able to open a single file for read/write, and ensure that your write pointer does not advance beyond your read pointer.
All systems that I'm aware of that let you remove open files implement some form of reference-counting for file nodes. So, removing a file removes the directory entry, but the file node itself still has one reference from open file handle. In such an implementation, removing a file obviously won't affect the ability to keep reading it, and I find it hard to imagine any other reasonable way to implement this behavior.
I've always got this to work on Linux/Unix. Never on Windows, OS/2, or (shudder) DOS. Any other platforms you are concerned about?
This behaviour is actually useful for temporary disk space: open the file for read/write and immediately delete it. It gets cleaned up automatically on program exit (for any reason, including power outage), and it makes it much harder (but not impossible) for others to monitor it (/proc can give clues, if you have read access to that process).
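A small sketch of that scratch-file trick under POSIX semantics (the template path and the payload are illustrative):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    const char msg[] = "temporary working data\n";
    char tmpl[] = "/tmp/scratch-XXXXXX";

    int fd = mkstemp(tmpl);
    if (fd == -1)
        return 1;

    /* Remove the directory entry while keeping the descriptor: the file has
     * no name any more, but stays alive until the last descriptor is closed,
     * and the space is reclaimed automatically when the process exits. */
    unlink(tmpl);

    if (write(fd, msg, sizeof msg - 1) < 0) {
        close(fd);
        return 1;
    }
    lseek(fd, 0, SEEK_SET);
    /* ... read it back, use it as scratch space ... */

    close(fd);                   /* last reference gone: the file disappears */
    return 0;
}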

Resources