Any reason to reopen as "write-append" after "read-only"?

I have a save file containing a stream of program events. The program may read the file and execute the events to restore a previous state (say, between program invocations). After that, any new events are appended to this file.
I could open the file once in a read-write mode (e.g. fopen with "a+"), not exposing the usage pattern.
But I wonder if there are any benefits to opening it as read-only at first (fopen with "r") and later re-opening it for appending (freopen with "a"). Would there be any apparent difference?

In your case there may not be any specific benefit. The primary use of freopen is to change the file associated with a standard text stream (stdin, stdout, stderr), so using it on ordinary files may affect the readability of your code. In your case you first open in read-only mode, but if you later reopen the stream for output, there are a few things about freopen to keep in mind.
On Linux, freopen may also fail and set errno to EBUSY when the kernel structure for the old file descriptor was not completely initialized before freopen was called.
freopen should not be used on output streams because it ignores errors while closing the old file descriptor.
Read about freopen and possible error conditions with fclose in GNU manual: https://www.gnu.org/software/libc/manual/html_node/Opening-Streams.html#Opening-Streams
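As a small illustration of that primary use of freopen mentioned above, here is a self-contained sketch that rebinds a standard stream to a file; the filename "log.txt" is an assumed placeholder, not from the question.

#include <stdio.h>

int main(void) {
    /* Rebind stdout to a file; if freopen fails, the old stream is gone too. */
    if (freopen("log.txt", "w", stdout) == NULL)
        return 1;
    puts("this line goes to log.txt instead of the terminal");
    return 0;
}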

No, there are no specific benefits to opening the file as read-only and then reopening it in append mode. If you need to modify the file during program execution, it is better to open it once in the appropriate mode.
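To make the "open it once in the appropriate mode" suggestion concrete, here is a minimal sketch of the read-then-append pattern from the question; the filename "events.log" and replay_event() are placeholders, not from the original post.

#include <stdio.h>

static void replay_event(const char *line) {
    (void)line;   /* apply the event to program state (placeholder) */
}

int main(void) {
    FILE *fp = fopen("events.log", "a+");    /* one open: read back, then append */
    if (fp == NULL) { perror("fopen"); return 1; }

    /* With "a+", the initial read position is at the start on glibc but at the
       end on some other platforms, so position it explicitly. */
    rewind(fp);

    char line[256];
    while (fgets(line, sizeof line, fp))
        replay_event(line);                  /* restore the previous state */

    /* An update stream needs a positioning call between reading and writing. */
    fseek(fp, 0, SEEK_END);
    fprintf(fp, "new-event\n");              /* append mode: always lands at end-of-file */

    fclose(fp);
    return 0;
}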

Related

Can an already opened FILE handle reflect changes to the underlying file without re-opening it?

Assuming a plain text file, foo.txt, and two processes:
Process A, a shell script, overwrites the file at regular intervals:
$ echo "example" > foo.txt
Process B, a C program, reads from the file at regular intervals:
fp = fopen("foo.txt", "r"); getline(&buf, &len, fp); fclose(fp);
In the C program, keeping the FILE *fp open after the initial fopen(), doing a rewind() and reading again does not seem to reflect the changes that have happened to the file in the meantime. Is the only way to see the updated contents to do an fclose() and fopen() cycle, or is there a way to re-use the already opened FILE handle and still read the most recently written data?
For context, I'm simply trying to find the most efficient way of doing this.
On Unix/Linux, when you create a file with a name which already existed, the old file is not deleted or altered in any way. A new file is created and the directory is updated to point at the new file instead of the old one.
The old file will continue to exist as long as some directory entry points at it (Unix file systems allow the same file to be pointed to by multiple directories) or some program has an open file handle to the file, which is more relevant to your question.
As long as you don't close fp, it continues to refer to the original file, even if that file is no longer referenced by the filesystem. When you close fp, the file will get garbage collected automatically, and the next time you open foo.txt, you'll get a file descriptor for whatever file happens to have that name at that point in time.
In short, with the shell script you indicate, your C program must close and reopen the file in order to see the new contents.
Theoretically, it would be possible for the shell script to overwrite the same file without deleting it, but (a) that's tricky to get right; (b) it's prone to race conditions; and (c) closing and reopening the file is not that time-consuming. But if you did that, you would see the changes. [Note 1]
In particular, it's common (and easy) to append to an existing file, and if you have a shell script which does that, you can keep the file descriptor open and see the changes. However, in that case you would normally have already read to the end of the file before the new data was appended, and the standard C library treats the feof() indicator as sticky; once it gets set, you will continue to get an EOF indication from new reads. If you suspect that some process will be writing more data to the file, you should reset the EOF indication with fseek(fp, 0, SEEK_CUR); before retrying the read.
Notes
As @Amadan points out in a comment, there are race conditions with echo text > foo.txt as well, although the window is a bit shorter. But you can definitely avoid race conditions by using the idiom echo text > temporary_file; mv -f temporary_file foo.txt, because the rename operation is atomic. Of course, that would definitely require you to close and reopen the file. But it's a good idea, particularly if the contents being written are long or critical, or if new files are created frequently.
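For the append-only case described at the end of the answer, a minimal sketch of keeping the same FILE * open and clearing the sticky EOF indication before each retry could look like this. The filename "foo.txt" comes from the question; the one-second poll interval is an assumption, and the overwrite/rename case still requires fclose()/fopen().

#include <stdio.h>
#include <unistd.h>    /* sleep() - POSIX */

int main(void) {
    FILE *fp = fopen("foo.txt", "r");
    if (fp == NULL) { perror("fopen"); return 1; }

    char line[512];
    for (;;) {
        while (fgets(line, sizeof line, fp))
            fputs(line, stdout);     /* consume whatever has been appended so far */

        fseek(fp, 0, SEEK_CUR);      /* reset the EOF indication (clearerr(fp) also works) */
        sleep(1);                    /* wait before polling for more data */
    }
}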

Closing and reopening piped file descriptors for writing in C

I have a question regarding what happens if I close a file descriptor after writing into it (e.g. fd[1] after piping fd) and then open it again to write. Will the data be overwritten, so that all the previous data is gone, or will it keep on writing from the point where the first write stopped?
I used the system call open() with the file descriptor and no other arguments.
If you close either of the file descriptors for a pipe, it can never be reopened. There is no name by which to reopen it. Even with /dev/fd file systems, once you close the file descriptor, the corresponding entry in the file system is removed — you're snookered.
Don't close a pipe if you might need to use it again.
Consider whether to make a duplicate of the pipe before closing; you can then either use the duplicate directly or duplicate the duplicate back to the original (pipe) file descriptor, but that's cheating; you didn't actually close all the references to the pipe's file descriptor. (Note that the process(es) at the other end of the pipe won't get an EOF indication because of the close — there's still an open file descriptor referring to the pipe.)
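Here is a minimal sketch of the "duplicate before closing" idea, kept within a single process for brevity: the duplicate keeps the pipe alive, and writes after restoring the descriptor continue where the earlier ones left off; nothing is overwritten. The buffer sizes and strings are illustrative assumptions.

#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    int saved = dup(fd[1]);              /* a second reference to the write end */
    write(fd[1], "first\n", 6);
    close(fd[1]);                        /* pipe stays alive: 'saved' still refers to it */

    dup2(saved, fd[1]);                  /* put the write end back on the original number */
    close(saved);
    write(fd[1], "second\n", 7);         /* appended after the earlier data */
    close(fd[1]);

    char buf[64];
    ssize_t n = read(fd[0], buf, sizeof buf);
    if (n > 0) fwrite(buf, 1, (size_t)n, stdout);   /* prints "first" then "second" */
    close(fd[0]);
    return 0;
}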

Read from a file opened with _O_TEMPORARY

Is it possible to write to a file that was created with _O_TEMPORARY and then later read the data that was written to that file? I've tried flushing the buffer, but that doesn't seem to work: subsequent _read() calls still return 0 bytes.
Obviously closing the file after writing and opening it again won't work since closing the file will delete it (that's what _O_TEMPORARY does), so what alternative is there?
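One likely explanation (an assumption, not stated in the question) for the 0-byte reads is that the file position is still at end-of-file after the writes. The sketch below, using the Windows CRT and an assumed filename, seeks back to the start before reading, so the file never has to be closed and is therefore not deleted.

#include <stdio.h>
#include <io.h>
#include <fcntl.h>
#include <sys/stat.h>

int main(void) {
    /* "scratch.tmp" is an assumed name; _O_TEMPORARY removes it on the last close. */
    int fd = _open("scratch.tmp",
                   _O_CREAT | _O_RDWR | _O_BINARY | _O_TEMPORARY,
                   _S_IREAD | _S_IWRITE);
    if (fd == -1) { perror("_open"); return 1; }

    _write(fd, "hello", 5);

    _lseek(fd, 0L, SEEK_SET);            /* rewind; without this _read() starts at EOF */
    char buf[16];
    int n = _read(fd, buf, sizeof buf);
    printf("read %d bytes\n", n);        /* expected: 5 */

    _close(fd);                          /* the temporary file is removed here */
    return 0;
}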

Does every process have its stdin stdout stderr defined as Keyboard, Terminal etc?

Does every process have stdin, stdout and stderr associated with the keyboard and terminal?
I have a small program. I want to replace the keyboard input with a file called new.txt. How do I go about it?
FILE *file1 = fopen("new.txt", "r");
close(0); // close stdin
dup2(file1, 0);
Would this work? Is my stdin now redirected to the file?
No, not every process. But on operating systems that give you a command-line window to type in, a program started from that command line will have stdin connected to the keyboard, and stdout and stderr both going to the terminal.
If one program starts another, then often the second program's standard streams are connected to the first program in some way; for example, the first program may have an open descriptor through which it can send text and pretend that it's the "keyboard" for the second process. The details vary by operating system, of course.
In response to your question:
Would this work ?
No. dup2() takes two file descriptors (ints) while you're passing it a FILE * and an int. You can't mix file handles (FILE *s) and file descriptors (ints) like that.
You could use open instead of fopen to open your file as a file descriptor instead of a file handle, or you could use fileno to get the file descriptor from a file handle. Or you could use freopen to reopen the stdin file handle to a new file.
Note that file descriptors (ints) are part of POSIX operating systems and are only portable to other POSIX systems, while file handles (FILE *s) are part of the C standard and are portable everywhere. If you use file descriptors, you'll have to rewrite your code to make it work on Windows.
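To illustrate the options above, here is a small sketch; "new.txt" comes from the question, the rest is illustrative. The freopen variant is standard C; the fileno()/dup2() variant (shown commented out) is POSIX-only.

#include <stdio.h>
#include <unistd.h>    /* dup2 - POSIX, only needed for the second variant */

int main(void) {
    /* Variant 1 (standard C): rebind the stdin stream itself. */
    if (freopen("new.txt", "r", stdin) == NULL) { perror("freopen"); return 1; }

    /* Variant 2 (POSIX): keep fopen(), but redirect file descriptor 0.
       FILE *file1 = fopen("new.txt", "r");
       if (file1 == NULL) { perror("fopen"); return 1; }
       dup2(fileno(file1), 0);    // fileno() turns the FILE* into an int descriptor
    */

    char line[128];
    if (fgets(line, sizeof line, stdin) != NULL)
        printf("first line: %s", line);
    return 0;
}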

Detecting file deletion after fopen

I'm working on code that detects changes in a file (a log file) and then processes those changes with the help of fseek and ftell. But if the file gets deleted and replaced (e.g. by logrotate), the program stops working without dying, because it no longer detects any changes (even if the file is recreated). fseek doesn't report errors, and neither does ftell.
How can I detect that the file was deleted? Maybe by reopening the file with another FILE * variable and comparing file descriptors? But how would I do that?
When a file gets deleted, it is not necessarily erased from your disk. In your case the program still has a handle to the old file. The old file handle will not get you any information about its deletion or replacement with another file.
An easy way to detect file deletion and recreation is using stat(2) and fstat(2). They give you a struct stat which contains the inode for the file. When a file is recreated (while the old one is still open), the old open file and the recreated one are different files and thus have different inodes. The inode field is st_ino. Yes, you need to poll this unless you wish to use Linux-specific features like inotify.
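A minimal sketch of that stat()/fstat() comparison, assuming a log file named "app.log" (a placeholder): if the inode or device of the open handle differs from whatever the name currently points at, the file has been rotated or recreated and should be closed and reopened.

#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>

/* Returns 1 if "path" no longer names the file that fp has open. */
static int file_was_replaced(FILE *fp, const char *path) {
    struct stat open_st, path_st;
    if (fstat(fileno(fp), &open_st) == -1) return 1;
    if (stat(path, &path_st) == -1) return 1;   /* deleted and not yet recreated */
    return open_st.st_ino != path_st.st_ino ||
           open_st.st_dev != path_st.st_dev;    /* a different file behind the same name */
}

int main(void) {
    FILE *fp = fopen("app.log", "r");
    if (fp == NULL) { perror("fopen"); return 1; }

    if (file_was_replaced(fp, "app.log"))
        puts("log was rotated: fclose() it and fopen() the path again");

    fclose(fp);
    return 0;
}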
You can periodically close the file and open it again; that way you will open the newly created one. Files actually get deleted only when there is no handle to them (an open file descriptor is a handle), and you are still holding the old file.
On Windows, you can set callbacks on modifications of the file system. Details here: http://msdn.microsoft.com/en-us/library/aa365261(VS.85).aspx
