I'm having trouble wrapping my head around files in C, specifically scope and duration. Say I create a file using
fopen("random.dat", "w");
How long does this file exist for? Does it get deleted once my program is finished running, or is it somehow reset? If I reopen the file further down in my code, only this time with the "r" reading argument, or "a", will I have conflicting streams since I'm opening a file that is already technically opened?
It's a normal file, just like all the other files on your computer. It exists until something deletes it, and its contents stay the same until something modifies it. It's not automatically deleted or "reset" when the program finishes. (C would be useless as a programming language if it couldn't save data to files that last longer than the program.)
However, since you're opening the file with the "w" option, the file will be truncated (reset to zero length) if it already exists — effectively, fopen deletes the existing file and creates a new empty one. That means that if you run your program a second time, the output from the first run will be replaced with the output from the second.
The effect of opening the same file more than once at the same time is platform-specific. On Unix/Linux it should work fine, but on Windows it may fail (though I haven't checked). But if you close the file (e.g. with fclose) before opening it again, that should work properly on any system.
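For example, here is a minimal sketch of the close-before-reopen pattern (random.dat is the name from your question): write the file, close it, then reopen it for reading:

#include <stdio.h>

int main(void)
{
    FILE *f = fopen("random.dat", "w");   /* truncates if it already exists */
    if (f == NULL)
        return 1;
    fputs("hello\n", f);
    fclose(f);                            /* close before reopening */

    f = fopen("random.dat", "r");         /* reopen the same file for reading */
    if (f != NULL) {
        int c;
        while ((c = fgetc(f)) != EOF)
            putchar(c);
        fclose(f);
    }
    return 0;
}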
The term file scope applies during compilation of a C program. It has nothing to do with anything that happens during execution.
Actually, the term is misleading; a better phrase would be compilation unit scope. It describes the visibility of names (variables, functions, structs, ...) defined outside of any block (statement), i.e. at the outermost level.
Files opened during program execution stay open until they are closed explicitly, independent of the program structure. However, since you need an object holding a reference to the file (a FILE * for the stdlib file functions), visibility is restricted to wherever you have access to that reference, either by scope or by passing it to functions explicitly.
A normal file that is opened, written, and closed will definitely not stop existing after the program exits or its reference goes out of scope (how else could you store data persistently?); it only disappears if it is explicitly deleted/unlinked or if the filesystem itself goes away (e.g. Linux tmpfs, which only exists until the OS is shut down). This is called lifetime, by the way.
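To make the distinction concrete, here is a small sketch (the file name log.txt is made up): the FILE * below has file scope in the compilation-unit sense, while the file on disk has a lifetime that extends beyond the program:

#include <stdio.h>

/* "File scope" (compilation-unit scope): this name is visible to every
   function below it in this source file, but that says nothing about how
   long the file on disk exists. */
static FILE *logfile;

static void log_line(const char *msg)
{
    if (logfile != NULL)
        fprintf(logfile, "%s\n", msg);
}

int main(void)
{
    logfile = fopen("log.txt", "a");
    log_line("started");
    if (logfile != NULL)
        fclose(logfile);   /* the stream is gone; log.txt persists on disk */
    return 0;
}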
I'm writing tests for a library that needs to create directories to test some functionality it provides. I did some research and found that there is a library function:
#include <stdio.h>
char *tmpnam(char *s);
It is possible to call it with NULL to get a unique path. The problem is that the linker warns me as follows:
warning: the use of `tmpnam' is dangerous, better use `mkstemp'
This answer also suggests using that function, but hardcoding /tmp at the beginning of the template looks strange. And checking the environment variables TMP, TMPDIR, etc. myself looks complicated.
Maybe there is some POSIX function that checks these variables for me? Also, are there any other pitfalls of using tmpnam besides the shared static buffer and race conditions?
The tmpnam() function doesn't create a directory; it generates a file name that didn't exist around the time the function was invoked, but which may exist by the time you try to use it with mkdir(), which does create directories. There is typically a plethora of related functions for doing roughly the same job, but they're different on each platform.
POSIX does provide mkdtemp() and mkstemp() — the former creates a directory, the latter a file; the same page documents both — where you specify the template to the function. That leaves you in charge of the directory within which the directory or file is created.
With both mkstemp() and mkdtemp(), the directory containing the new file or directory must already exist.
One of the primary problems with using tmpnam() is that you have essentially no control over where the file is created or what the filename looks like. Almost all the other functions give you some measure of control. Not being thread-safe is usually not a major issue — you can provide a buffer that will be used, making it thread-safe.
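As a rough sketch of the approach (the mytest. prefix is just a made-up example), you can check TMPDIR yourself and fall back to /tmp before handing the template to mkdtemp():

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    /* Respect TMPDIR if set, otherwise fall back to /tmp. */
    const char *base = getenv("TMPDIR");
    if (base == NULL || *base == '\0')
        base = "/tmp";

    char tmpl[4096];
    snprintf(tmpl, sizeof tmpl, "%s/mytest.XXXXXX", base);

    char *dir = mkdtemp(tmpl);   /* replaces the XXXXXX in place */
    if (dir == NULL) {
        perror("mkdtemp");
        return 1;
    }
    printf("created %s\n", dir);
    rmdir(dir);                  /* clean up after the test */
    return 0;
}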
I have one question of temporary file open in C program.
I know there is FOPEN_MAX in stdio.h. As far as I know, FOPEN_MAX is the number of (ordinary, non-temporary) files that can be opened simultaneously in a C program. But if I create a temporary file using tmpfile(), is the number of temporary files included in FOPEN_MAX?
Thanks in advance.
It is not written explicitly, but it seems the limit is the same whether or not the file is temporary.
https://www.opennet.ru/man.shtml?topic=tmpfile&category=3&russian=5
See the error codes for tmpfile():
EMFILE
{FOPEN_MAX} streams are currently open in the calling process.
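If you want to check empirically, here is a small sketch (note that the real limit is often higher than FOPEN_MAX, which is only the guaranteed minimum; the loop deliberately leaks the streams, since process exit cleans them up):

#include <errno.h>
#include <stdio.h>

int main(void)
{
    /* Open temporary files until the per-process stream limit is hit. */
    int n = 0;
    while (tmpfile() != NULL)
        n++;
    printf("opened %d temporary streams (FOPEN_MAX = %d); errno %s EMFILE\n",
           n, FOPEN_MAX, errno == EMFILE ? "==" : "!=");
    return 0;
}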
I'm writing an Xcode generic Kernel Extension that requires file parsing.
For example, I want to read the contents of the file A.txt and save it in a variable, just like using FILE, fopen, and EOF in C.
A generic Kernel Extension cannot include stdio.h, which results in a use of undeclared identifier error.
I am wondering if there is a way to parse a file in a generic Kernel Extension, as in ordinary C.
(This is the kind of code I would like to be able to use in the Kernel Extension:)
FILE *f;
int c;                         /* int, not char, so EOF can be detected reliably */
int index = 0;
f = fopen(filepath, "rt");
if (f != NULL) {
    while ((c = fgetc(f)) != EOF) {
        fileContent[index] = (char)c;
        index++;
    }
    fileContent[index] = '\0';
    fclose(f);                 /* release the stream when done */
}
It is certainly possible. You'll need to do the following:
Open the file with vnode_open(). This will turn your path into a vnode_t reference. You'll need a VFS authorisation context; you can obtain the current thread's context (i.e. open the file as the user in whose process's context the kernel is currently running) with vfs_context_create() if you don't already have one.
Perform I/O with vn_rdwr(). (Reads & writes use the same function, just pass UIO_READ or UIO_WRITE as the second argument.)
Close the file and drop references to the vnode with vnode_close(). Possibly dispose of a created VFS context using vfs_context_rele().
You'll want to look at the headerdocs for all of those functions; they're defined in <sys/vnode.h> in the Kernel.framework, and explaining every parameter exceeds the scope of an SO question/answer.
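That said, here is a hedged sketch of how the pieces might fit together (an outline, not a drop-in implementation; the helper name read_file and the exact flag choices are my own, and error handling is kept minimal):

#include <sys/fcntl.h>
#include <sys/proc.h>
#include <sys/vnode.h>

/* Sketch: read up to buflen bytes from path into buf using the VFS KPIs
   described above. */
static int read_file(const char *path, char *buf, int buflen)
{
    vnode_t vp = NULLVP;
    int resid = 0;
    vfs_context_t ctx = vfs_context_create(NULL);  /* current thread's context */

    int err = vnode_open(path, FREAD, 0, 0, &vp, ctx);
    if (err == 0) {
        err = vn_rdwr(UIO_READ, vp, (caddr_t)buf, buflen, 0 /* offset */,
                      UIO_SYSSPACE, 0 /* ioflag */,
                      vfs_context_ucred(ctx), &resid,
                      vfs_context_proc(ctx));
        vnode_close(vp, FREAD, ctx);
    }
    vfs_context_rele(ctx);
    return err;  /* on success, buflen - resid bytes were read into buf */
}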
Note: As a commenter has already pointed out however, you'll want to make sure that opening files is really what needs to be done to solve whatever your problem is, particularly if you're newish to kernel programming. If at all unsure, I suggest you post a question along the lines of "I'm trying to do X, is reading the file in a kext really the best way forward?" where X is sufficiently high level, not "I need the contents of a file in the kernel" but why, and why a file specifically?
In various kernel execution contexts, file I/O may not be safe (i.e. it may sometimes hang the system). If your kext loads early during boot, there might not be a file system yet.
File I/O causes a lot to happen in the system and can take a very long time in kernel terms, especially if you consider network file systems (including netboot environments!). If you're not careful, you might cause a bad user experience when the user tries to eject a volume containing a file your kext has open: the user has no way of resolving this; the OS can only suggest specific apps to close, it can't reach deep into your kext.
Plus, there are the usual warnings about kernel programming in general: just because it can be done in the kernel doesn't mean it should be. It's more the opposite: only if it can't be done any other way should it be done in a kext.
The wording of the C99 standard seems a bit ambiguous regarding the behavior of the remove function.
In section 7.19.4.1 paragraph 2:
The remove function causes the file whose name is the string pointed to by filename
to be no longer accessible by that name. A subsequent attempt to open that file using that
name will fail, unless it is created anew.
Does the C99 standard guarantee that the remove function will delete the file on the filesystem, or could an implementation simply ignore the file (leaving it on the filesystem, but inaccessible to the current program via that filename) for the remainder of the program?
I don't think you're guaranteed anything by the C standard, which says (N1570, 7.21.4.1 2):
The remove function causes the file whose name is the string pointed to by filename
to be no longer accessible by that name. A subsequent attempt to open that file using that
name will fail, unless it is created anew. If the file is open, the behavior of the remove
function is implementation-defined.
So, if you had a pathological implementation, it could be interpreted, I suppose, to mean that calling remove() merely has the effect of making the file invisible to this running instance of this program, but that would be, as I said, pathological.
However, all is not utterly stupid! The POSIX specification for remove() says,
If path does not name a directory, remove(path) shall be equivalent to unlink(path).
If path names a directory, remove(path) shall be equivalent to rmdir(path).
And the POSIX documentation for unlink() is pretty clear:
The unlink() function shall remove a link to a file.
Therefore, unless your implementation (a) Does not conform to POSIX requirements, and (b) is extremely pathological, you can be assured that the remove() function will actually try to delete the file, and will return 0 only if the file is actually deleted.
Of course, on most filesystems currently in use, filenames are decoupled from the actual files, so if you've got five links to an inode, that file's going to keep existing until you delete all five of them.
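A small sketch to illustrate (the file names are made up): remove() only drops one link, and the data stays reachable through the other name:

#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    FILE *f = fopen("original.txt", "w");
    if (f == NULL)
        return 1;
    fputs("data\n", f);
    fclose(f);

    link("original.txt", "second-name.txt");  /* create a second hard link */
    remove("original.txt");                   /* drops one link only */

    struct stat st;
    if (stat("second-name.txt", &st) == 0)
        printf("file still exists, %ld link(s) left\n", (long)st.st_nlink);
    return 0;
}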
References:
The Open Group Base Specifications Issue 6, IEEE Std 1003.1, 2004 Edition
The Open Group Base Specifications Issue 7, IEEE Std 1003.1, 2013 Edition
Note: "IEEE Std 1003.1 2004 Edition" is "IEEE Std 1003.1-2001 with corrigenda incorporated". "IEEE Std 1003.1 2013 Edition" is "IEEE Std 1003.1-2008 with corrigendum incorporated".
The C99 standard does not guarantee anything.
The file could remain there for any of the reasons unlink(2) can fail, for example if you don't have permission to delete it.
Consult http://linux.die.net/man/2/unlink for examples of what can go wrong.
On Unix / Linux, there are several reasons for the file not to be removed:
You don't have write permission on the file's directory (in that case, remove() will return an error, of course).
There is another hard link to the file. Then the file will remain on disk, but it will only be accessible via the other path name(s).
The file is kept open by some process. In that case the directory entry is removed immediately, so no subsequent open() can access the file (or an appropriate call will create a new file), but the file itself remains on disk as long as any process keeps it open.
Typically, remove() only unlinks the file from the file system. This means all the data that was in the file is still there; given enough expertise or time, someone would be able to get that data back.
There are some options to ensure the file can never be read again. The *nix utility shred will do that. If you want to do it from within a program, open the file for writing and write nonsense data over what you are looking to 'remove'.
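A rough sketch of that overwrite-then-remove idea (the helper name overwrite_and_remove is made up; note that on journaling or copy-on-write filesystems the old blocks may survive anyway, which is exactly why tools like shred exist):

#include <stdio.h>

/* Sketch: overwrite a file's current contents with zeros, then remove it. */
static int overwrite_and_remove(const char *path)
{
    FILE *f = fopen(path, "r+");
    if (f == NULL)
        return -1;

    fseek(f, 0, SEEK_END);
    long size = ftell(f);       /* how many bytes to overwrite */
    rewind(f);
    for (long i = 0; i < size; i++)
        fputc(0, f);            /* clobber the old contents */
    fflush(f);
    fclose(f);

    return remove(path);
}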
Is it safe to read directory entries via readdir() or scandir() while files are being created or deleted in this directory? Should I prefer one over the other?
EDIT: When I say "safe" I mean entries returned by these functions are valid and can be operated without crashing the program.
Thanks.
It depends on what you mean by "safe". They are safe in the sense that they should not crash your program. However, if you are creating/deleting files as you are reading/scanning that directory, the set of files you get back might not be up to date.
When reading/scanning a directory for directory entries, the file pointer (a directory is just a special type of file), moves forward. However, depending upon the file system, there may be nothing to prevent new files from being created in an empty directory entry slot behind your file pointer. Consequently, newly added directory entries may not be immediately detected by readdir()/scandir(). Similar reasoning applies for file deletion / directory entry removal.
Hope this helps.
What's your definition of safety? You won't crash the system, and readdir/scandir won't crash your program, although they might give you data that is immediately out of date.
The usual semantics for reading a directory are that if you read the directory from beginning to end, you will see all of the files that didn't change during that time exactly once, and you will see files that were created or deleted during that time at most once.
On UNIX-like systems readdir() and scandir() are library functions implemented on top of the same underlying system call (getdents() in Linux, getdirentries() in BSD). So there shouldn't be much difference in their behavior in this regard. I think readdir() is a bit more standard, and therefore will be more portable.
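For example, here is a minimal sketch of a scan that tolerates concurrent deletion: an entry returned by readdir() may already be gone by the time you stat() it, so ENOENT is treated as a stale entry rather than an error:

#include <dirent.h>
#include <errno.h>
#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    DIR *d = opendir(".");
    if (d == NULL)
        return 1;

    struct dirent *e;
    while ((e = readdir(d)) != NULL) {
        struct stat st;
        if (stat(e->d_name, &st) == -1) {
            if (errno == ENOENT)
                continue;        /* entry deleted since readdir() returned it */
            perror(e->d_name);
            continue;
        }
        printf("%s (%lld bytes)\n", e->d_name, (long long)st.st_size);
    }
    closedir(d);
    return 0;
}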