When using fopen(), Microsoft Visual Studio prints
warning C4996: 'fopen' was declared deprecated
The reason given is:
This function or variable may be unsafe. Consider using fopen_s instead.
What is unsafe about fopen() that fopen_s() makes safer?
How can fopen() be used in a safe way (if possible)?
I don't want to know how to suppress the warning - there are enough Stack Overflow articles that answer that question.
The Microsoft CRT implements the secure library enhancements described in C11 Annex K, which is normative but not mandatory. fopen_s() is described in section K.3.5.2.1; the underlying issue is also covered by rule FIO06-C of the CERT institute.
At issue is that fopen() dates from simpler times, when programmers could still assume that their program was the only one manipulating files, an assumption that has never really been true. fopen() has no way to describe how access to the file by other processes should be limited, and CRT implementations traditionally opened the file without denying any access. Non-standard alternatives, like _fsopen(), have been used to fix this problem.
This has consequences: if the file is opened for writing, another process can also open it for writing, and the file content will be hopelessly corrupted. If the file is opened for reading while another process is writing to it, the view of the file content is unpredictable.
fopen_s() solves these problems by denying all access if the file is opened for writing and only allowing read access when the file is opened for reading.
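A minimal sketch of the difference on the Microsoft CRT (the file name "data.txt" is only illustrative; _SH_DENYWR comes from the Microsoft-specific <share.h>):

#include <stdio.h>
#include <share.h>   /* _SH_DENYWR and friends, Microsoft-specific */

int main(void)
{
    FILE *f = NULL;

    /* fopen_s(): error code out, FILE* via an out-parameter. On the
       Microsoft CRT, opening for writing also denies other processes
       access to the file while it is open. */
    errno_t err = fopen_s(&f, "data.txt", "w");
    if (err == 0) {
        fputs("hello\n", f);
        fclose(f);
    }

    /* _fsopen(): the non-standard alternative where the sharing mode is
       chosen explicitly; here readers are allowed but writers are denied. */
    FILE *g = _fsopen("data.txt", "r", _SH_DENYWR);
    if (g != NULL)
        fclose(g);

    return 0;
}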
Related
I need to parse a file from an Xcode generic Kernel Extension.
For example, I want to read the contents of the A.txt file and store it in a variable, just as you would with FILE, fopen, and EOF in C.
As you can see, a generic Kernel Extension cannot include stdio.h, which results in a "use of undeclared identifier" error.
I am wondering if there is a way to parse a file in a generic Kernel Extension, like in C.
(This is the kind of code I would normally use in plain C:)
FILE *f;
int c;                       /* int, not char, so EOF can be detected reliably */
int index = 0;

f = fopen(filepath, "rt");
if (f == NULL)
    return;                  /* or handle the error */

while ((c = fgetc(f)) != EOF) {
    fileContent[index] = (char)c;
    index++;
}
fileContent[index] = '\0';
fclose(f);
It is certainly possible. You'll need to do the following:
Open the file with vnode_open(). This will turn your path into a vnode_t reference. You'll need a VFS authorisation context; you can obtain the current thread's context (i.e. open the file as the user in whose process's context the kernel is currently running) with vfs_context_create() if you don't already have one.
Perform I/O with vn_rdwr(). (Reads & writes use the same function, just pass UIO_READ or UIO_WRITE as the second argument.)
Close the file and drop references to the vnode with vnode_close(). Possibly dispose of a created VFS context using vfs_context_rele().
You'll want to look at the headerdocs for all of those functions; they're defined in <sys/vnode.h> in the Kernel.framework, and explaining every parameter exceeds the scope of a SO question/answer.
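To give a rough idea of how those calls fit together, here's a minimal sketch with almost no error handling; the specific flags (FREAD, the zero lookup flags) and the helper name read_file_sketch are my assumptions, so treat it as a starting point rather than a reference:

#include <sys/vnode.h>
#include <sys/fcntl.h>
#include <sys/uio.h>

/* Sketch only: check the headerdocs in <sys/vnode.h> for exact semantics. */
static int read_file_sketch(const char *path, char *buf, int buflen)
{
    vfs_context_t ctx = vfs_context_create(NULL);   /* current thread's context */
    vnode_t vp = NULLVP;
    int resid = 0;

    int err = vnode_open(path, FREAD, 0, 0, &vp, ctx);
    if (err == 0) {
        /* UIO_SYSSPACE because buf lives in kernel memory */
        err = vn_rdwr(UIO_READ, vp, (caddr_t)buf, buflen, 0 /* offset */,
                      UIO_SYSSPACE, 0 /* ioflg */,
                      vfs_context_ucred(ctx), &resid, vfs_context_proc(ctx));
        vnode_close(vp, FREAD, ctx);
    }
    vfs_context_rele(ctx);
    return err;   /* on success, buflen - resid bytes were read into buf */
}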
Note: as a commenter has already pointed out, however, you'll want to make sure that opening files is really what needs to be done to solve whatever your problem is, particularly if you're new to kernel programming. If at all unsure, I suggest you post a question along the lines of "I'm trying to do X, is reading the file in a kext really the best way forward?", where X is sufficiently high level: not "I need the contents of a file in the kernel" but why, and why a file specifically?
In various kernel execution contexts, file I/O may not be safe (i.e. it may sometimes hang the system). If your kext loads early during boot, there might not be a file system yet. File I/O causes a lot to happen in the system and can take a very long time in kernel terms, especially if you consider network file systems (including netboot environments!). If you're not careful, you might cause a bad user experience when the user tries to eject a volume containing a file your kext has open: the user has no way of resolving this, because the OS can only suggest specific apps to close; it can't reach deep into your kext. Plus, there are the usual warnings about kernel programming in general: just because it can be done in the kernel doesn't mean it should be. It's more the opposite: only if it can't be done any other way should it be done in a kext.
The GNU libc manual mentions that there are historical reasons that the data structure representing "streams" is called FILE.
After getting curious, I've googled around and tried to look into it, but I can't seem to find this fabulous tale.
Any ideas?
While I don't have a citation for this, it's likely that the historical reason for the creation of the term "stream" is standardization of the C language. FILE is the type that was always used with FILE * handles for stdio in C, but in order to express the specification for the stdio interfaces, it's necessary to be able to distinguish between a file (the actual storage object) and the handle for an open file, and "stream" seems to have been the word that was chosen.
Is there anything like a string file in stdio/string/stdlib? I mean a special way to fopen a FILE stream that actually directs the writes to an internal buffer and takes care of buffer allocation/reallocation? After fclose, the text should be available as a null-terminated char[] or similar.
I need to interface to legacy code that receives a FILE* as an argument and writes to it, and I'd prefer to avoid writing to a temporary disk file.
Other forms of storage could do instead of char[] (e.g. a string), but a FILE* pointer must be available.
I am looking for an alternative to creating a temporary disk file.
fmemopen & open_memstream are in the POSIX 2008 standard, probably inspired by GNU libc string streams, and give in-memory FILE* streams.
See also this question quite similar to yours, and also that answer.
BTW, many operating systems have RAM-based or virtual-memory-based filesystems (à la tmpfs).
If you are coding in C++11 (not in C), and perhaps in some earlier C++ standard, you can of course use std::stringstream.
So you could use open_memstream on POSIX, and some other solution on Windows (guarded with #if _POSIX_C_SOURCE > 200809L per feature_test_macros(7) ...).
The C standard does not provide (yet) any in-memory FILE streams, so if you need them you have to code or use platform-specific functions.
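For instance, a minimal sketch using open_memstream (legacy_write_report is only a stand-in for the legacy code that takes a FILE*):

#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <stdlib.h>

static void legacy_write_report(FILE *out)   /* stand-in for the legacy code */
{
    fprintf(out, "report line 1\n");
}

int main(void)
{
    char *buf = NULL;
    size_t len = 0;

    FILE *f = open_memstream(&buf, &len);    /* writes go to memory, not disk */
    if (f == NULL)
        return 1;

    legacy_write_report(f);
    fclose(f);                               /* buf and len are now valid */

    printf("got %zu bytes: %s", len, buf);   /* buf is NUL-terminated */
    free(buf);
    return 0;
}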
Create the temporary file using CreateFile(... FILE_ATTRIBUTE_TEMPORARY, FILE_FLAG_DELETE_ON_CLOSE ...) and then convert the HANDLE to FILE*.
You said you didn't want to write to a temporary disk file, so these flags to CreateFile are a strong hint to Windows to keep the file in the cache if possible. And if Windows were to run out of RAM, even a char[] could end up in the swap file anyway.
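A sketch of that route; _open_osfhandle and _fdopen are the usual CRT calls for getting from a HANDLE to a FILE*, and the file name and helper name here are only illustrative:

#include <windows.h>
#include <io.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>

FILE *open_temp_stream(void)   /* hypothetical helper */
{
    HANDLE h = CreateFileA("scratch.tmp",                /* illustrative name */
                           GENERIC_READ | GENERIC_WRITE,
                           0, NULL, CREATE_ALWAYS,
                           FILE_ATTRIBUTE_TEMPORARY | FILE_FLAG_DELETE_ON_CLOSE,
                           NULL);
    if (h == INVALID_HANDLE_VALUE)
        return NULL;

    int fd = _open_osfhandle((intptr_t)h, _O_RDWR);      /* wrap HANDLE in a CRT fd */
    if (fd == -1) {
        CloseHandle(h);
        return NULL;
    }
    return _fdopen(fd, "w+");                            /* wrap the fd in a FILE* */
}

/* fclose() on the returned FILE* closes the fd and the HANDLE, which
   triggers the delete-on-close. */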
What is the security hole in tmpfile and how does tmpfile_s solve it?
It appears to fall under the "Enhanced error reporting" category of upgrades to the Windows CRT. In this case, that basically means it returns a status value and fills in a caller-provided FILE pointer, rather than just returning a FILE pointer.
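In other words, assuming the Annex K / Microsoft CRT declarations, the difference is just the calling convention, something like:

#include <stdio.h>

int make_temp(FILE **out)   /* hypothetical wrapper, for illustration */
{
    /* classic form: *out = tmpfile(); returns NULL on failure, no status code */

    /* Annex K / MS CRT form: status value back, FILE* via the out-parameter */
    errno_t err = tmpfile_s(out);
    return (int)err;
}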
I doubt there was actually a security flaw with tmpfile, more that Microsoft were bringing the implementation of it to the same standards as other functions in their CRT without breaking API compatibility with a standard CRT, as described here: http://msdn.microsoft.com/en-us/library/8ef0s5kh.aspx.
The wording of the C99 standard seems a bit ambiguous regarding the behavior of the remove function.
In section 7.19.4.1 paragraph 2:
The remove function causes the file whose name is the string pointed to by filename
to be no longer accessible by that name. A subsequent attempt to open that file using that
name will fail, unless it is created anew.
Does the C99 standard guarantee that the remove function will delete the file on the filesystem, or could an implementation simply ignore the file (leaving it on the filesystem, but inaccessible to the current program via that filename) for the remainder of the program?
I don't think you're guaranteed anything by the C standard, which says (N1570, 7.21.4.1 2):
The remove function causes the file whose name is the string pointed to by filename
to be no longer accessible by that name. A subsequent attempt to open that file using that
name will fail, unless it is created anew. If the file is open, the behavior of the remove
function is implementation-defined.
So, if you had a pathological implementation, it could be interpreted, I suppose, to mean that calling remove() merely has the effect of making the file invisible to this running instance of this program, but that would be, as I said, pathological.
However, all is not utterly stupid! The POSIX specification for remove() says,
If path does not name a directory, remove(path) shall be equivalent to unlink(path).
If path names a directory, remove(path) shall be equivalent to rmdir(path).
And the POSIX documentation for unlink() is pretty clear:
The unlink() function shall remove a link to a file.
Therefore, unless your implementation (a) does not conform to POSIX requirements, and (b) is extremely pathological, you can be assured that the remove() function will actually try to delete the file, and will return 0 only if the file is actually deleted.
Of course, on most filesystems currently in use, filenames are decoupled from the actual files, so if you've got five links to an inode, that file's going to keep existing until you delete all five of them.
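So the portable thing to do is check the return value; on POSIX systems the reason for a failure is left in errno (the C standard by itself does not require errno to be set). A small sketch:

#include <stdio.h>
#include <errno.h>
#include <string.h>

int delete_file(const char *path)   /* hypothetical helper */
{
    if (remove(path) != 0) {
        /* strerror(errno) is meaningful on POSIX; plain C alone does not
           guarantee that remove() sets errno */
        fprintf(stderr, "remove(%s) failed: %s\n", path, strerror(errno));
        return -1;
    }
    return 0;   /* the name is gone; the data may live on via other links */
}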
References:
The Open Group Base Specifications Issue 6, IEEE Std 1003.1, 2004 Edition
The Open Group Base Specifications Issue 7, IEEE Std 1003.1, 2013 Edition
Note: "IEEE Std 1003.1, 2004 Edition" is IEEE Std 1003.1-2001 with corrigenda incorporated; "IEEE Std 1003.1, 2013 Edition" is IEEE Std 1003.1-2008 with corrigenda incorporated.
The C99 standard does not guarantee anything.
The file could remain there for any of the reasons unlink(2) can fail, for example because you don't have permission to remove it.
Consult http://linux.die.net/man/2/unlink for examples what can all go wrong.
On Unix / Linux, there are several reasons for the file not to be removed:
You don't have write permission on the file's directory (in that case, remove() will return an error, of course)
There is another hard link to the file. Then the file will remain on disk but will only be accessible via the other path name(s)
The file is kept open by some process. In that case the directory entry is removed immediately, so that no subsequent open() can access the file (or an appropriate call will create a new file), but the file itself will remain on disk as long as any process keeps it open.
Typically, that only unlinks the file from the file system. This means all the data that was in the file is still there. Given enough experience or time, someone would be able to get that data back.
There are some options to keep the file from ever being read again. The *nix utility shred will do that. If you want to do it from within a program, open the file for writing and write nonsense data over what you are looking to 'remove'.
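A rough sketch of that overwrite-then-delete idea (with the usual caveats: on journaling or copy-on-write filesystems and on SSDs the old blocks may survive anyway, which is the same limitation shred documents; the helper name is mine):

#include <stdio.h>
#include <string.h>

int scrub_and_remove(const char *path)   /* hypothetical helper */
{
    FILE *f = fopen(path, "r+b");
    if (f == NULL)
        return -1;

    /* find the file size, then overwrite the contents with zeros */
    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    rewind(f);

    char zeros[4096];
    memset(zeros, 0, sizeof zeros);
    for (long written = 0; written < size; ) {
        long chunk = size - written;
        if (chunk > (long)sizeof zeros)
            chunk = (long)sizeof zeros;
        if (fwrite(zeros, 1, (size_t)chunk, f) != (size_t)chunk)
            break;
        written += chunk;
    }
    fflush(f);
    fclose(f);

    return remove(path);   /* now unlink the (overwritten) file */
}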