Performance of fopen vs stat in C

I'm writing several C programs for an embedded system where every bit of performance we can squeeze out will matter. Part of that is accessing log files. When determining if a file exists, is there any performance difference between using open/fopen and stat? I've been using stat on the assumption that it only has to do a quick check against the file system, whereas fopen would have to actually gain access to a file and manipulate internal data structures before returning. Is there any merit to this?

stat is probably better, since it doesn't have to allocate resources for actually reading the file. You won't have to call fclose to release those resources, and you may also benefit from caching of recently checked files.
When in doubt, test it out. Time a big loop that checks for 1000 files using each method, with an appropriate mix of filenames that exist and don't exist.
If you have the source code for stat and fopen, you should be able to read through it and get an idea as to which will require more resources.

stat() does not need to create any user-side memory data structures. No matter how aggressive your caching policy, stat will not try to pre-read the file's data. I think stat() is a safer bet.
How about access()?

If you want to squeeze out performance with respect to querying file existence and opening files, minimize the number of fopen and stat calls in general. The call to the file system should be way more expensive than anything the runtime does to translate it.

For only testing file existence, stat() would be preferred over fopen().
However, depending upon your setup, it could be worthwhile to use lstat() instead of stat().
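A minimal sketch of such an existence check, assuming a POSIX system (the path is just a placeholder):

#include <sys/stat.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    const char *path = "/var/log/app.log";   /* placeholder path */
    struct stat sb;

    /* stat() follows symlinks; swap in lstat() if a dangling link should count as existing. */
    if (stat(path, &sb) == 0)
        printf("%s exists (%lld bytes)\n", path, (long long)sb.st_size);
    else
        printf("%s missing\n", path);

    /* access(F_OK) is the lighter alternative mentioned above: it only answers "does it exist?". */
    if (access(path, F_OK) == 0)
        printf("access() also says it exists\n");

    return 0;
}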

Related

Is fgetpos/fsetpos any faster than fseek?

I know fgetpos/fsetpos are used for returning to a file position.
But if I accessed that position with fseek to begin with, is it more efficient to use fgetpos/fsetpos to return later, or just the same fseek again?
Is fgetpos/fsetpos any faster than fseek?
For general file positioning, fseek()/ftell() are limited to file sizes of about LONG_MAX. fsetpos()/fgetpos() are designed to handle the file system's full range of file sizes.
For large files, fseek()/ftell() are not an option. @Thomas Padron-McCarthy
When coding C99 onward, robust code uses fsetpos()/fgetpos() in lieu of a minor optimization that may or may not be present with the more limited fseek()/ftell().
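A short sketch of saving and restoring a position with fgetpos()/fsetpos() (the file name is a placeholder):

#include <stdio.h>

int main(void)
{
    FILE *fp = fopen("data.bin", "rb");       /* placeholder file name */
    if (!fp)
        return 1;

    fpos_t mark;
    /* Remember the current position; works even beyond LONG_MAX bytes. */
    if (fgetpos(fp, &mark) != 0)
        return 1;

    /* ... read ahead, parse, etc. ... */

    /* Jump back to the remembered spot; same effect as repeating the
       original fseek(), but not limited by long int offsets. */
    if (fsetpos(fp, &mark) != 0)
        return 1;

    fclose(fp);
    return 0;
}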

Under what circumstances will fseek/ftell or fstat fail to get the size of a file?

I'm trying to access a file as a char array, via memory mapping it, or copying it into a buffer, or whatever, but both of these need the size of the file. Easy enough, thought I: just use fseek(file, 0, SEEK_END).
However, according to C++ Reference, "Library implementations [of fseek] are allowed to not meaningfully support SEEK_END," meaning that I can't get the size of a file using that method.
Next I tried fstat, which is less portable, but at least will provide a compile error rather than a runtime problem; but The Open Group notes that fstat does not need to provide a meaningful value for st_size.
So: has anyone actually come across a system where these methods do not work?
The notes about files not having valid sizes reported are there because, in Linux, there are many "files" for which "file size" is not a meaningful concept.
There are two main cases:
The file is not a regular file. In particular, pipes, sockets, and character device files are streams of data where data is consumed on read, and not put on disk, so a size does not make much sense.
The file system that the file resides on does not provide the file size. This is especially common in "virtual" filesystems, where the file contents are generated when read and, again, have no disk backing.
To expand, filesystems do not necessarily keep file contents on disk. Since the filesystem API is a convenient API for expressing hierarchical data, and there are many tools for operating on files, it sometimes makes sense to expose data as a file hierarchy. For example, /proc/ contains information about processes (such as open files and used memory) and /sys/ contains driver-specific information and options (anything from sensor sampling rates to LED colors). With FUSE (Filesystem in Userspace), you can program a filesystem to do pretty much anything, from SSHing into a remote computer to exposing Twitter as a filesystem.
For a lot of these filesystems, "file size" may not make much sense. For example, an LED driver might expose three files red, green, and blue. They can be read to get the current color or written to to change the color. Now, is it really worth implementing a file size for them, since they are merely settings in RAM, don't have any disk backing, and can't be removed? Not really.
In summary, files are not necessarily "things on disk". For many of the more advanced usages of files, "file size" either does not make sense or is not worth providing.
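A hedged sketch of the usual defensive check that follows from this: trust st_size only for regular files (POSIX assumed; the path is just an illustrative example of a virtual file):

#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    const char *path = "/proc/cpuinfo";       /* example path; often reports size 0 */
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return 1;

    struct stat sb;
    if (fstat(fd, &sb) == 0 && S_ISREG(sb.st_mode) && sb.st_size > 0) {
        printf("regular file, %lld bytes\n", (long long)sb.st_size);
    } else {
        /* Pipe, device, or virtual file: read until EOF instead of trusting st_size. */
        printf("size not meaningful; read incrementally\n");
    }

    close(fd);
    return 0;
}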

What is the best way to truncate the beginning of a file in C?

There are many similar questions, but nothing that answers this specifically after googling around quite a bit. Here goes:
Say we have a file (could be binary, and much bigger too):
abcdefghijklmnopqrztuvwxyz
What is the best way in C to "move" the rightmost portion of this file to the left, truncating the beginning of the file? So, for example, "front truncating" 7 bytes would change the file on disk to be:
hijklmnopqrztuvwxyz
I must avoid temporary files, and would prefer not to use a large buffer to read the whole file into memory. One possible method I thought of is to use fopen with "rb+" flag, and constantly fseek back and forth reading and writing to copy bytes starting from offset to the beginning, then setEndOfFile to truncate at the end. That seems to be a lot of seeking (possibly inefficient).
Another way would be to fopen the same file twice, and use fgetc and fputc with the respective file pointers. Is this even possible?
If there are other ways, I'd love to read all of them.
You could mmap the file into memory and then memmove the contents. You would have to truncate the file separately.
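A rough sketch of that mmap approach, assuming a POSIX system (error handling trimmed; the offset would be the 7 bytes from the question):

#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <string.h>

int front_truncate(const char *path, off_t cut)
{
    int fd = open(path, O_RDWR);
    if (fd < 0)
        return -1;

    struct stat sb;
    if (fstat(fd, &sb) < 0 || sb.st_size <= cut) { close(fd); return -1; }

    /* MAP_SHARED so the memmove is written back to the file. */
    char *p = mmap(NULL, sb.st_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { close(fd); return -1; }

    /* Shift the tail of the file to the front, then drop the leftover bytes. */
    memmove(p, p + cut, sb.st_size - cut);
    munmap(p, sb.st_size);
    ftruncate(fd, sb.st_size - cut);
    close(fd);
    return 0;
}
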
You don't have to use an enormous buffer, and the kernel will do the hard work for you. But yes, reading a buffer-full from further along the file and writing it nearer the beginning is the way to do it if you can't afford the simpler job of creating a new file, copying what you want into it, and then copying the new (temporary) file over the old one. I wouldn't rule out the possibility that copying what you want to a new file and then either moving it into place or copying it over the old one will be faster than the shuffling process you describe. If the number of bytes to be removed were a disk block size rather than 7 bytes, the situation might be different, but probably not. The only disadvantage of the copying approach is that it requires more intermediate disk space.
Your outline approach will require the use of truncate() or ftruncate() to shorten the file to the proper length, assuming you are on a POSIX system. If you don't have truncate(), then you will need to do the copying.
Note that opening the file twice will work OK if you are careful not to clobber the file when opening for writing - using "r+b" mode with fopen(), or avoiding O_TRUNC with open().
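A sketch of that in-place shuffle, assuming POSIX truncate() is available (buffer size is arbitrary; long offsets limit this sketch to files under LONG_MAX bytes):

#include <stdio.h>
#include <unistd.h>

int shuffle_down(const char *path, long cut)
{
    FILE *fp = fopen(path, "r+b");            /* update mode: read/write, no truncation */
    if (!fp)
        return -1;

    char buf[64 * 1024];
    long rd = cut, wr = 0;
    size_t n;

    for (;;) {
        /* Read a chunk from further along the file... */
        fseek(fp, rd, SEEK_SET);
        n = fread(buf, 1, sizeof buf, fp);
        if (n == 0)
            break;
        /* ...and write it back nearer the beginning. */
        fseek(fp, wr, SEEK_SET);
        fwrite(buf, 1, n, fp);
        rd += n;
        wr += n;
    }

    fflush(fp);
    fclose(fp);
    return truncate(path, wr);                /* drop the now-duplicated tail */
}
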
If you are using Linux, since Kernel 3.15 you can use
#include <fcntl.h>
int fallocate(int fd, int mode, off_t offset, off_t len);
with the FALLOC_FL_COLLAPSE_RANGE flag.
http://manpages.ubuntu.com/manpages/disco/en/man2/fallocate.2.html
Note that not all file systems support it but most modern ones such as ext4 and xfs do.
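A hedged sketch of the collapse-range call (Linux 3.15 or later, glibc with _GNU_SOURCE; note that the offset and length must be multiples of the filesystem block size, so it cannot remove exactly 7 bytes):

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

int collapse_front(const char *path, off_t len)
{
    int fd = open(path, O_RDWR);
    if (fd < 0)
        return -1;

    /* Removes bytes [0, len) from the file without rewriting the rest;
       len must be a multiple of the filesystem block size. */
    int rc = fallocate(fd, FALLOC_FL_COLLAPSE_RANGE, 0, len);
    close(fd);
    return rc;
}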

Does fread fail for large files?

I have to analyze a 16 GB file. I am reading through the file sequentially using fread() and fseek(). Is it feasible? Will fread() work for such a large file?
You don't mention a language, so I'm going to assume C.
I don't see any problems with fread, but fseek and ftell may have issues.
Those functions use long int as the data type to hold the file position, rather than something intelligent like fpos_t or even size_t. This means that they can fail to work on a file over 2 GB, and can certainly fail on a 16 GB file.
You need to see how big long int is on your platform. If it's 64 bits, you're fine. If it's 32, you are likely to have problems when using ftell to measure distance from the start of the file.
Consider using fgetpos and fsetpos instead.
Thanks for the response. I figured out where I was going wrong. fseek() and ftell() do not work for files larger than 4GB. I used _fseeki64() and _ftelli64() and it is working fine now.
If implemented correctly this shouldn't be a problem. I assume by sequentially you mean you're looking at the file in discrete chunks and advancing your file pointer.
Check out http://www.computing.net/answers/programming/using-fread-with-a-large-file-/10254.html
It sounds like he was doing nearly the same thing as you.
It depends on what you want to do. If you want to read the whole 16GB of data in memory, then chances are that you'll run out of memory or application heap space.
Rather read the data chunk by chunk and do processing on those chunks (and free resources when done).
But, besides all this, decide which approach you want to do (using fread() or istream, etc.) and do some test cases to see which works better for you.
If you're on a POSIX-ish system, you'll need to make sure you've built your program with 64-bit file offset support. POSIX mandates (or at least allows, and most systems enforce this) that the implementation deny I/O operations on files whose size doesn't fit in off_t, even if the only I/O being performed is sequential with no seeking.
On Linux, this means you need to use -D_FILE_OFFSET_BITS=64 on the gcc command line.
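A minimal sketch of such a chunked, sequential pass (build with -D_FILE_OFFSET_BITS=64 on 32-bit Linux; the file name and chunk size are placeholders):

#include <stdio.h>

int main(void)
{
    FILE *fp = fopen("big16gb.dat", "rb");    /* placeholder file name */
    if (!fp)
        return 1;

    static char buf[1 << 20];                 /* 1 MiB chunks */
    size_t n;
    unsigned long long total = 0;

    /* Purely sequential reads: no ftell()/fseek() needed, so long int
       offset limits never come into play. */
    while ((n = fread(buf, 1, sizeof buf, fp)) > 0) {
        total += n;
        /* ... process buf[0..n) here ... */
    }

    printf("read %llu bytes\n", total);
    fclose(fp);
    return 0;
}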

Fopen failing for binary file

I have a huge binary file which is 2148181087 bytes (> 2gb)
I am trying to do fopen(file, "r") and it fails with:
Can not open: xyz file (Value too large to be stored in data type)
I read on the man page EOVERFLOW error is received when the file size > 2gb.
The weird thing is, I use a different input file which is also "almost" as big as the first file 2142884400 bytes (also >2gb), fopen works fine with this.
Is there any cutoff on the file size for fopen or is there any alternate way to solve this?
The cutoff is 2GB which, contrary to what you may think, is not 2,000,000,000 (2 x 1000^3).
It's 2,147,483,648 (2 x 1024^3). So your second file, which works, is actually less than 2GB in size.
2GB, in the computer world, is only 2,000,000,000 in the minds of hard drive manufacturers so they can say their disks are bigger than they really are :-) - it lets them say their disks are actually 2.1GB.
The "alternative way to solve this" depends on which operating system/library you are using.
For the GNU C library, you can use fopen64 as a replacement for fopen; it uses 64-bit file offsets (you can also define _FILE_OFFSET_BITS=64 to have plain fopen use 64-bit offsets).
For Windows, you'll probably have to switch to the Win32 file management API, with which you can use CreateFile.
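A small sketch of the glibc large-file options just mentioned (the path is a placeholder):

/* Define the feature macro before any #include ... */
#define _FILE_OFFSET_BITS 64   /* makes plain fopen()/off_t 64-bit on 32-bit glibc */
#include <stdio.h>

int main(void)
{
    FILE *fp = fopen("huge.bin", "rb");       /* placeholder; a >2GB file now opens */
    /* ... or, with _LARGEFILE64_SOURCE defined, call the explicit variant:
       fopen64("huge.bin", "rb") */
    if (fp)
        fclose(fp);
    return 0;
}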
