Reading a file using fread in C

I lack formal knowledge of operating systems and C. My questions are as follows.
When I read the first byte of a file using fread in C, is the entire disk block containing that byte brought into memory, or just that byte?
If the entire block is brought into memory, what happens when I read the second byte, given that the block containing it is already in memory?
Is there any significance to reading the file in multiples of the disk block size?
Where is the block that was read kept in memory?

Here are my answers:
More than one block; the default buffering is 64K. setvbuf can change that.
On the second read there is no I/O; the data is read from the buffer.
No. A file is usually smaller than its allocated disk space, and reading past the end of the file returns EOF even if you are still within the allocated blocks.
It is part of the FILE structure. This is implementation (compiler) specific, so don't touch it.
The caching described above is done by the C runtime library, not the OS. The OS may or may not cache disk blocks as well; that is a separate mechanism.
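For illustration, here is a minimal sketch of changing the stdio buffer size with setvbuf; the filename is a placeholder, and the 64K figure is only an example since the actual default is implementation-defined:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        FILE *fp = fopen("data.bin", "rb");   /* hypothetical input file */
        if (fp == NULL) {
            perror("fopen");
            return EXIT_FAILURE;
        }

        /* Ask for a fully buffered stream with a 64K buffer; passing NULL
           lets the library allocate the buffer itself. setvbuf must be
           called before the first read or write on the stream. */
        if (setvbuf(fp, NULL, _IOFBF, 64 * 1024) != 0)
            fprintf(stderr, "setvbuf failed\n");

        unsigned char byte;
        /* The first fread triggers a real read that fills the buffer;
           subsequent small reads are served from that buffer, not the disk. */
        if (fread(&byte, 1, 1, fp) == 1)
            printf("first byte: 0x%02x\n", (unsigned)byte);

        fclose(fp);
        return EXIT_SUCCESS;
    }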

Related

Memory Mapped I/O in Unix

I am unable to understand how files are managed with memory-mapped I/O. Normally, if we open a file using open or fopen, it returns a file descriptor or a file pointer respectively. After this open, where does the file reside for processing? Is it in memory (a copy of the file that is on the hard disk) or not? If it is not in memory, where is the data fetched from by subsequent read or write system calls? Does each call to read or write fetch data from the hard disk? Or is a copy of the file kept in memory, manipulated there by the process, and copied back to the hard disk once the process is finished? Which of these scenarios is how it actually works?
The following is the definition given for memory-mapped I/O in Advanced Programming in the UNIX Environment (2nd Edition):
Memory-mapped I/O lets us map a file on disk into a buffer in memory so that, when we fetch bytes from the buffer, the corresponding bytes of the file are read. Similarly, when we store data in the buffer, the corresponding bytes are automatically written to the file. This lets us perform I/O without using read or write.
What is mapping a file into memory? And here they say the memory is placed between the stack and the heap. After mapping a file, what type of data is present in this memory: a copy of the file, or the address of the file as it resides on the hard disk? And how does the behavior described above come about?
Can anyone explain the working mechanism of memory-mapped I/O and the mmap functionality?
Normally when you open a file, the system sets up some bookkeeping structures (metadata) but does not need to read any part of the actual data of the file. When you call read(), the system loads a chunk of the file into (virtual) memory which you allocated for the purpose.
When you memory-map a file, the system again sets up bookkeeping, and also sets up a (virtual) memory "mapping", which means a range of valid addresses which, if used, will reflect reads (or writes) of the underlying file. It does not mean the entire file needs to be read at once, because it can be "paged in" on demand, i.e. the system can give you an address range to use, then wait for you to actually use it before loading any data there. This "page faulting" is supported by a hardware device called the Memory Management Unit (MMU). The same system is used when you run an executable file: the system can simply map it into virtual memory and read pages (chunks) from disk only as needed.
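As a concrete illustration, here is a minimal sketch (the filename is a placeholder) of mapping a file read-only; note that the data is only paged in when the bytes are actually touched:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("example.txt", O_RDONLY);   /* hypothetical file */
        if (fd < 0) { perror("open"); return 1; }

        struct stat sb;
        if (fstat(fd, &sb) < 0 || sb.st_size == 0) { close(fd); return 1; }

        /* Set up the mapping: this builds page tables, but reads no data. */
        char *p = mmap(NULL, sb.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }
        close(fd);                        /* the mapping stays valid */

        /* Each dereference of an unloaded page faults it in from the file. */
        long sum = 0;
        for (off_t i = 0; i < sb.st_size; i++)
            sum += (unsigned char)p[i];
        printf("byte sum: %ld\n", sum);

        munmap(p, sb.st_size);
        return 0;
    }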
Is it in memory (a copy of the file that is on the hard disk) or not?
According to Computer Programming and Utilization, when you open a file with fopen its contents are loaded into memory (partially or wholly).
If it is not in memory, where is the data fetched from by subsequent read or write system calls?
When you fwrite some data, it is eventually copied into the kernel, which will then write it to disk (or wherever) after buffering. In general, no part of a file needs to be loaded in order to write to it.
What is mapping a file into memory?
After mapping a file, what type of data is present in this memory: a copy of the file, or the address of the file as it resides on the hard disk?
A memory-mapped file is a segment of virtual memory that has been assigned a direct byte-for-byte correlation with some portion of a file or file-like resource.
It is possible to mmap a file to a region of memory. When this is done, the file can be accessed just like an array in the program, as the sketch below shows. This is more efficient than read or write, as only the regions of the file that a program actually accesses are loaded. Accesses to not-yet-loaded parts of the mmapped region are handled in the same way as swapped-out pages.
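To make the array analogy concrete, here is a minimal sketch (the filename is hypothetical) of a shared, writable mapping; ordinary stores into the mapped bytes are carried through to the file by the kernel:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("notes.txt", O_RDWR);   /* hypothetical existing file */
        if (fd < 0) { perror("open"); return 1; }

        off_t size = lseek(fd, 0, SEEK_END);  /* find the file length */
        if (size < 5) { close(fd); return 1; }

        /* MAP_SHARED: writes to the mapping are written back to the file. */
        char *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        memcpy(p, "HELLO", 5);     /* a plain memory store updates the file */
        msync(p, size, MS_SYNC);   /* optionally force dirty pages out now */

        munmap(p, size);
        close(fd);
        return 0;
    }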
After this open, where does the file reside for processing? Is it in memory (a copy of the file that is on the hard disk) or not?
On the disk. It may also be partly or completely in memory if the operating system does a read-ahead, but that isn't detectable by you. You still have to issue reads to get data from the file.
If it is not in memory, where is the data fetched from by subsequent read or write system calls?
From the disk.
Or does it fetch data from the hard disk each time read or write is called?
In effect, but you also have to consider the effect of any caching.
Or is a copy of the file kept in memory, manipulated there by the process, and copied back to the hard disk once the process is finished?
No. The file behaves as though it is all on the disk.
And here, they say the memory is placed between the stack and the heap.
Not in what you quoted.
In this memory, what type of data is present after mapping a file?
The data in the file. The question 'what type of data' doesn't make sense. Data is data.
Does it contain a copy of the file, or the address of the file as it resides on the hard disk?
It effectively contains a copy of the file.
And how does the above scenario come about?
Via virtual memory. Too broad to cover here.

How does one write files to disk, sequentially, in C?

I want to write a program that writes data as one contiguous block of data to disk, so that when I read that data back from the disk, I can just read one long series of bytes without stopping. Are there any references I can be directed to regarding this issue?
I am essentially asking whether it is possible to write data for multiple files contiguously and read past an EOF, or several, to retrieve the data written.
I am aware of fwrite and fopen, I just want to be sure that the data being written to disk is contiguous.
It depends on what the underlying filesystem is, as this is filesystem-dependent. You'll want to look at extents; an extent is a contiguous area of storage reserved for a file.
On Windows you can open an unformatted volume with CreateFile and then WriteFile a contiguous block of data. It won't be a file, but you will be able to read it back as you stated.
According to this, NTFS tries to allocate contiguous space if possible; your chances are lower when appending, though.
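There is no portable way to demand contiguity, but on POSIX systems you can at least reserve all of the space up front, which gives the filesystem its best chance of allocating one extent. A minimal sketch using posix_fallocate; the filename and size are placeholders, and contiguity is still not guaranteed:

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("blob.dat", O_CREAT | O_WRONLY, 0644);  /* hypothetical */
        if (fd < 0) { perror("open"); return EXIT_FAILURE; }

        /* Reserve 64 MiB in one shot. Allocating everything at once,
           rather than growing the file write by write, encourages the
           filesystem to hand out a single contiguous extent. */
        int err = posix_fallocate(fd, 0, 64L * 1024 * 1024);
        if (err != 0) {
            fprintf(stderr, "posix_fallocate: error %d\n", err);
            close(fd);
            return EXIT_FAILURE;
        }

        close(fd);
        return EXIT_SUCCESS;
    }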

reading data from filesystem vs compiling the data directly into program

I have a file (10-20MB) containing data, where each line is a single piece of data.
I have a C program that reads the file from the filesystem, and then based on command line input, it reads each line of the file, does a calculation on each line to determine if that line should be returned, and then return a subset of the data.
Assume that the program does an fread and reads the entire file into memory at the beginning, and then parses it directly from memory.
Would the program execute faster if, instead of reading it from the filesystem, I compiled the data into the program directly, by creating an array such as the following?
char *dataArray[] = {"data1", "data2", "data3"....};
Since the OS needs to read the entire binary from the filesystem, my gut feeling is that the execution time of both techniques would be similar, since reading from the filesystem would be the high order bit. However, would anyone have more definitive ideas on this?
Defining everything as a program literal will certainly be faster.
You do not need the relatively slow "open" call for the data file and you don't need to move the data from the buffer to your storage.
This was a common optimization circa 1970, and every programming/coding style book since then strongly recommends against it. The actual performance increase is minimal, and what you gain in performance you lose in maintainability and flexibility.
Should you want a quick maintainable optimisation for this type of problem then look at the "mmap" call which makes the buffer directly available to your program and minimises data movement.
I doubt the difference in execution time will be significant, but from a memory utilization standpoint, putting the data in the executable (and qualifying it const appropriately) will make a big difference.
If you read 10-20 megs of data from a file into memory allocated (e.g. via malloc) in your program, the data initially exists in two places in memory: the filesystem cache, and your program's private memory. The former copy can be discarded if memory is tight, but the latter occupies physical memory or swap permanently until it's freed.
If on the other hand the 10-20 megs of data are part of your program's image (in the executable file), the data will be demand-paged, and can be discarded whenever needed because the OS knows it can reload the pages if it needs them again.
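A minimal sketch of the data-in-the-executable approach; the names mirror the question's hypothetical dataArray. The const qualification is what lets the data live in a read-only, demand-paged section that the OS can discard and reload from the program image at will:

    #include <stdio.h>

    /* Lives in a read-only section of the executable. The OS pages these
       bytes in on demand and can drop them under memory pressure, because
       it can always re-read them from the program image. */
    static const char *const dataArray[] = { "data1", "data2", "data3" };
    static const size_t dataCount = sizeof dataArray / sizeof dataArray[0];

    int main(void)
    {
        for (size_t i = 0; i < dataCount; i++)
            puts(dataArray[i]);
        return 0;
    }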

File system block size

What is the significance of the file system block size? If my filesystem block size is set at, say, 8K, does that mean that all read/write I/O will happen in units of 8K? So if my application wants to read, say, 16 bytes at offset 4097, will a 4K block starting at offset 4096 be read?
How do writes work in this case? Suppose I want to write, say, 64 bytes.
You are right. The block size is the unit of work for the file system; every read and write is done in full multiples of the block size.
The block size is also the smallest amount of disk space a file can occupy. If the block size is 16 bytes, then a file of 16 bytes occupies a full block on disk.
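As a small sketch of the arithmetic involved, stat reports the filesystem's preferred I/O size in st_blksize, and rounding an offset down to a block boundary tells you which block a read actually pulls in (the filename is a placeholder; the 4097 offset mirrors the question):

    #include <stdio.h>
    #include <sys/stat.h>

    int main(void)
    {
        struct stat sb;
        if (stat("somefile", &sb) != 0) {   /* hypothetical file */
            perror("stat");
            return 1;
        }

        long bs = sb.st_blksize;            /* preferred I/O block size */
        long offset = 4097;                 /* the offset from the question */
        long block_start = (offset / bs) * bs;  /* round down to a boundary */

        printf("block size %ld: a read at offset %ld pulls in the block "
               "starting at offset %ld\n", bs, offset, block_start);
        return 0;
    }

With a 4K block size this prints 4096, matching the scenario in the question.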
The book "Practical file system design" states:
Block: The smallest unit writable by a disk or file system. Everything a
file system does is composed of operations done on blocks. A file system
block is always the same size as or larger (in integer multiples) than the
disk block size.
Normally when you deal with files in a program you should use the stream abstraction.
I/O operations in code are usually reads and writes on streams; reading from and writing to a stream can be buffered, so that a file is read or written in chunks.
The block size on a filesystem refers to how the disk surface is mapped: the smaller the block, the greater the number of blocks (and so the more entries in the table that records how files are allocated).
This lets the OS map a file onto the disk in discrete units based on the block size and keep a smaller "map of files".
As far as I know, this does not affect the stream abstraction in programming language APIs.

When should I use mmap for file access?

POSIX environments provide at least two ways of accessing files. There's the standard system calls open(), read(), write(), and friends, but there's also the option of using mmap() to map the file into virtual memory.
When is it preferable to use one over the other? What're their individual advantages that merit including two interfaces?
mmap is great if you have multiple processes accessing data in a read only fashion from the same file, which is common in the kind of server systems I write. mmap allows all those processes to share the same physical memory pages, saving a lot of memory.
mmap also allows the operating system to optimize paging operations. For example, consider two programs; program A which reads a 1MB file into a buffer created with malloc, and program B which mmaps the 1MB file into memory. If the operating system has to swap part of A's memory out, it must write the contents of the buffer to swap before it can reuse the memory. In B's case any unmodified mmap'd pages can be reused immediately because the OS knows how to restore them from the existing file they were mmap'd from. (The OS can detect which pages are unmodified by initially marking writable mmap'd pages as read only and catching seg faults, similar to a Copy on Write strategy.)
mmap is also useful for inter process communication. You can mmap a file as read / write in the processes that need to communicate and then use synchronization primitives in the mmap'd region (this is what the MAP_HASSEMAPHORE flag is for).
One place mmap can be awkward is if you need to work with very large files on a 32 bit machine. This is because mmap has to find a contiguous block of addresses in your process's address space that is large enough to fit the entire range of the file being mapped. This can become a problem if your address space becomes fragmented, where you might have 2 GB of address space free, but no individual range of it can fit a 1 GB file mapping. In this case you may have to map the file in smaller chunks than you would like to make it fit.
Another potential awkwardness with mmap as a replacement for read / write is that you have to start your mapping on offsets of the page size. If you just want to get some data at offset X, you will need to fix up that offset so it's compatible with mmap; a sketch of that fixup follows this answer.
And finally, read / write are the only way you can work with some types of files. mmap can't be used on things like pipes and ttys.
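Here is a hedged sketch of that offset fixup, assuming you want some bytes starting at an arbitrary file offset X (the filename and constants are made up, and the file is assumed to be long enough):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("big.dat", O_RDONLY);   /* hypothetical large file */
        if (fd < 0) { perror("open"); return 1; }

        off_t X = 123456;                     /* desired file offset */
        size_t length = 4000;                 /* bytes we actually want */

        long page = sysconf(_SC_PAGESIZE);
        off_t map_off = (X / page) * page;    /* round down to a page boundary */
        size_t delta = (size_t)(X - map_off); /* distance from boundary to X */

        /* Map from the page boundary, then index past the slack. */
        char *base = mmap(NULL, length + delta, PROT_READ, MAP_PRIVATE,
                          fd, map_off);
        if (base == MAP_FAILED) { perror("mmap"); return 1; }

        char *data = base + delta;            /* points at file offset X */
        printf("byte at offset %lld: 0x%02x\n",
               (long long)X, (unsigned char)data[0]);

        munmap(base, length + delta);
        close(fd);
        return 0;
    }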
One area where I found mmap() to not be an advantage was when reading small files (under 16K). The overhead of page faulting to read the whole file was very high compared with just doing a single read() system call. This is because the kernel can sometimes satisfy a read entirely in your time slice, meaning your code doesn't switch away. With a page fault, it seemed more likely that another program would be scheduled, making the file operation have a higher latency.
mmap has the advantage when you have random access on big files. Another advantage is that you access it with memory operations (memcpy, pointer arithmetic), without bothering with the buffering. Normal I/O can sometimes be quite difficult when using buffers when you have structures bigger than your buffer. The code to handle that is often difficult to get right, mmap is generally easier. This said, there are certain traps when working with mmap.
As people have already mentioned, mmap is quite costly to set up, so it is only worth using above a certain size (which varies from machine to machine).
For pure sequential accesses to the file, it is also not always the better solution, though an appropriate call to madvise can mitigate the problem.
You have to be careful with the alignment restrictions of your architecture (SPARC, Itanium); with read/write I/O the buffers are often properly aligned and do not trap when dereferencing a casted pointer.
You also have to be careful that you do not access outside of the map. It can easily happen if you use string functions on your map, and your file does not contain a \0 at the end. It will work most of the time when your file size is not a multiple of the page size as the last page is filled with 0 (the mapped area is always in the size of a multiple of your page size).
In addition to the other nice answers, a quote from Linux System Programming, written by Google's expert Robert Love:
Advantages of mmap()
Manipulating files via mmap() has a handful of advantages over the standard read() and write() system calls. Among them are:
Reading from and writing to a memory-mapped file avoids the extraneous copy that occurs when using the read() or write() system calls, where the data must be copied to and from a user-space buffer.
Aside from any potential page faults, reading from and writing to a memory-mapped file does not incur any system call or context switch overhead. It is as simple as accessing memory.
When multiple processes map the same object into memory, the data is shared among all the processes. Read-only and shared writable mappings are shared in their entirety; private writable mappings have their not-yet-COW (copy-on-write) pages shared.
Seeking around the mapping involves trivial pointer manipulations. There is no need for the lseek() system call.
For these reasons, mmap() is a smart choice for many applications.
Disadvantages of mmap()
There are a few points to keep in mind when using mmap():
Memory mappings are always an integer number of pages in size. Thus, the difference between the size of the backing file and an integer number of pages is "wasted" as slack space. For small files, a significant percentage of the mapping may be wasted. For example, with 4 KB pages, a 7 byte mapping wastes 4,089 bytes.
The memory mappings must fit into the process's address space. With a 32-bit address space, a very large number of various-sized mappings can result in fragmentation of the address space, making it hard to find large free contiguous regions. This problem, of course, is much less apparent with a 64-bit address space.
There is overhead in creating and maintaining the memory mappings and associated data structures inside the kernel. This overhead is generally obviated by the elimination of the double copy mentioned in the previous section, particularly for larger and frequently accessed files.
For these reasons, the benefits of mmap() are most greatly realized when the mapped file is large (and thus any wasted space is a small percentage of the total mapping), or when the total size of the mapped file is evenly divisible by the page size (and thus there is no wasted space).
Memory mapping has the potential for a huge speed advantage compared to traditional I/O. It lets the operating system read the data from the source file as the pages in the memory-mapped file are touched. This works by creating faulting pages, which the OS detects, whereupon it loads the corresponding data from the file automatically.
This works the same way as the paging mechanism and is usually optimized for high-speed I/O by reading data on system page boundaries and sizes (usually 4K), a size to which most file system caches are optimized.
An advantage that isn't listed yet is the ability of mmap() to keep a read-only mapping as clean pages. If one allocates a buffer in the process's address space, then uses read() to fill the buffer from a file, the memory pages corresponding to that buffer are now dirty since they have been written to.
Dirty pages cannot be dropped from RAM by the kernel. If there is swap space, then they can be paged out to swap, but this is costly, and on some systems, such as small embedded devices with only flash memory, there is no swap at all. In that case, the buffer will be stuck in RAM until the process exits, or perhaps gives it back with madvise().
Pages of an mmap() mapping that have not been written to are clean. If the kernel needs RAM, it can simply drop them and reuse the RAM the pages were in. If the process that had the mapping accesses it again, it causes a page fault and the kernel reloads the pages from the file they came from originally, the same way they were populated in the first place.
This doesn't require more than one process using the mapped file to be an advantage.
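A hedged sketch of the madvise() hand-back mentioned above; MADV_DONTNEED tells the kernel it may reclaim the pages now, and for a clean file-backed mapping they will simply be re-read from the file on the next access (the filename is a placeholder):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("cache.dat", O_RDONLY);   /* hypothetical file */
        if (fd < 0) { perror("open"); return 1; }

        struct stat sb;
        if (fstat(fd, &sb) < 0 || sb.st_size == 0) { close(fd); return 1; }

        char *p = mmap(NULL, sb.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }
        close(fd);

        volatile char c = p[0];   /* fault the first page in */
        (void)c;

        /* Done with the data for now: let the kernel reclaim the clean
           pages. They are paged back in from the file if touched again. */
        if (madvise(p, sb.st_size, MADV_DONTNEED) != 0)
            perror("madvise");

        munmap(p, sb.st_size);
        return 0;
    }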
