Is Linux kernel splice() zero copy? - c

I know splice() is designed for zero copy and uses the Linux kernel pipe buffer to achieve it. For example, if I want to copy data from one file descriptor (fp1) to another file descriptor (fp2), the data does not need to travel "kernel space -> user space -> kernel space". Instead it is only moved within kernel space, and the flow looks like "fp1 -> pipe -> fp2". My question is: does the kernel still need to copy data on the "fp1 -> pipe" leg and on the "pipe -> fp2" leg?
Wikipedia says:
Ideally, splice and vmsplice work by remapping pages and do not actually copy any data,
which may improve I/O performance. As linear addresses do not necessarily correspond to
contiguous physical addresses, this may not be possible in all cases and on all hardware
combinations.
I have already traced the kernel source (3.12) for my question. For the "fp1 -> pipe" leg, the code eventually calls kernel_readv() in fs/splice.c, which calls do_readv_writev() and finally aio_write():
static ssize_t kernel_readv(struct file *file, const struct iovec *vec,
                            unsigned long vlen, loff_t offset)
/* *vec points to the struct page entries that belong to the pipe */
For the "pipe -> fp2" leg, the code ends up calling __kernel_write(), which then calls fp2->f_op->write():
ssize_t __kernel_write(struct file *file, const char *buf, size_t count, loff_t *pos)
/* *buf is the pipe buffer */
I thought both aio_write() and file->f_op->write() perform a real data copy, so does splice() really do zero copy?

As I understand splice(), it reads the pages backing fd1 and the MMU maps them. The reference created by the mapping is put into the pipe and handed over to fd2.
No real data should be copied in the process, as long as every participant has DMA available.
If no DMA is available you need to copy data.

splice most probably works zero-copy (there is no hard guarantee for that, but it almost certainly works that way for any reasonably recent hardware). Strictly following the docs, you would need to call it with SPLICE_F_MOVE so no actual copies are made, but I don't see how it would need to make one either way as long as there's DMA support (which is a rather fair assumption).
The same is not necessarily true with vmsplice involved since it (or a successive splice) only works zero-copy if the SPLICE_F_GIFT flag is provided (and in this case, I can see how it would not work otherwise, since the "source descriptor" is main memory) but this flag is broken in some and unsupported in other Linux versions, and badly documented on top.
For example, it is not clear what to do with the memory afterwards. The documentation used to say that you are not allowed to touch the gifted memory ever after, this was recently slightly reworded, but it isn't less ambiguous. It remains unclear what is to become of the memory region. Following the documentation, you would have to leak the memory. There seems to be no notification mechanism that tells you when it is safe to free the memory or reuse it.
aio_write is the userland (Glibc) implementation of asynchronous I/O which uses threads and the write syscall. This normally performs at least one copy from user space to kernel space.
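To make the fd1 -> pipe -> fd2 flow concrete, here is a minimal sketch using splice(2) with SPLICE_F_MOVE; the helper name splice_copy and the reduced error handling are mine, not from the question or the kernel:

/*
 * Minimal sketch (not the poster's code): copy data from one descriptor to
 * another through a pipe with splice(2). Assumes in_fd and out_fd are
 * already open; error handling is reduced to the essentials.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

ssize_t splice_copy(int in_fd, int out_fd, size_t len)
{
    int pipefd[2];
    ssize_t total = 0;

    if (pipe(pipefd) == -1)
        return -1;

    while (len > 0) {
        /* Source -> pipe: ideally page references, not a byte-by-byte copy. */
        ssize_t n = splice(in_fd, NULL, pipefd[1], NULL, len,
                           SPLICE_F_MOVE | SPLICE_F_MORE);
        if (n <= 0)
            break;

        /* Pipe -> destination: loop, because splice may write less than n. */
        ssize_t left = n;
        while (left > 0) {
            ssize_t m = splice(pipefd[0], NULL, out_fd, NULL, left,
                               SPLICE_F_MOVE | SPLICE_F_MORE);
            if (m <= 0)
                goto out;
            left -= m;
            total += m;
        }
        len -= (size_t)n;
    }
out:
    close(pipefd[0]);
    close(pipefd[1]);
    return total;
}

Whether the kernel actually moves page references or falls back to copying on either leg depends on the descriptors and kernel version involved, as discussed above.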

Related

Where and why do read(2) and write(2) system calls copy to and from userspace?

I was reading about sendfile(2) recently, and the man page states:
sendfile() copies data between one file descriptor and another.
Because this copying is done within the kernel, sendfile() is more
efficient than the combination of read(2) and write(2), which would
require transferring data to and from user space.
It got me thinking, why exactly is the combination of read()/write() slower? The man page focuses on extra copying that has to happen to and from userspace, not the total number of calls required. I took a short look at the kernel code for read and write but didn't see the copy.
Why does the copy exist in the first place? Couldn't the kernel just read from the passed buffer on a write() without first copying the whole thing into kernel space?
What about asynchronous IO interfaces like AIO and io_uring? Do they also copy?
why exactly is the combination of read()/write() slower?
The manual page is quite clear about this. Doing read() and then write() requires copying the data twice.
Why does the copy exist in the first place?
It should be quite obvious: since you invoke read, you want the data to be copied to the memory of your process, in the specified destination buffer. Same goes for write: you want the data to be copied from the memory of your process. The kernel doesn't really know that you just want to do a read + write, and that copying back and forth two times could be avoided.
When executing read, the data is copied by the kernel from the file descriptor to the process memory. When executing write the data is copied by the kernel from the process memory to the file descriptor.
Couldn't the kernel just read from the passed buffer on a write() without first copying the whole thing into kernel space?
The crucial point here is that when you read or write a file, the file has to be mapped from disk to memory by the kernel in order for it to be read or written. This is called memory-mapped file I/O, and it's a huge factor in the performance of modern operating systems.
The file content is already present in kernel memory, mapped as a memory page (or more). In case of a read, the data needs to be copied from that file kernel memory page to the process memory, while in case of a write, the data needs to be copied from the process memory to the file kernel memory page. The kernel will then ensure that the data in the kernel memory page(s) corresponding to the file is correctly written back to disk when needed (if needed at all).
This "intermediate" kernel mapping can be avoided, and the file mapped directly into userspace memory, but then the application would have to manage it manually, which is complicated and easy to mess up. This is why, for normal file operations, files are mapped into kernel memory. The kernel provides high level APIs for userspace programs to interact with them, and the hard work is left to the kernel itself.
The sendfile syscall is much faster because you do not need to perform the copy two times, but only once. Assuming that you want to do a sendfile of file A to file B, then all the kernel needs to do is to copy the data from A to B. However, in the case of read + write, the kernel needs to first copy from A to your process, and then from your process to B. This double copy is of course slower, and if you don't really need to read or manipulate the data, then it's a complete waste of time.
FYI, sendfile itself is basically an easy-to-use wrapper around splice (as can be seen from the source code), which is a more generic syscall to perform zero-copy data transfer between file descriptors.
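To illustrate the difference in copies, here is a hedged sketch of both approaches; copy_read_write and copy_sendfile are invented names and the error handling is minimal:

/*
 * Sketch only: copy src_fd to dst_fd, first with read()+write() (kernel ->
 * user buffer -> kernel, i.e. two copies), then with sendfile() (one
 * in-kernel copy).
 */
#include <sys/sendfile.h>
#include <unistd.h>

ssize_t copy_read_write(int src_fd, int dst_fd, size_t len)
{
    char buf[65536];
    ssize_t total = 0;
    while (len > 0) {
        ssize_t n = read(src_fd, buf, len < sizeof buf ? len : sizeof buf);
        if (n <= 0)
            break;                               /* EOF or error */
        ssize_t off = 0;
        while (off < n) {                        /* second copy: user -> kernel */
            ssize_t m = write(dst_fd, buf + off, (size_t)(n - off));
            if (m <= 0)
                return -1;
            off += m;
        }
        total += n;
        len -= (size_t)n;
    }
    return total;
}

ssize_t copy_sendfile(int src_fd, int dst_fd, size_t len)
{
    ssize_t total = 0;
    while (len > 0) {
        ssize_t n = sendfile(dst_fd, src_fd, NULL, len);   /* data stays in the kernel */
        if (n <= 0)
            break;
        total += n;
        len -= (size_t)n;
    }
    return total;
}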
I took a short look at the kernel code for read and write but didn't see the copy.
In terms of kernel code, the whole process for reading a file is very complicated, but what the kernel ends up doing is a "special" version of memcpy(), called copy_to_user(), which copies the content of the file from the kernel memory to the userspace memory (doing the appropriate checks before performing the actual copy). More specifically, for files, the copyout() function is used, but the behavior is very similar, both end up calling raw_copy_to_user() (which is architecture-dependent).
What about asynchronous IO interfaces like AIO and io_uring? Do they also copy?
The aio_{read,write} libc functions defined by POSIX are just asynchronous wrappers around read and write (i.e. they still use read and write under the hood). These still copy data to/from userspace.
io_uring can provide zero-copy operations when using the O_DIRECT flag of open (see the manual page):
O_DIRECT (since Linux 2.4.10)
Try to minimize cache effects of the I/O to and from this
file. In general this will degrade performance, but it is
useful in special situations, such as when applications do
their own caching. File I/O is done directly to/from user-
space buffers. The O_DIRECT flag on its own makes an effort
to transfer data synchronously, but does not give the
guarantees of the O_SYNC flag that data and necessary metadata
are transferred. To guarantee synchronous I/O, O_SYNC must be
used in addition to O_DIRECT. See NOTES below for further
discussion.
This should be done carefully though, as it could very well degrade performance in case the userspace application does not do the appropriate caching on its own (if needed).
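For reference, here is a hedged sketch of a plain O_DIRECT write (not io_uring specific); the 4096-byte alignment and the helper name write_direct are assumptions for the example:

/*
 * Sketch only: a plain write with O_DIRECT, bypassing the page cache.
 * O_DIRECT typically requires the buffer, file offset and transfer length
 * to be aligned to the device's logical block size; 4096 bytes is an
 * assumption here, and the file ends up padded to that boundary.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int write_direct(const char *path, const char *data, size_t len)
{
    const size_t align = 4096;          /* assumption; query the device in real code */
    size_t padded = (len + align - 1) & ~(align - 1);
    void *buf = NULL;
    int fd, rc = -1;

    if (posix_memalign(&buf, align, padded) != 0)
        return -1;
    memset(buf, 0, padded);
    memcpy(buf, data, len);

    fd = open(path, O_WRONLY | O_CREAT | O_TRUNC | O_DIRECT, 0644);
    if (fd >= 0) {
        if (write(fd, buf, padded) == (ssize_t)padded)
            rc = 0;
        close(fd);
    }
    free(buf);
    return rc;
}

The alignment requirement is why posix_memalign() and the padding are needed; an unaligned buffer or length usually makes an O_DIRECT write fail with EINVAL.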
See also this related detailed answer on asynchronous I/O, and this LWN article on io_uring.

Is there a portable way to discard a number of readable bytes from a socket-like file descriptor?

Is there a portable way to discard a number of incoming bytes from a socket without copying them to userspace? On a regular file, I could use lseek(), but on a socket, it's not possible. I have two scenarios where I might need it:
A stream of records is arriving on a file descriptor (which can be a TCP socket, a SOCK_STREAM UNIX domain socket, or potentially a pipe). Each record is preceded by a fixed-size header specifying its type and length, followed by data of variable length. I want to read the header first and, if it's not of a type I'm interested in, discard the following data segment without transferring it into user space into a dummy buffer.
A stream of records of varying and unpredictable length is arriving on a file descriptor. Due to the asynchronous nature, the records may still be incomplete when the fd becomes readable, or they may be complete but a piece of the next record may already be there when I try to read a fixed number of bytes into a buffer. I want to stop reading the fd at the exact boundary between records so I don't need to manage partially loaded records I accidentally read from the fd. So I use recv() with the MSG_PEEK flag to read into a buffer, parse the record to determine its completeness and length, and then read again properly (thus actually removing data from the socket) up to the exact length. This copies the data twice - I want to avoid that by simply discarding the exact amount of data buffered in the socket.
On Linux, I gather it is possible to achieve that by using splice() and redirecting the data to /dev/null without copying them to userspace. However, splice() is Linux-only, and the similar sendfile() that is supported on more platforms can't use a socket as input. My questions are:
Is there a portable way to achieve this? Something that would work on other UNIXes (primarily Solaris) as well that do not have splice()?
Is splice()-ing into /dev/null an efficient way to do this on Linux, or would it be a waste of effort?
Ideally, I would love to have a ssize_t discard(int fd, size_t count) that simply removes count readable bytes from a file descriptor fd in the kernel (i.e. without copying anything to userspace), blocks on a blocking fd until the requested number of bytes is discarded, or returns the number of successfully discarded bytes or EAGAIN on a non-blocking fd, just like read() would do. And advances the seek position on a regular file, of course :)
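For concreteness, here is a hedged sketch of the Linux-only splice()-into-/dev/null idea referred to above; whether it is worth doing at all is what the answer below addresses:

/*
 * Linux-only sketch: discard up to count bytes from sockfd by splicing them
 * into /dev/null through a pipe, so nothing is copied to userspace. Error
 * handling is minimal; EOF or EAGAIN simply end the loop early.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

ssize_t discard(int sockfd, size_t count)
{
    int pipefd[2];
    ssize_t total = 0;
    int devnull = open("/dev/null", O_WRONLY);

    if (devnull == -1)
        return -1;
    if (pipe(pipefd) == -1) {
        close(devnull);
        return -1;
    }

    while ((size_t)total < count) {
        ssize_t n = splice(sockfd, NULL, pipefd[1], NULL,
                           count - (size_t)total, SPLICE_F_MOVE);
        if (n <= 0)
            break;
        ssize_t left = n;
        while (left > 0) {               /* drain the pipe into /dev/null */
            ssize_t m = splice(pipefd[0], NULL, devnull, NULL,
                               (size_t)left, SPLICE_F_MOVE);
            if (m <= 0)
                goto out;
            left -= m;
        }
        total += n;
    }
out:
    close(pipefd[0]);
    close(pipefd[1]);
    close(devnull);
    return total;
}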
The short answer is No, there is no portable way to do that.
The sendfile() approach is Linux-specific, because on most other OSes implementing it, the source must be a file or a shared memory object. (I haven't even checked if/in which Linux kernel versions, sendfile() from a socket descriptor to /dev/null is supported. I would be very suspicious of code that does that, to be honest.)
Looking at e.g. Linux kernel sources, and considering how little a ssize_t discard(fd, len) differs from a standard ssize_t read(fd, buf, len), it is obviously possible to add such support. One could even add it via an ioctl (say, SIOCISKIP) for easy support detection.
However, the problem is that you have designed an inefficient approach, and rather than fix the approach at the algorithmic level, you are looking for crutches that would make your approach perform better.
You see, it is very hard to show a case where the "extra copy" (from kernel buffers to userspace buffers) is an actual performance bottleneck. The number of syscalls (context switches between userspace and kernel space) sometimes is. If you sent a patch upstream implementing e.g. ioctl(socketfd, SIOCISKIP, bytes) for TCP and/or Unix domain stream sockets, they would point out that the performance increase this hopes to achieve is better obtained by not trying to obtain the data you don't need in the first place. (In other words, the way you are trying to do things, is inherently inefficient, and rather than create crutches to make that approach work better, you should just choose a better-performing approach.)
In your first case, a process receiving structured data framed by a type and length identifier, wishing to skip unneeded frames, is better fixed by fixing the transfer protocol. For example, the receiving side could inform the sending side which frames it is interested in (i.e., basic filtering approach). If you are stuck with a stupid protocol that you cannot replace for external reasons, you're on your own. (The FLOSS developer community is not, and should not be responsible for maintaining stupid decisions just because someone wails about it. Anyone is free to do so, but they'd need to do it in a manner that does not require others to work extra too.)
In your second case, you already read your data. Don't do that. Instead, use a userspace buffer large enough to hold two full-size frames. Whenever you need more data, but the start of the frame is already past the midway point of the buffer, memmove() the frame to the beginning of the buffer first.
When you have a partially read frame, and you have N unread bytes from that left that you are not interested in, read them into the unused portion of the buffer. There is always enough room, because you can overwrite the portion already used by the current frame, and its beginning is always within the first half of the buffer.
If the frames are small, say 65536 bytes maximum, you should use a tunable for the maximum buffer size. On most desktop and server machines, with high-bandwidth stream sockets, something like 2 MiB (2097152 bytes or more) is much more reasonable. It's not too much memory wasted, but you rarely do any memory copies (and when you do, they tend to be short). (You can even optimize the memory moves so that only full cachelines are copied, aligned, since leaving almost one cacheline of garbage at the start of the buffer is insignificant.)
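Here is a hedged sketch of that buffering scheme; the 8-byte header layout (4-byte type, 4-byte length, host byte order), MAX_FRAME and next_frame() are all inventions for the example, not part of the answer:

/*
 * Sketch of the scheme described above: one buffer at least two maximum
 * frames long, moved with memmove() only when the current frame starts
 * past the midpoint.
 */
#include <stdint.h>
#include <string.h>
#include <unistd.h>

#define MAX_FRAME 65536u                  /* maximum frame size incl. header (tunable) */
#define BUF_SIZE  (2u * MAX_FRAME)

static unsigned char buf[BUF_SIZE];
static size_t head = 0;                   /* start of the current frame */
static size_t tail = 0;                   /* end of valid data */

/* Returns 1 with *type/*payload/*len set when a full frame is available,
 * 0 on EOF, -1 on error or oversized frame. */
int next_frame(int fd, uint32_t *type, unsigned char **payload, uint32_t *len)
{
    for (;;) {
        if (tail - head >= 8) {
            uint32_t t, l;
            memcpy(&t, buf + head, 4);
            memcpy(&l, buf + head + 4, 4);
            if (l > MAX_FRAME - 8)
                return -1;                         /* larger than the scheme allows */
            if (tail - head >= 8 + (size_t)l) {    /* complete frame buffered */
                *type = t;
                *len = l;
                *payload = buf + head + 8;
                head += 8 + l;                     /* "skipping" a frame is just this */
                return 1;
            }
        }
        if (head >= BUF_SIZE / 2) {                /* keep room for the rest of the frame */
            memmove(buf, buf + head, tail - head);
            tail -= head;
            head = 0;
        }
        ssize_t n = read(fd, buf + tail, BUF_SIZE - tail);
        if (n == 0)
            return 0;
        if (n < 0)
            return -1;
        tail += (size_t)n;
    }
}

Skipping an uninteresting frame is then just ignoring the returned payload; the bytes are already in the buffer and cost essentially nothing beyond the read() that fetched them.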
I do HPC with large datasets (including text-form molecular data, where records are separated by newlines, and custom parsers for converting decimal integers or floating-point values are used for better performance), and this approach does work well in practice. Simply put, skipping data already in your buffer is not something you need to optimize; it is insignificant overhead compared to simply avoiding doing the things you do not need.
There is also the question of what you wish to optimize by doing that: the CPU time/resources used, or the wall clock used in the overall task. They are completely different things.
For example, if you need to sort a large number of text lines from some file, you use the least CPU time if you simply read the entire dataset to memory, construct an array of pointers to each line, sort the pointers, and finally write each line (using either internal buffering and/or POSIX writev() so that you do not need to do a write() syscall for each separate line).
However, if you wish to minimize the wall clock time used, you can use a binary heap or a balanced binary tree instead of an array of pointers, and heapify or insert-in-order each line completely read, so that when the last line is finally read, you already have the lines in their correct order. This is because the storage I/O (for all but pathological input cases, something like single-character lines) takes longer than sorting them using any robust sorting algorithm! The sorting algorithms that work inline (as data comes in) are typically not as CPU-efficient as those that work offline (on complete datasets), so this ends up using somewhat more CPU time; but because the CPU work is done at a time that is otherwise wasted waiting for the entire dataset to load into memory, it is completed in less wall clock time!
If there is need and interest, I can provide a practical example to illustrate the techniques. However, there is absolutely no magic involved, and any C programmer should be able to implement these (both the buffering scheme and the sort scheme) on their own. (I do consider using resources like Linux man pages online, Wikipedia articles, and pseudocode on, for example, binary heaps to be doing it "on your own". As long as you do not just copy-paste existing code, I consider it doing it "on your own", even if somebody or some resource helps you find the good, robust ways to do it.)

Can I adapt a function that writes to disk to write to memory

I have third-party library with a function that does some computation on the specified data, and writes the results to a file specified by file name:
int manipulateAndWrite(const char *filename,
                       const FOO_DATA *data);
I cannot change this function, or reimplement the computation in my own function, because I do not have the source.
To get the results, I currently need to read them from the file. I would prefer to avoid the write to and read from the file, and obtain the results into a memory buffer instead.
Can I pass a filepath that indicates writing to memory instead of a filesystem?
Yes, you have several options, although only the first suggestion below is supported by POSIX. The rest of them are OS-specific, and may not be portable across all POSIX systems, although I do believe they work on all POSIXy systems.
You can use a named pipe (FIFO), and have a helper thread read from it concurrently to the writer function.
Because there is no file per se, the overhead is just the syscalls (write and read); basically just the overhead of interprocess communication, nothing to worry about. To conserve resources, do create the helper thread with a small stack (using the pthread_attr_* functions), as the default stack size tends to be huge (on the order of several megabytes; 2*PTHREAD_STACK_MIN should be plenty for helper threads).
You should ensure the named pipe is in a safe directory, accessible only to the user running the process, for example.
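A hedged sketch of this approach follows. The FIFO path, struct sink and capture_results() are inventions for the example; it assumes the third-party header declaring FOO_DATA and manipulateAndWrite() has been included, and most error handling is omitted:

/*
 * Sketch of the FIFO approach: create a private FIFO, start a helper thread
 * that drains it into a memory buffer, then hand the FIFO path to the
 * library function. Put the FIFO in a directory only the user can access,
 * as noted above.
 */
#include <fcntl.h>
#include <pthread.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

#define FIFO_PATH "/tmp/myapp-result.fifo"   /* example path only */

struct sink { char *data; size_t size; };

static void *drain_fifo(void *arg)
{
    struct sink *s = arg;
    char chunk[4096];
    ssize_t n;
    int fd = open(FIFO_PATH, O_RDONLY);      /* blocks until the library opens it for writing */

    if (fd < 0)
        return NULL;
    while ((n = read(fd, chunk, sizeof chunk)) > 0) {
        char *p = realloc(s->data, s->size + (size_t)n);
        if (!p)
            break;
        s->data = p;
        memcpy(s->data + s->size, chunk, (size_t)n);
        s->size += (size_t)n;
    }
    close(fd);
    return NULL;
}

int capture_results(const FOO_DATA *data, struct sink *out)
{
    pthread_t tid;

    out->data = NULL;
    out->size = 0;
    unlink(FIFO_PATH);
    if (mkfifo(FIFO_PATH, 0600) == -1)
        return -1;
    pthread_create(&tid, NULL, drain_fifo, out);

    int rc = manipulateAndWrite(FIFO_PATH, data);

    pthread_join(tid, NULL);                 /* reader sees EOF when the writer closes */
    unlink(FIFO_PATH);
    return rc;
}

Caveat of the sketch: if the library fails without ever opening the FIFO, the helper thread stays blocked in open() and the join hangs; real code needs to guard against that (for example by opening the read end with O_NONBLOCK before calling the library).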
In many POSIXy systems, you can create a pipe or a socket pair, and access it via /dev/fd/N, where N is the descriptor number in decimal. (In Linux, /proc/self/fd/N also works.) This is not mandated by POSIX, so may not be available on all systems, but most do support it.
This way, there is no actual file per se, and the function writes to the pipe or socket. If the data written by the function is at most PIPE_BUF bytes, you can simply read the data from the pipe afterwards; otherwise, you do need to create a helper thread to read from the pipe or socket concurrently to the function, or the write will block.
In this case, too, the overhead is minimal.
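A hedged sketch of the /dev/fd variant, again with invented names and the same assumption about the third-party header; this version has no helper thread, so it only works if the output fits in the pipe's capacity:

/*
 * Sketch: create a pipe, pass "/dev/fd/<write end>" to the library
 * function, and read the result from the read end afterwards. If the
 * output exceeds the pipe capacity, the library blocks in write() and
 * you need a concurrent reader as in the FIFO example above.
 */
#include <stdio.h>
#include <unistd.h>

int capture_via_devfd(const FOO_DATA *data, char *out, size_t outsize, size_t *outlen)
{
    int fds[2];
    char path[64];

    if (pipe(fds) == -1)
        return -1;

    snprintf(path, sizeof path, "/dev/fd/%d", fds[1]);
    int rc = manipulateAndWrite(path, data);

    close(fds[1]);                           /* so the read loop below sees EOF */
    size_t total = 0;
    ssize_t n;
    while (total < outsize &&
           (n = read(fds[0], out + total, outsize - total)) > 0)
        total += (size_t)n;
    close(fds[0]);
    *outlen = total;
    return rc;
}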
On ELF-based POSIXy systems (basically all), you can interpose the open(), write(), and close() syscalls or C library functions.
(In Linux, there are two basic approaches, one using the linker --wrap, and one using dlsym(). Both work fine for this particular case. This ability to interpose functions is based on how ELF binaries are linked at run time, and is not directly related to POSIX.)
You first set up the interposing functions, so that open() detects if the filename matches your special "in-memory" file, and returns a dedicated descriptor number for it. (You may also need to interpose other functions, like ftruncate() or lseek(), depending on what the function actually does; in Linux, you can run a binary under ptrace to examine what syscalls it actually uses.)
When write() is called with the dedicated descriptor number, you simply memcpy() it to a memory buffer. You'll need to use global variables to describe the allocated size, size used, and the pointer to the memory buffer, and probably be prepared to resize/grow the buffer if necessary.
When close() is called with the dedicated descriptor number, you know the memory buffer is complete, and the contents ready for processing.
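A very reduced, hedged sketch of the dlsym(RTLD_NEXT) variant; the magic path, the fixed descriptor number and the growth policy are all inventions, and a real version would interpose more functions (at least close(), plus whatever else tracing the library reveals):

/*
 * Sketch only: map an invented "magic" path to a growable memory buffer.
 * Build as a shared object and load it with LD_PRELOAD, or link it into
 * the program. This only catches the library's own open()/write() calls;
 * if it uses stdio internally, you would have to interpose fopen()/fwrite()
 * instead.
 */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>
#include <stdarg.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define MAGIC_PATH "/in-memory/output"   /* invented name for this example */
#define MAGIC_FD   4242                  /* assumed not to collide with real fds */

static char  *mem_buf;
static size_t mem_used, mem_cap;

int open(const char *path, int flags, ...)
{
    mode_t mode = 0;
    if (flags & O_CREAT) {               /* only then is a mode argument present */
        va_list ap;
        va_start(ap, flags);
        mode = va_arg(ap, mode_t);
        va_end(ap);
    }
    if (strcmp(path, MAGIC_PATH) == 0)
        return MAGIC_FD;                 /* pretend the file was opened */
    int (*real_open)(const char *, int, mode_t) =
        (int (*)(const char *, int, mode_t))dlsym(RTLD_NEXT, "open");
    return real_open(path, flags, mode);
}

ssize_t write(int fd, const void *buf, size_t count)
{
    if (fd != MAGIC_FD) {
        ssize_t (*real_write)(int, const void *, size_t) =
            (ssize_t (*)(int, const void *, size_t))dlsym(RTLD_NEXT, "write");
        return real_write(fd, buf, count);
    }
    if (mem_used + count > mem_cap) {    /* grow the in-memory buffer */
        size_t cap = (mem_used + count) * 2;
        char *p = realloc(mem_buf, cap);
        if (!p)
            return -1;
        mem_buf = p;
        mem_cap = cap;
    }
    memcpy(mem_buf + mem_used, buf, count);
    mem_used += count;
    return (ssize_t)count;
}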
You can use a temporary file on a RAM filesystem. While the data is technically written to a file and read back from it, the operations involve RAM only.
You should arrange for a default path to one to be set at compile time, and for individual users to be able to override that for their personal needs, for example via an environment variable (YOURAPP_TMPDIR?).
There is no need for the application to try and look for a RAM-based filesystem: choices like this are, and should be, up to the user. The application should not even care what kind of filesystem the file is on, and should just use the specified directory.
One option is to not use that library function at all. Take a look at this on how to write to in-memory files:
Is it possible to create a C FILE object to read/write in memory

What does opening a file actually do?

In all programming languages (that I use at least), you must open a file before you can read or write to it.
But what does this open operation actually do?
Manual pages for typical functions don't actually tell you anything other than that it 'opens a file for reading/writing':
http://www.cplusplus.com/reference/cstdio/fopen/
https://docs.python.org/3/library/functions.html#open
Obviously, through usage of the function you can tell it involves creation of some kind of object which facilitates accessing a file.
Another way of putting this would be, if I were to implement an open function, what would it need to do on Linux?
In almost every high-level language, the function that opens a file is a wrapper around the corresponding kernel system call. It may do other fancy stuff as well, but in contemporary operating systems, opening a file must always go through the kernel.
This is why the arguments of the fopen library function, or Python's open closely resemble the arguments of the open(2) system call.
In addition to opening the file, these functions usually set up a buffer that will subsequently be used with the read/write operations. The purpose of this buffer is to ensure that whenever you want to read N bytes, the corresponding library call will return N bytes, regardless of whether the calls to the underlying system calls return less.
I am not actually interested in implementing my own function; just in understanding what the hell is going on...'beyond the language' if you like.
In Unix-like operating systems, a successful call to open returns a "file descriptor", which is merely an integer in the context of the user process. This descriptor is subsequently passed to any call that interacts with the opened file, and after calling close on it, the descriptor becomes invalid.
It is important to note that the call to open acts like a validation point at which various checks are made. If not all of the conditions are met, the call fails by returning -1 instead of the descriptor, and the kind of error is indicated in errno. The essential checks are:
Whether the file exists;
Whether the calling process is privileged to open this file in the specified mode. This is determined by matching the file permissions, owner ID and group ID to the respective ID's of the calling process.
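These checks surface to the calling program through the return value and errno; a tiny hedged illustration (the path is only an example):

/*
 * Tiny illustration: the checks above show up in userspace as a -1 return
 * plus an errno value.
 */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/etc/shadow", O_RDONLY);
    if (fd == -1) {
        /* Typically EACCES here (no permission); ENOENT if it did not exist. */
        fprintf(stderr, "open failed: %s\n", strerror(errno));
        return 1;
    }
    close(fd);
    return 0;
}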
In the context of the kernel, there has to be some kind of mapping between the process' file descriptors and the physically opened files. The internal data structure that is mapped to the descriptor may contain yet another buffer that deals with block-based devices, or an internal pointer that points to the current read/write position.
I'd suggest you take a look at this guide through a simplified version of the open() system call. It uses the following code snippet, which is representative of what happens behind the scenes when you open a file.
int sys_open(const char *filename, int flags, int mode) {
    char *tmp = getname(filename);
    int fd = get_unused_fd();
    struct file *f = filp_open(tmp, flags, mode);
    fd_install(fd, f);
    putname(tmp);
    return fd;
}
Briefly, here's what that code does, line by line:
Allocate a block of kernel-controlled memory and copy the filename into it from user-controlled memory.
Pick an unused file descriptor, which you can think of as an integer index into a growable list of currently open files. Each process has its own such list, though it's maintained by the kernel; your code can't access it directly. An entry in the list contains whatever information the underlying filesystem will use to pull bytes off the disk, such as inode number, process permissions, open flags, and so on.
The filp_open function has the implementation
struct file *filp_open(const char *filename, int flags, int mode) {
    struct nameidata nd;
    open_namei(filename, flags, mode, &nd);
    return dentry_open(nd.dentry, nd.mnt, flags);
}
which does two things:
Use the filesystem to look up the inode (or more generally, whatever sort of internal identifier the filesystem uses) corresponding to the filename or path that was passed in.
Create a struct file with the essential information about the inode and return it. This struct becomes the entry in that list of open files that I mentioned earlier.
Store ("install") the returned struct into the process's list of open files.
Free the allocated block of kernel-controlled memory.
Return the file descriptor, which can then be passed to file operation functions like read(), write(), and close(). Each of these will hand off control to the kernel, which can use the file descriptor to look up the corresponding file pointer in the process's list, and use the information in that file pointer to actually perform the reading, writing, or closing.
If you're feeling ambitious, you can compare this simplified example to the implementation of the open() system call in the Linux kernel, a function called do_sys_open(). You shouldn't have any trouble finding the similarities.
Of course, this is only the "top layer" of what happens when you call open() - or more precisely, it's the highest-level piece of kernel code that gets invoked in the process of opening a file. A high-level programming language might add additional layers on top of this. There's a lot that goes on at lower levels. (Thanks to Ruslan and pjc50 for explaining.) Roughly, from top to bottom:
open_namei() and dentry_open() invoke filesystem code, which is also part of the kernel, to access metadata and content for files and directories. The filesystem reads raw bytes from the disk and interprets those byte patterns as a tree of files and directories.
The filesystem uses the block device layer, again part of the kernel, to obtain those raw bytes from the drive. (Fun fact: Linux lets you access raw data from the block device layer using /dev/sda and the like.)
The block device layer invokes a storage device driver, which is also kernel code, to translate from a medium-level instruction like "read sector X" to individual input/output instructions in machine code. There are several types of storage device drivers, including IDE, (S)ATA, SCSI, Firewire, and so on, corresponding to the different communication standards that a drive could use. (Note that the naming is a mess.)
The I/O instructions use the built-in capabilities of the processor chip and the motherboard controller to send and receive electrical signals on the wire going to the physical drive. This is hardware, not software.
On the other end of the wire, the disk's firmware (embedded control code) interprets the electrical signals to spin the platters and move the heads (HDD), or read a flash ROM cell (SSD), or whatever is necessary to access data on that type of storage device.
This may also be somewhat incorrect due to caching. :-P Seriously though, there are many details that I've left out - a person (not me) could write multiple books describing how this whole process works. But that should give you an idea.
Any file system or operating system you want to talk about is fine by me. Nice!
On a ZX Spectrum, initializing a LOAD command will put the system into a tight loop, reading the Audio In line.
Start-of-data is indicated by a constant tone, and after that a sequence of long/short pulses follow, where a short pulse is for a binary 0 and a longer one for a binary 1 (https://en.wikipedia.org/wiki/ZX_Spectrum_software). The tight load loop gathers bits until it fills a byte (8 bits), stores this into memory, increases the memory pointer, then loops back to scan for more bits.
Typically, the first thing a loader would read is a short, fixed format header, indicating at least the number of bytes to expect, and possibly additional information such as file name, file type and loading address. After reading this short header, the program could decide whether to continue loading the main bulk of the data, or exit the loading routine and display an appropriate message for the user.
An End-of-file state could be recognized by receiving as many bytes as expected (either a fixed number of bytes, hardwired in the software, or a variable number such as indicated in a header). An error was thrown if the loading loop did not receive a pulse in the expected frequency range for a certain amount of time.
A little background on this answer
The procedure described loads data from a regular audio tape - hence the need to scan Audio In (it connected with a standard plug to tape recorders). A LOAD command is technically the same as open a file - but it's physically tied to actually loading the file. This is because the tape recorder is not controlled by the computer, and you cannot (successfully) open a file but not load it.
The "tight loop" is mentioned because (1) the CPU, a Z80-A (if memory serves), was really slow: 3.5 MHz, and (2) the Spectrum had no internal clock! That means that it had to accurately keep count of the T-states (instruction times) for every. single. instruction. inside that loop, just to maintain the accurate beep timing.
Fortunately, that low CPU speed had the distinct advantage that you could calculate the number of cycles on a piece of paper, and thus the real world time that they would take.
It depends on the operating system what exactly happens when you open a file. Below I describe what happens in Linux as it gives you an idea what happens when you open a file and you could check the source code if you are interested in more detail. I am not covering permissions as it would make this answer too long.
In Linux every file is represented by a structure called an inode. Each inode has a unique number, and every file gets exactly one inode. This structure stores metadata for a file, for example the file size, file permissions, timestamps and pointers to disk blocks, but not the file name itself. Each directory entry contains a file name and the inode number used for lookup. When you open a file, assuming you have the relevant permissions, a file descriptor is created using the unique inode number associated with the file name. As many processes/applications can point to the same file, the inode has a link field that maintains the total count of links to the file. If a file is present in a directory, its link count is one; if it also has a hard link, its link count will be two; and if the file is opened by a process, the link count is incremented by 1.
Bookkeeping, mostly. This includes various checks like "Does the file exist?" and "Do I have the permissions to open this file for writing?".
But that's all kernel stuff - unless you're implementing your own toy OS, there isn't much to delve into (if you are, have fun - it's a great learning experience). Of course, you should still learn all the possible error codes you can receive while opening a file, so that you can handle them properly - but those are usually nice little abstractions.
The most important part on the code level is that it gives you a handle to the open file, which you use for all of the other operations you do with a file. Couldn't you use the filename instead of this arbitrary handle? Well, sure - but using a handle gives you some advantages:
The system can keep track of all the files that are currently open, and prevent them from being deleted (for example).
Modern OSs are built around handles - there's tons of useful things you can do with handles, and all the different kinds of handles behave almost identically. For example, when an asynchronous I/O operation completes on a Windows file handle, the handle is signalled - this allows you to block on the handle until it's signalled, or to complete the operation entirely asynchronously. Waiting on a file handle is exactly the same as waiting on a thread handle (signalled e.g. when the thread ends), a process handle (again, signalled when the process ends), or a socket (when some asynchronous operation completes). Just as importantly, handles are owned by their respective processes, so when a process is terminated unexpectedly (or the application is poorly written), the OS knows what handles it can release.
Most operations are positional - you read from the last position in your file. By using a handle to identify a particular "opening" of a file, you can have multiple concurrent handles to the same file, each reading from their own places. In a way, the handle acts as a moveable window into the file (and a way to issue asynchronous I/O requests, which are very handy).
Handles are much smaller than file names. A handle is usually the size of a pointer, typically 4 or 8 bytes. On the other hand, filenames can have hundreds of bytes.
Handles allow the OS to move the file, even though applications have it open - the handle is still valid, and it still points to the same file, even though the file name has changed.
There's also some other tricks you can do (for example, share handles between processes to have a communication channel without using a physical file; on unix systems, files are also used for devices and various other virtual channels, so this isn't strictly necessary), but they aren't really tied to the open operation itself, so I'm not going to delve into that.
At the core of it when opening for reading nothing fancy actually needs to happen. All it needs to do is check the file exists and the application has enough privileges to read it and create a handle on which you can issue read commands to the file.
It's on those commands that actual reading will get dispatched.
The OS will often get a head start on reading by starting a read operation to fill the buffer associated with the handle. Then when you actually do the read it can return the contents of the buffer immediately rather then needing to wait on disk IO.
For opening a new file for writing, the OS will need to add an entry in the directory for the new (currently empty) file. And again a handle is created on which you can issue the write commands.
Basically, a call to open needs to find the file, and then record whatever it needs to so that later I/O operations can find it again. That's quite vague, but it will be true on all the operating systems I can immediately think of. The specifics vary from platform to platform. Many answers already on here talk about modern-day desktop operating systems. I've done a little programming on CP/M, so I will offer my knowledge about how it works on CP/M (MS-DOS probably works in the same way, but for security reasons, it is not normally done like this today).
On CP/M you have a thing called the FCB (as you mentioned C, you could call it a struct; it really is a 35-byte contiguous area in RAM containing various fields). The FCB has fields to write the file-name and a (4-bit) integer identifying the disk drive. Then, when you call the kernel's Open File, you pass a pointer to this struct by placing it in one of the CPU's registers. Some time later, the operating system returns with the struct slightly changed. Whatever I/O you do to this file, you pass a pointer to this struct to the system call.
What does CP/M do with this FCB? It reserves certain fields for its own use, and uses these to keep track of the file, so you had better not ever touch them from inside your program. The Open File operation searches through the table at the start of the disk for a file with the same name as what's in the FCB (the '?' wildcard character matches any character). If it finds a file, it copies some information into the FCB, including the file's physical location(s) on the disk, so that subsequent I/O calls ultimately call the BIOS which may pass these locations to the disk driver. At this level, specifics vary.
In simple terms, when you open a file you are actually requesting the operating system to load the desired file (copy the contents of the file) from secondary storage to RAM for processing. The reason behind this (loading the file) is that you cannot process the file directly from the hard disk because of its extremely slow speed compared to RAM.
The open command generates a system call which in turn copies the contents of the file from secondary storage (hard disk) to primary storage (RAM).
And we 'close' a file because the modified contents of the file have to be reflected back to the original file on the hard disk. :)
Hope that helps.

what's the proper buffer size for 'write' function?

I am using the low-level I/O function 'write' to write some data to disk in my code (C language on Linux). First I accumulate the data in a memory buffer, and then I use 'write' to write the data to disk when the buffer is full. So what's the best buffer size for 'write'? According to my tests, bigger isn't always faster, so I am here to look for the answer.
There is probably some advantage in doing writes which are multiples of the filesystem block size, especially if you are updating a file in place. If you write less than a partial block to a file, the OS has to read the old block, combine in the new contents and then write it out. This doesn't necessarily happen if you rapidly write small pieces in sequence because the updates will be done on buffers in memory which are flushed later. Still, once in a while you could be triggering some inefficiency if you are not filling a block (and a properly aligned one: multiple of block size at an offset which is a multiple of the block size) with each write operation.
This issue of transfer size does not necessarily go away with mmap. If you map a file, and then memcpy some data into the map, you are making a page dirty. That page has to be flushed at some later time: it is indeterminate when. If you make another memcpy which touches the same page, that page could be clean now and you're making it dirty again. So it gets written twice. Page-aligned copies of multiples-of a page size will be the way to go.
You'll want it to be a multiple of the CPU page size, in order to use memory as efficiently as possible.
But ideally you want to use mmap instead, so that you never have to deal with buffers yourself.
You could use BUFSIZ defined in <stdio.h>
Otherwise, use a small multiple of the page size sysconf(_SC_PAGESIZE) (e.g. twice that value). Most Linux systems have 4 KB pages (which is often the same as, or a small multiple of, the filesystem block size).
As others replied, using the mmap(2) system call could help. GNU systems (e.g. Linux) have an extension: the second mode string of fopen may contain the letter m, and when that happens, the GNU libc tries to mmap.
If you deal with data nearly as large as your RAM (or half of it), you might want to also use madvise(2) to fine-tune performance of mmap.
See also this answer to a question quite similar to yours. (You could use 64Kbytes as a reasonable buffer size).
The "best" size depends a great deal on the underlying file system.
The stat and fstat calls fill in a data structure, struct stat, that includes the following field:
blksize_t st_blksize; /* blocksize for file system I/O */
The OS is responsible for filling this field with a "good size" for write() blocks. However, it's also important to call write() with memory that is "well aligned" (e.g., the result of malloc calls). The easiest way to get this to happen is to use the provided <stdio.h> stream interface (with FILE * objects).
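A hedged sketch of how that might look in code; pick_buffer_size(), alloc_write_buffer() and the 64 KiB fallback are inventions for the example:

/*
 * Sketch: derive a write buffer size from the filesystem's preferred I/O
 * size (st_blksize) and round it up to a multiple of the page size, then
 * allocate a page-aligned buffer so write() calls stay "well aligned".
 */
#define _POSIX_C_SOURCE 200809L
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

size_t pick_buffer_size(int fd)
{
    struct stat st;
    long page = sysconf(_SC_PAGESIZE);
    size_t size = 64 * 1024;                        /* fallback if fstat fails */

    if (fstat(fd, &st) == 0 && st.st_blksize > 0)
        size = (size_t)st.st_blksize;
    if (page > 0 && size % (size_t)page != 0)       /* round up to a page multiple */
        size = ((size / (size_t)page) + 1) * (size_t)page;
    return size;
}

void *alloc_write_buffer(size_t size)
{
    void *buf = NULL;
    long page = sysconf(_SC_PAGESIZE);

    if (page <= 0)
        page = 4096;                                /* reasonable assumption */
    if (posix_memalign(&buf, (size_t)page, size) != 0)
        return NULL;
    return buf;
}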
Using mmap, as in other answers here, can also be very fast for many cases. Note that it's not well suited to some kinds of streams (e.g., sockets and pipes) though.
It depends on the amount of RAM, VM, etc. as well as the amount of data being written. The more general answer is to benchmark what buffer works best for the load you're dealing with, and use what works the best.
