I'm trying to understand how asynchronous file operations are emulated using threads. I've found next to nothing to read on the subject.
Is it possible that:
• a process uses a thread to open a regular file (on an HDD);
• the parent gets the file descriptor from the thread, and may then terminate the thread;
• the parent uses the file descriptor with a new thread, reading X bytes from the file;
• the parent gets the file descriptor back along with the current seek position;
• the parent may repeat these operations without needing to open, or seek, every time it wishes to "continue" reading a new chunk of the file?
This is just a wild guess of mine; I would appreciate it if anybody could shed more light on how this is emulated efficiently.
UPDATE:
By efficient I actually mean that I don't want the thread to "wait" from the moment the file is opened. Think of a non-blocking HTTP daemon serving a client a huge file: you want to use a thread to read chunks of the file without blocking the daemon, but you don't want to keep that thread busy "waiting" while the actual transfer takes place; you want to use it for the blocking operations of other clients.
To understand asynchronous I/O better, it may be helpful to think in terms of overlapping operations; that is, the number of pending operations (operations that have been started but not yet completed) can simultaneously go above one.
A diagram that explains asynchronous I/O can be found here: http://msdn.microsoft.com/en-us/library/aa365683(VS.85).aspx
If you are using the asynchronous I/O capabilities provided by the underlying operating system, it is possible to asynchronously read from multiple files without spawning an equal number of threads.
If your underlying operating system does not provide asynchronous I/O, or if you decide not to use it (in other words, you wish to emulate asynchronous operation using only blocking I/O, the regular read/write provided by the operating system), then it is necessary to spawn as many threads as there are simultaneous I/O operations. This is because a thread that makes a blocking I/O call cannot continue executing until the operation finishes, so in order to start another blocking I/O operation, that operation has to be issued from a thread that is not already occupied.
When you open/create a file, fire up a thread. Now store that thread id/pointer as your file handle.
Basically the thread will do nothing except sit in a loop waiting for an "event"; a semaphore works well here. When you want to do a read, you add the read command to a queue (remember to protect the queue additions with a critical section), return a unique id, and then increment the semaphore. If the thread is asleep it will now wake up, grab the first command off the queue, and process it. When it has completed, you remove the command from the queue.
To poll whether a file read has completed, you can simply check whether it is still in the command queue. If it's not there, the command has completed.
Furthermore, if you want to allow synchronous reads as well, you can wait, after sending the command, for an "event" triggered on completion. You then check whether the unique id is still in the queue; if it isn't, you return control. If it is, you go back to waiting until the relevant unique id has been processed.
Related
I'm experimenting with a fictional server/client application where the client side launches request threads over a (possibly very long) period of time, with small delays in between. Each request thread writes the contents of the request to the 'public' fifo (known by all client and server threads), and receives the server's answer in a 'private' fifo that is created by the server with a name that is implicitly known (in my case, it's 'tmp/processId.threadId').
The public fifo is opened once in the main (request thread spawner) thread so that all request threads may write to it.
Since I don't care about the return value of my request threads and I can't be sure how many request threads I will create (so I can't store their ids and join them later), I opted to create the threads in a detached state, exit the main thread when the specified timeout expires, and let the already-spawned threads live on their own.
All of this is fine, however, I'm not closing the public fifo anywhere after all spawned request threads finish: after all, I did exit the main thread without waiting. Is this a small kind of disaster, in which case I absolutely need to count the active threads (perhaps with a condition variable) and close the fifo when it's 0? Should I just accept that the file is not explicitly getting closed, and let the OS do it?
Supposing that you genuinely mean a FIFO, such as might be created via mkfifo(), no, it's not a particular issue that the process does not explicitly close it. If any open handles on it remain when the process terminates, they will be closed. Depending on the nature of the termination, it might be that pending data are not flushed, but that is of no consequence if the FIFO is used only for communication among the threads of one process.
But it possibly is an issue that the process does not remove the FIFO. A FIFO has filesystem persistence. Once you create one, it lives until it no longer has any links in the filesystem and is no longer open in any process (like any other file). Merely closing it does not cause it to be removed. Aside from leaving clutter on your filesystem, this might cause issues for concurrent or future runs of the program.
If indeed you are using your FIFOs only for communication among the threads of a single process, then you would probably be better served by pipes.
I managed to solve this issue by setting up a cleanup routine with atexit(), which is called when the process terminates, i.e., when all threads have finished their work.
I am trying to develop a program with POSIX threads in which a child thread updates the contents of a file and a database at certain intervals, while other children read from the file and database all the time. I don't want any thread to read the file or database while the single updater thread is writing to them. My idea was to have the updater thread put all the other children to sleep, but sleep() only makes the calling thread sleep. Is there any way to implement the above scenario?
EDIT:
I have two different functions for reading and writing the file. Most of the threads access the read method, so they aren't vulnerable, but they might be if they try to read while the periodic thread that accesses the write method is updating the file's contents.
You do not want to use sleep for this at all. Instead, use a reader/writer lock. The updater thread must acquire the lock (in write mode) before it modifies the data. And the other threads must acquire the lock (in read mode) before reading the data.
Note that if your reader threads are reading continuously, the writer will get starved and never acquire the lock. So you will need some separate mechanism such as a flag the updater can set that tells the readers to please stop reading and release their locks. If the readers only read occasionally this shouldn't be such an issue (unless there are tons of readers in which case you may have an architectural problem).
From the MSDN Documentation:
The transport providers allow an application to invoke send and receive operations from within the context of the socket I/O completion routine, and guarantee that, for a given socket, I/O completion routines will not be nested. This permits time-sensitive data transmissions to occur entirely within a preemptive context.
In our system we have one thread calling WSARecvFrom() for multiple sockets, with one completion routine for that thread handling all callbacks from WSARecvFrom() overlapped I/O.
Our tests showed that this completion routine is called as if triggered by an interrupt: it is invoked for one socket while the completion routine for another socket is still being processed.
How can we prevent this completion routine from being called while it is still processing input from another socket?
What serialization of data processing can we use?
Note there are hundreds of sockets receiving and sending real-time data. Synchronization by waiting on multiple objects is not applicable, as the Win32 API defines a maximum of 64.
We cannot use a semaphore, because when the routine is newly invoked, the old, ongoing processing is interrupted, so the semaphore would never be released and the new processing would block forever.
Critical sections or mutexes are not an option either, because the completion-routine callback is made from within the same thread, so a CS or mutex would be acquired anyway and would not wait until the old processing is finished.
Does anyone have an idea, or an even better approach, to serialize (synchronize) the data processing?
If you read the WSARecvFrom() documentation again more carefully, it also says:
The completion routine follows the same rules as stipulated for Windows file I/O completion routines. The completion routine will not be invoked until the thread is in an alertable wait state such as can occur when the function WSAWaitForMultipleEvents with the fAlertable parameter set to TRUE is invoked.
The Alertable I/O documentation then states:
When the thread enters an alertable state, the following events occur:
The kernel checks the thread's APC queue. If the queue contains callback function pointers, the kernel removes the pointer from the queue and sends it to the thread.
The thread executes the callback function.
Steps 1 and 2 are repeated for each pointer remaining in the queue.
When the queue is empty, the thread returns from the function that placed it in an alertable state.
So it should be practically impossible for a given thread to overlap multiple pending completion routines on top of each other, because the thread receives and processes the routines in a serialized manner. The only way I could see that being different is if a completion routine is doing something to put the thread into a second alertable state while a previous alertable state is still in effect. I'm not sure what Windows does in that situation, but you should avoid doing it anyway.
Note there are hundreds of sockets receiving and sending real-time data. Synchronization by waiting on multiple objects is not applicable, as the Win32 API defines a maximum of 64.
The WaitForMultipleObjects() documentation tells you how to work around that limitation:
To wait on more than MAXIMUM_WAIT_OBJECTS handles, use one of the following methods:
• Create a thread to wait on MAXIMUM_WAIT_OBJECTS handles, then wait on that thread plus the other handles. Use this technique to break the handles into groups of MAXIMUM_WAIT_OBJECTS.
• Call RegisterWaitForSingleObject to wait on each handle. A wait thread from the thread pool waits on MAXIMUM_WAIT_OBJECTS registered objects and assigns a worker thread after the object is signaled or the time-out interval expires.
I wouldn't wait on the sockets anyway, that is not very efficient. Using completion routines is fine as long as they are doing safe things.
Otherwise, I would suggest you stop using completion routines and switch to using an I/O Completion Port for the socket I/O instead. Then you are in more control of when the completion results are reported to you, because you have to call GetQueuedCompletionStatus() yourself to get the results of each I/O operation. You can have multiple sockets associated with a single IOCP, and then have a small pool of threads (typically one thread per CPU core works best) all calling GetQueuedCompletionStatus() on that IOCP. This way, you can process multiple I/O results in parallel, as they will be in different thread contexts and cannot overlap each other in the same thread. This does mean, however, that you can perform an I/O operation in one thread and the result may show up in a different thread. Just make sure your completion processing is thread-safe.
First of all, let me say thanks for all the helpful hints and comments on my question.
We have now stopped using completion routines and changed the application to use completion ports.
The biggest problem we had with completion routines is that every time the thread enters an alertable state, the completion routine can (and will) be called again by the OS. As seen in the debugger, even calling WSASendTo() from inside the completion routine puts the thread into an alertable state, so the completion routine is executed again before the previous execution has come to an end.
This makes it nearly impossible to synchronize data processing from multiple different sockets.
The approach using completion ports seems to be the perfect one. You then have control over what you are doing when GetQueuedCompletionStatus() releases you to process a data buffer: you can do the synchronization of data processing yourself, in a linear fashion, without being interrupted and re-entered while trying to process the data.
In the context of block devices such as files: are Linux kernel AIO functions like io_submit() only asynchronous within the supplied queue of I/O operations, or are they also asynchronous across several processes and/or threads that have their own queues of I/O operations on the same file?
The docs say:
The io_submit() system call queues nr I/O request blocks for processing in the AIO context ctx_id. The iocbpp argument should be an array of nr AIO control blocks, which will be submitted to context ctx_id.
Update:
Example:
If I spawn two threads, each with 100 queued I/O operations on the same file, and both call io_submit() at approximately the same time: will all 200 I/O operations be asynchronous, or will thread #1's 100 I/O operations only be asynchronous with respect to each other, blocking thread #2 until all of thread #1's I/O operations are done?
The only PART of asynchronous behaviour that your application should care about is within your application. Yes, other processes are likely going to ALSO write data to the disk at some point during the runtime of your application. There is very little you can do to stop that in a multitasking, multiuser and potentially multiprocessor system.
The general idea here is that your application doesn't block, which is what read or write (and their more advanced cousins fread, fwrite, etc.) would otherwise do.
If you want to stop other processes from touching "your" files, then you need to use file-locking or something similar.
When a set of I/O requests is submitted with io_submit(), the system call returns immediately. From the point of view of the thread issuing the requests, the execution of the commands embedded in those requests is asynchronous. The thread will have to query the OS to learn the result, and is free to do what it wants in the meantime.
Now, if two threads happen each to emit a set of requests, they will both be in the same situation. They will both have to ask the OS about the progress of their respective I/O commands; neither thread will be blocked.
From the AIO framework point of view, it is entirely possible to make the OS actually execute the requests before returning from the io_submit call for either or all the threads invoking it, but the API remains the same: userland threads will still manipulate the API as an async one, obtaining a token for a future result from the API when it posts its requests, and using that token to get the real result.
In the specific case of linux AIO, the token is the context created beforehand, and the result check syscall is io_getevents, which reports an "event" (ie. a result) for each completed request.
Regarding your example: is it possible that during the second syscall all the requests of the first thread get completed? I don't see any reason why that could never happen, but if both threads post 100 requests very close to each other it seems very unlikely. A more likely scenario is that several of the first thread's requests are already completed when the second thread makes its own call to io_submit(), but in any case that call will not block.
From epoll's man page:
epoll is a variant of poll(2) that can be used either as an edge-triggered
or a level-triggered interface
When would one use the edge triggered option? The man page gives an example that uses it, but I don't see why it is necessary in the example.
When an FD becomes read or write ready, you might not necessarily want to read (or write) all the data immediately.
Level-triggered epoll will keep nagging you as long as the FD remains ready, whereas edge-triggered won't bother you again until the next time you get an EAGAIN (so it's more complicated to code around, but can be more efficient depending on what you need to do).
Say you're writing from a resource to an FD. If you register your interest for that FD becoming write ready as level-triggered, you'll get constant notification that the FD is still ready for writing. If the resource isn't yet available, that's a waste of a wake-up, because you can't write any more anyway.
If you were to add it as edge-triggered instead, you'd get notification that the FD was write ready once, then when the other resource becomes ready you write as much as you can. Then if write(2) returns EAGAIN, you stop writing and wait for the next notification.
The same applies for reading, because you might not want to pull all the data into user-space before you're ready to do whatever you want to do with it (thus having to buffer it, etc etc). With edge-triggered epoll you get told when it's ready to read, and then can remember that and do the actual reading "as and when".
In my experiments, ET doesn't guarantee that only one thread wakes up, although it often wakes up only one. The EPOLLONESHOT flag is for this purpose.
Level triggered
Use level trigger mode when you can't consume all the data in the FD and want epoll to keep triggering while data is available.
For example, if you are receiving a large file on an FD and cannot consume all of its data in one go, and you want epoll to keep triggering for the next read, level-triggered mode is suitable.
Disadvantage
thundering herd
The EPOLLEXCLUSIVE flag is meant to prevent the thundering-herd phenomenon.
less efficiency
When a read/write event occurs on the monitored file descriptor, epoll_wait() notifies the handler to read or write. If you don’t read or write all the data at once (e.g., the read/write buffer is too small), then the next time epoll_wait() is called, it will notify you to continue reading or writing on the file descriptor you didn’t finish reading or writing on, but of course, if you never read or write, it will keep notifying you.
If the system has a large number of ready file descriptors that you don’t need to read or write, and they return every time, this can greatly reduce the efficiency of the handler retrieving the ready file descriptors it cares about.
use cases
redis epoll: since the I/O thread of Redis is single-threaded, it uses level-triggered mode.
Edge triggered
Use edge triggered mode and make sure all data available is buffered and will be handled eventually.
As Chris Dodd mentioned in the comments
ET is also particularly nice with a multithreaded server on a multicore machine. You can run one thread per core and have all of them call epoll_wait on the same FD. When data comes in on an FD, exactly one thread will be woken to handle it
use cases
nginx epoll model
golang netpoll