How to synchronise processing from WSARecvFrom() when using a CompletionRoutine with multiple sockets (C)

From the MSDN Documentation:
The transport providers allow an application to invoke send and receive operations from within the context of the socket I/O completion routine, and guarantee that, for a given socket, I/O completion routines will not be nested. This permits time-sensitive data transmissions to occur entirely within a preemptive context.
In our system, one thread calls WSARecvFrom() for multiple sockets, and a single completion routine for that thread handles all callbacks from the WSARecvFrom() overlapped I/O.
Our tests showed that this completion routine is called as if triggered by an interrupt: it is invoked for one socket while still processing the completion routine for another socket.
How can we prevent this completion routine from being called while it is still processing input from another socket?
What serialisation of data processing can we use?
Note that there are hundreds of sockets receiving and sending real-time data. Synchronisation by waiting for multiple objects is not applicable, as there is a maximum of 64 handles defined by the Win32 API.
We cannot use a semaphore, because when the routine is newly invoked the old, ongoing processing is interrupted, so the semaphore would never be released and the new processing would block forever.
Critical sections or a mutex are not an option either, because the completion routine callback is made from within the same thread, so a CS or mutex would be acquired anyway and would not wait until the old processing is finished.
Does anyone have an idea, or even a better approach, to serialise (synchronise) data processing?

If you read the WSARecvFrom() documentation again more carefully, it also says:
The completion routine follows the same rules as stipulated for Windows file I/O completion routines. The completion routine will not be invoked until the thread is in an alertable wait state such as can occur when the function WSAWaitForMultipleEvents with the fAlertable parameter set to TRUE is invoked.
The Alertable I/O documentation then states:
When the thread enters an alertable state, the following events occur:
1. The kernel checks the thread's APC queue. If the queue contains callback function pointers, the kernel removes the pointer from the queue and sends it to the thread.
2. The thread executes the callback function.
3. Steps 1 and 2 are repeated for each pointer remaining in the queue.
4. When the queue is empty, the thread returns from the function that placed it in an alertable state.
So it should be practically impossible for a given thread to overlap multiple pending completion routines on top of each other, because the thread receives and processes the routines in a serialized manner. The only way I could see that being different is if a completion routine is doing something to put the thread into a second alertable state while a previous alertable state is still in effect. I'm not sure what Windows does in that situation, but you should avoid doing it anyway.
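For illustration, here is a minimal sketch of what such a serialised receive loop can look like. The SOCK_CTX layout and its initialisation are my own assumptions, and error handling is omitted; the point is that, as long as the completion routine itself never enters an alertable wait, the queued APCs run one at a time on the dispatcher thread:

    #include <winsock2.h>
    #include <ws2tcpip.h>

    typedef struct SOCK_CTX {
        WSAOVERLAPPED      ov;       /* first member, so the OVERLAPPED pointer maps back to the context */
        SOCKET             s;
        WSABUF             buf;      /* assumed initialised to point at data[] before the first receive */
        char               data[1500];
        struct sockaddr_in from;
        INT                fromLen;
    } SOCK_CTX;

    static void CALLBACK OnRecv(DWORD err, DWORD bytes, LPWSAOVERLAPPED ov, DWORD flags)
    {
        SOCK_CTX *ctx = (SOCK_CTX *)ov;   /* valid because ov is the first member */
        if (err == 0) {
            /* process ctx->data[0 .. bytes) here; APCs queued to this thread run
               one after another, so this body is not re-entered for another socket
               as long as we do not enter an alertable wait in here */
        }
        DWORD f = 0;
        ctx->fromLen = sizeof(ctx->from);
        WSARecvFrom(ctx->s, &ctx->buf, 1, NULL, &f,
                    (struct sockaddr *)&ctx->from, &ctx->fromLen, &ctx->ov, OnRecv);
    }

    /* The dispatcher thread drains the APC queue by sleeping alertably. */
    static void RunLoop(void)
    {
        for (;;)
            SleepEx(INFINITE, TRUE);  /* returns WAIT_IO_COMPLETION after running queued APCs */
    }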
Note that there are hundreds of sockets receiving and sending real-time data. Synchronisation by waiting for multiple objects is not applicable, as there is a maximum of 64 defined by the Win32 API.
The WaitForMultipleObjects() documentation tells you how to work around that limitation:
To wait on more than MAXIMUM_WAIT_OBJECTS handles, use one of the following methods:
• Create a thread to wait on MAXIMUM_WAIT_OBJECTS handles, then wait on that thread plus the other handles. Use this technique to break the handles into groups of MAXIMUM_WAIT_OBJECTS.
• Call RegisterWaitForSingleObject to wait on each handle. A wait thread from the thread pool waits on MAXIMUM_WAIT_OBJECTS registered objects and assigns a worker thread after the object is signaled or the time-out interval expires.
I wouldn't wait on the sockets anyway; that is not very efficient. Using completion routines is fine as long as they are doing safe things.
Otherwise, I would suggest you stop using completion routines and switch to using an I/O Completion Port for the socket I/O instead. Then you are in more control of when the completion results are reported to you, because you have to call GetQueuedCompletionStatus() yourself to get the results of each I/O operation. You can have multiple sockets associated with a single IOCP, and then have a small pool of threads (typically one thread per CPU core works best) all calling GetQueuedCompletionStatus() on that IOCP. This way, you can process multiple I/O results in parallel, as they will be in different thread contexts and cannot overlap each other in the same thread. This does mean, however, that you can perform an I/O operation in one thread and the result may show up in a different thread. Just make sure your completion processing is thread-safe.
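As a rough sketch of that design (the PER_IO_DATA struct and the NULL-overlapped shutdown convention are assumptions, not a fixed API):

    #include <winsock2.h>
    #include <windows.h>

    typedef struct PER_IO_DATA {
        OVERLAPPED ov;            /* must be the first member */
        WSABUF     buf;
        char       data[1500];
        /* ... per-operation state ... */
    } PER_IO_DATA;

    DWORD WINAPI Worker(LPVOID param)
    {
        HANDLE iocp = (HANDLE)param;
        for (;;) {
            DWORD bytes = 0;
            ULONG_PTR key = 0;        /* per-socket key given to CreateIoCompletionPort */
            OVERLAPPED *ov = NULL;
            BOOL ok = GetQueuedCompletionStatus(iocp, &bytes, &key, &ov, INFINITE);
            if (ov == NULL)
                break;                /* e.g. PostQueuedCompletionStatus() used for shutdown */
            PER_IO_DATA *io = (PER_IO_DATA *)ov;
            if (ok) {
                /* process io->data[0 .. bytes); this runs in whichever worker
                   dequeued the completion, so processing must be thread-safe */
            }
            /* re-issue WSARecvFrom() with &io->ov here to keep the socket reading */
        }
        return 0;
    }

    /* Setup: create the port, associate each socket, spawn one worker per core:
       HANDLE iocp = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0);
       CreateIoCompletionPort((HANDLE)sock, iocp, (ULONG_PTR)sockCtx, 0);
       CreateThread(NULL, 0, Worker, iocp, 0, NULL);                          */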

First of all, let me thank you for all the helpful hints and comments on my question.
We have now stopped using completion routines and changed the application to use completion ports.
The biggest problem we had with completion routines is that every time the thread goes into an alertable state, the completion routines can (and will) be called again by the OS. As seen in the debugger, calling WSASendTo() from inside the completion routine also puts the thread into an alertable state, so the completion routine is executed again before the previous execution of the completion routine has come to its end.
This makes it nearly impossible to synchronise data processing from multiple different sockets.
The approach using completion ports seems to be the perfect one. You then have control over what you are doing when you return from GetQueuedCompletionStatus() to process a data buffer. You have to, and you can, do the synchronisation of data processing yourself, in a linear fashion, without being interrupted and re-entered while trying to process the data.

Related

What does blocking mode mean?

I can't seem to find a useful definition for "blocking" (or for that matter "non-blocking") when used in relation to POSIX C functions.
For example read() may be called in blocking or non-blocking mode on a FIFO pipe. If called in blocking mode, it will block until it's opened elsewhere for writing.
Will this blocking just seize up the thread? Or the process? Or will it pause the rendering of the multiverse?
Blocking means that the thread is de-scheduled off the CPU while waiting for an event to happen. When a thread is de-scheduled it doesn't consume any CPU cycles and allows other threads to make progress or put the CPU in a lower power state if there are no other threads waiting to run.
One thread blocking doesn't affect other threads you may have in the process. A blocking call only blocks the calling thread.
For example, read blocks when there is no data in the pipe to read. When data arrives it "unblocks" and the read call returns.
In the kernel, each file description and every other object one can block on (e.g. a mutex or condition variable) has a list of waiting threads. When a thread blocks on an object, it is appended to that object's wait list and de-scheduled off the CPU. Whenever an event occurs for the object, the kernel checks the wait list for threads waiting on such an event; if there are any, one or more threads are scheduled again and their blocking calls eventually return.
In non-blocking mode, such calls do not block; they return immediately with an error code, with errno set to EWOULDBLOCK or EAGAIN, which are nowadays two different names for the same errno value. (pthread calls do not set errno but return the error value directly.)
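A short sketch of the difference on the read side (fd setup is assumed; only the handling of the result is shown):

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    void read_once(int fd)
    {
        char buf[256];
        ssize_t n = read(fd, buf, sizeof buf);   /* blocks only if O_NONBLOCK is clear */
        if (n > 0) {
            printf("got %zd bytes\n", n);
        } else if (n == 0) {
            printf("EOF: write end closed\n");
        } else if (errno == EAGAIN || errno == EWOULDBLOCK) {
            printf("no data yet; the non-blocking read returned immediately\n");
        } else {
            perror("read");
        }
    }

    /* To switch the descriptor to non-blocking mode:
       fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);  */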

Posix select()/poll() and pthread IPC

This is kind of a generic question; however, I have met this problem several times already and I still haven't found the best possible solution.
Imagine you have a program (e.g. an HTTP application server) that is multithreaded and communicates over sockets (TCP, Unix, ...). The main thread uses asynchronous I/O and the select() or poll() POSIX calls to dispatch traffic from/to sockets. There are also worker threads that process requests and provide responses. To send a response back to the client, a worker thread synchronises with the main thread (the one that polls) 'somehow'. The core of the question is 'how', in terms of what is efficient. I can use pipe(), a descriptor-based IPC mechanism, but this seems to me like quite a lot of overhead. I tend to use pthread IPC techniques like mutexes, condition variables, etc., but these will not work with select() or poll().
Is there a common technique in POSIX (and surroundings) that address this conflict?
I guess on Windows there is WaitForMultipleObjects() function that allows that.
The example program is crafted to illustrate the issue; I know that I can design the master/worker pattern in a different way, but this is not what I'm asking about. I have other cases where I'm in the same situation.
You could use a signal to poke the worker thread, which will interrupt the select() call and make it return with EINTR. This gets even easier to do with pselect().
For this to work:
decide on a signal (or allocate a real-time signal)
attach an empty handler function to it (if the signal were ignored, the system call would be automatically restarted)
block the signal, at least in the worker thread.
use the signal mask argument in pselect() to unblock the signal while waiting.
Between threads, you can use pthread_kill to deliver the signal to the worker thread specifically. When another process should send the signal, you can either make sure the signal is blocked in all but the worker thread (so it will be delivered there), or use the signal handler to find out whether the signal was sent to the worker thread, and use pthread_kill to forward it explicitly (the worker thread still doesn't need to do anything in the signal handler).
Due to laziness on my part, I don't have a source code viewer online, but you can clone the LibreVISA git tree, and take a look at src/messagepump.cpp, where this method is used to poke the worker thread after another thread added a file descriptor to the watch list.
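Here is a rough sketch of that sequence, assuming SIGUSR1 as the chosen signal (error handling omitted):

    #include <errno.h>
    #include <pthread.h>
    #include <signal.h>
    #include <string.h>
    #include <sys/select.h>

    static void noop_handler(int sig) { (void)sig; }

    void wait_loop(int watch_fd)
    {
        /* 1. attach an empty handler: if the signal were ignored,
              pselect() would be restarted instead of returning EINTR */
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = noop_handler;
        sigaction(SIGUSR1, &sa, NULL);

        /* 2. block the signal during normal execution */
        sigset_t blocked, waitmask;
        sigemptyset(&blocked);
        sigaddset(&blocked, SIGUSR1);
        pthread_sigmask(SIG_BLOCK, &blocked, &waitmask);
        sigdelset(&waitmask, SIGUSR1);   /* 3. ...and unblock it only while waiting */

        for (;;) {
            fd_set rfds;
            FD_ZERO(&rfds);
            FD_SET(watch_fd, &rfds);
            int r = pselect(watch_fd + 1, &rfds, NULL, NULL, NULL, &waitmask);
            if (r == -1 && errno == EINTR) {
                /* poked via pthread_kill(worker_tid, SIGUSR1) from another thread:
                   re-check shared state (e.g. the fd watch list) and loop */
                continue;
            }
            /* handle ready descriptors */
        }
    }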
Simon Richter's answer is very good.
Another alternative might be to make the main thread responsible only for listening for new connections and starting up a worker thread with the connection information, so that the worker is responsible for all subsequent 'transactions' from this source.
My understanding is:
Main thread uses select.
Worker threads process requests forwarded to them by the main thread.
So you need to synchronise between the workers and the main thread, e.g. when a worker finishes a transaction it needs to send the response back to the main thread, which in turn forwards the response back to the source.
Why don't you remove the problem of having to synchronize between the worker thread and the main thread by making the worker thread responsible for all transactions from a particular connection?
Thus the main thread is only responsible for listening for new connections and starting up a worker thread with the connection information, i.e. the file descriptor for the new connection.
First of all, the way to wake another thread is to use the pthread_cond_wait / pthread_cond_timedwait calls in thread A to wait, and to have thread B use pthread_cond_broadcast / pthread_cond_signal to wake it up. So, for instance, if B is a producer and A is the consumer, the producer might add items to a linked list protected by a mutex. There would be an associated condition variable such that, after adding an item, the producer could wake thread A so that it goes to see whether any new items have arrived on the list and, if so, removes them. I say 'associated' because the same mutex that protects the list can be associated with the condition variable.
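A minimal sketch of that producer/consumer arrangement (the struct node list is a stand-in for real payload):

    #include <pthread.h>
    #include <stddef.h>

    struct node { struct node *next; /* payload */ };
    static struct node *head = NULL;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;

    void produce(struct node *item)            /* thread B */
    {
        pthread_mutex_lock(&lock);
        item->next = head;
        head = item;
        pthread_cond_signal(&cond);            /* wake the consumer */
        pthread_mutex_unlock(&lock);
    }

    struct node *consume(void)                 /* thread A */
    {
        pthread_mutex_lock(&lock);
        while (head == NULL)                   /* loop guards against spurious wakeups */
            pthread_cond_wait(&cond, &lock);   /* atomically releases and reacquires the mutex */
        struct node *item = head;
        head = item->next;
        pthread_mutex_unlock(&lock);
        return item;
    }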
So far so good. Now you mention asynchronous I/O. What I've wanted to do several times is select() or poll() on a set of FDs and a set of condition variables, so that the select()/poll() is interrupted when a condition variable is broadcast. There is no easy way of doing this directly; you cannot simply mix and match.
You thus need to do one of two things. Either:
work around the problem: for instance, use a self-connected pipe() and send one byte to it to wake the select() up, either instead of the condition variable, in addition to the condition variable, or from some additional thread waiting on the condition variable (a sketch of this appears at the end of this answer); or
convert to a more threaded model, i.e. use one thread for sending and one thread for receiving, with a producer/consumer model, so the sender thread simply removes items from a list/buffer and sends them (blocking if necessary), and the receiver waits for I/O (blocking if necessary) and adds it to the list (this is what you put in italics at the end).
The second is a major design change for those of us brought up on asynchronous I/O, and the first is ugly. You are not the first to be dismayed by this, but I've not found an easy way around it. Regarding the first option's inefficiency: if you only write one character to the self-pipe to wake the select loop, I don't think you are going to see much inefficiency.
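For reference, a sketch of the self-pipe workaround mentioned above (pipe creation and error handling omitted):

    #include <sys/select.h>
    #include <unistd.h>

    static int wake_pipe[2];                    /* created once with pipe(wake_pipe) */

    void wake_select_loop(void)                 /* called from a worker thread */
    {
        char c = 1;
        write(wake_pipe[1], &c, 1);             /* one byte is enough */
    }

    void select_loop(int sock_fd)
    {
        for (;;) {
            fd_set rfds;
            FD_ZERO(&rfds);
            FD_SET(sock_fd, &rfds);
            FD_SET(wake_pipe[0], &rfds);
            int maxfd = sock_fd > wake_pipe[0] ? sock_fd : wake_pipe[0];
            if (select(maxfd + 1, &rfds, NULL, NULL, NULL) <= 0)
                continue;
            if (FD_ISSET(wake_pipe[0], &rfds)) {
                char drain[64];
                read(wake_pipe[0], drain, sizeof drain);  /* drain the wake-up bytes */
                /* re-check whatever shared state the workers updated */
            }
            if (FD_ISSET(sock_fd, &rfds)) {
                /* handle socket traffic */
            }
        }
    }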

What is meant by "blocking system call"?

What is the meaning of "blocking system call"?
In my operating systems course we are studying multithreaded programming. I'm unsure what is meant when I read in my textbook that "it can allow another thread to run when a thread makes a blocking system call".
A blocking system call is one that must wait until the action can be completed. read() would be a good example - if no input is ready, it'll sit there and wait until some is (provided you haven't set it to non-blocking, of course, in which case it wouldn't be a blocking system call). Obviously, while one thread is waiting on a blocking system call, another thread can be off doing something else.
For a blocking system call, the caller can't do anything until the system call returns. If the system call may be lengthy (e.g. involve file IO or networking IO) this can be a bad thing (e.g. imagine a frustrated user hammering a "Cancel" button in an application that doesn't respond because that thread is blocked waiting for a packet from the network that isn't arriving). To get around that problem (to do useful work while you wait for a blocking system call to return) you can use threads - while one thread is blocked the other thread/s can continue doing useful work.
The alternative is non-blocking system calls. In this case the system call returns (almost) immediately. For lengthy system calls the result of the system call is either sent to the caller later (e.g. as some sort of event or message or signal) or polled by the caller later. This allows you to have a single thread waiting for many different lengthy system calls to complete at the same time; and avoids the hassle of threads (and locking, race conditions, the overhead of thread switches, etc). However, it also increases the hassle involved with getting and handling the system call's results.
It is (almost always) possible to write a non-blocking wrapper around a blocking system call; where the wrapper spawns a thread and returns (almost) immediately, and the spawned thread does the blocking system call and either sends the system call's results to the original caller or stores them where the original caller can poll for them.
It is also (almost always) possible to write a blocking wrapper around a non-blocking system call; where the wrapper does the system call and waits for the results before it returns.
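For example, here is a hedged sketch of the first kind of wrapper, a non-blocking wrapper around blocking read() using a detached thread and a polled completion flag (the async_read structure and its polling protocol are my own inventions for illustration):

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdlib.h>
    #include <unistd.h>

    struct async_read {
        int fd;
        void *buf;
        size_t len;
        ssize_t result;
        atomic_bool done;
    };

    static void *reader_thread(void *arg)
    {
        struct async_read *ar = arg;
        ar->result = read(ar->fd, ar->buf, ar->len);  /* the blocking call happens here */
        atomic_store(&ar->done, 1);                   /* publish the result */
        return NULL;
    }

    struct async_read *async_read_start(int fd, void *buf, size_t len)
    {
        struct async_read *ar = malloc(sizeof *ar);
        ar->fd = fd; ar->buf = buf; ar->len = len;
        atomic_init(&ar->done, 0);
        pthread_t t;
        pthread_create(&t, NULL, reader_thread, ar);
        pthread_detach(t);
        return ar;                                    /* returns (almost) immediately */
    }

    /* The caller polls:
       if (atomic_load(&ar->done)) { use ar->result; free(ar); }  */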
I would suggest having a read on this very short text:
http://files.mkgnu.net/files/upstare/UPSTARE_RELEASE_0-12-8/manual/html-multi/x755.html
In particular you can read there why blocking system calls can be a worry with threads, not just with concurrent processes:
This is particularly problematic for multi-threaded applications, since one thread blocking on a system call may indefinitely delay the update of the code of another thread.
Hope it helps.
A blocking system call is one through which a process requests a service from the system that is not currently available; that system call therefore blocks the process.
If you want to see this in the context of multithreading, you can go through the link...

Linux asynchronous I/O queue

In the context of block devices such as a file: are Linux kernel AIO functions like io_submit() only asynchronous within the supplied queue of I/O operations, or are they (also) asynchronous across several processes and/or threads that also have queues of I/O operations on the same file?
The docs say: The io_submit() system call queues nr I/O request blocks for processing in the AIO context ctx_id. The iocbpp argument should be an array of nr AIO control blocks, which will be submitted to context ctx_id.
Update:
Example:
If I spawn two threads, both have 100 queued I/O operations on the same file and both call io_submit() at approx. the same time; will all 200 I/O operations be asynchronous or will thread #1's 100 I/O operations only be asynchronous in regards to each other but block thread #2 until all thread #1's I/O operations are done?
The only PART of asynchronous behaviour that your application should care about is within your application. Yes, other processes are likely going to ALSO write data to the disk at some point during the runtime of your application. There is very little you can do to stop that in a multitasking, multiuser and potentially multiprocessor system.
The general idea here is that your application doesn't block, the way regular read or write (and their more advanced cousins fread, fwrite, etc.) do.
If you want to stop other processes from touching "your" files, then you need to use file-locking or something similar.
When a set of io requests is submitted with io_submit, the system call returns immediately. From the point of view of the thread emitting the requests, the execution of the commands embedded in the requests is asynchronous. The thread will have to query the OS to know the result, and is free to do what it wants in the mean time.
Now, if two threads happen each to emit a set of requests, they will both be in the same situation. They will both have to ask the OS about the progress of their respective IO commands. Neither of the threads will be blocked.
From the AIO framework's point of view, it is entirely possible for the OS to actually execute the requests before returning from the io_submit call, for either or all of the threads invoking it, but the API remains the same: userland threads still treat the API as an asynchronous one, obtaining a token for a future result when they post their requests, and using that token to get the real result.
In the specific case of linux AIO, the token is the context created beforehand, and the result check syscall is io_getevents, which reports an "event" (ie. a result) for each completed request.
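A minimal example of that pattern with the raw Linux kernel AIO syscalls (the file name and queue depth are arbitrary; the thin syscall wrappers are needed because glibc provides none for kernel AIO):

    #include <fcntl.h>
    #include <inttypes.h>
    #include <linux/aio_abi.h>   /* struct iocb, aio_context_t, IOCB_CMD_PREAD */
    #include <stdio.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <time.h>
    #include <unistd.h>

    static long io_setup(unsigned nr, aio_context_t *ctx)
        { return syscall(SYS_io_setup, nr, ctx); }
    static long io_submit(aio_context_t ctx, long nr, struct iocb **iocbpp)
        { return syscall(SYS_io_submit, ctx, nr, iocbpp); }
    static long io_getevents(aio_context_t ctx, long min_nr, long max_nr,
                             struct io_event *events, struct timespec *timeout)
        { return syscall(SYS_io_getevents, ctx, min_nr, max_nr, events, timeout); }

    int main(void)
    {
        aio_context_t ctx = 0;
        if (io_setup(128, &ctx) < 0) { perror("io_setup"); return 1; }

        int fd = open("data.bin", O_RDONLY);          /* file name is an example */
        if (fd < 0) { perror("open"); return 1; }

        static char buf[4096];
        struct iocb cb;
        memset(&cb, 0, sizeof cb);
        cb.aio_fildes     = fd;
        cb.aio_lio_opcode = IOCB_CMD_PREAD;
        cb.aio_buf        = (uint64_t)(uintptr_t)buf;
        cb.aio_nbytes     = sizeof buf;
        cb.aio_offset     = 0;

        struct iocb *list[1] = { &cb };
        if (io_submit(ctx, 1, list) != 1) { perror("io_submit"); return 1; }

        /* io_submit() has returned: this thread is free to do other work
           while the kernel services the read. */

        struct io_event ev;
        if (io_getevents(ctx, 1, 1, &ev, NULL) == 1)
            printf("read completed: %lld bytes\n", (long long)ev.res);
        return 0;
    }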
Regarding your example: is it possible that during the second syscall all the requests of the first thread get completed? I see no reason why this could never happen, but if both threads post 100 requests very close to each other it seems very unlikely. A more likely scenario is that several of the first thread's requests have completed by the time the second thread makes its own call to io_submit, but in any case that call will not block.

Asynchronous File I/O using threads in C

I'm trying to understand how asynchronous file operations are emulated using threads. I've found next to nothing to read about the subject.
Is it possible that:
a process uses a thread to open a regular file (HDD).
the parent gets the file descriptor from the thread, now it may close the thread.
the parent uses the file descriptor with a new thread, reading X bytes from the file.
the parent gets the file descriptor with the seek-position of the current file state.
the parent may repeat these operations, without the need to open, or seek, every time it wishes to "continue" reading a new chunk of the file?
This is just a wild guess of mine; I would appreciate it if anybody would shed more light on how this is emulated efficiently.
UPDATE:
By efficient I actually mean that I don't want the thread to "wait" from the moment the file has been opened. Think of an HTTP non-blocking daemon which serves a client a huge file: you want to use a thread to read chunks of the file without blocking the daemon, but you don't want to keep the thread busy "waiting" for the actual transfer to take place; you want to use the thread for other blocking operations of other clients.
To understand asynchronous I/O better, it may be helpful to think in terms of overlapping operations. That is, the number of pending operations (operations that have been started but not yet completed) can simultaneously go above one.
A diagram that explains asynchronous I/O might look like this: http://msdn.microsoft.com/en-us/library/aa365683(VS.85).aspx
If you are using the asynchronous I/O capabilities provided by the underlying operating system, then it is possible to read asynchronously from multiple files without spawning an equal number of threads.
If your underlying operating system does not provide asynchronous I/O, or if you decide not to use it (in other words, you wish to emulate asynchronous operation using only blocking I/O, the regular read/write provided by the operating system), then it is necessary to spawn as many threads as the number of simultaneous I/O operations. This is because when a thread makes a blocking I/O call, it cannot continue executing until the operation finishes. To start another blocking I/O operation, that operation has to be issued from another thread that is not already occupied.
When you open/create a file, fire up a thread, and store that thread id/pointer as your file handle.
Basically the thread does nothing except sit in a loop waiting for an "event"; a semaphore would be good here. When you want to do a read, you add the read command to a queue (remember to protect the queue addition with a critical section), return a unique id, and then increment the semaphore. If the thread is asleep it will now wake up, grab the first message off the queue and process it. When it has completed, you remove the command from the queue.
To poll whether a file read has completed you can simply check whether it is still in the command queue. If it's not there, the command has completed.
Furthermore, if you want to allow synchronous reads as well, you can wait, after sending the message through, for an "event" to be triggered by the completion. You then check whether the unique id is still in the queue, and if it isn't, you return control. If it still is, you go back to a wait state until the relevant unique id has been processed.
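A rough sketch of that scheme, written with POSIX primitives rather than the Win32 ones the answer implies (the names file_handle and read_cmd are illustrative, and enqueueing/shutdown are only outlined):

    #include <pthread.h>
    #include <semaphore.h>
    #include <unistd.h>

    struct read_cmd {
        struct read_cmd *next;
        unsigned         id;       /* the unique id handed back to the caller */
        void            *buf;
        size_t           len;
        off_t            offset;
        ssize_t          result;
    };

    struct file_handle {
        int              fd;
        pthread_t        thread;   /* stored as "the file handle" */
        pthread_mutex_t  lock;     /* the "critical section" around the queue */
        sem_t            pending;  /* counts queued commands */
        struct read_cmd *queue;    /* singly linked command queue */
    };

    static void *io_thread(void *arg)
    {
        struct file_handle *fh = arg;
        for (;;) {
            sem_wait(&fh->pending);              /* sleep until a command arrives */
            pthread_mutex_lock(&fh->lock);
            struct read_cmd *cmd = fh->queue;    /* peek at the head command */
            pthread_mutex_unlock(&fh->lock);

            cmd->result = pread(fh->fd, cmd->buf, cmd->len, cmd->offset);

            pthread_mutex_lock(&fh->lock);       /* remove only after completion,   */
            fh->queue = cmd->next;               /* so "still queued" == "not done" */
            pthread_mutex_unlock(&fh->lock);
        }
        return NULL;
    }

    /* Enqueueing (the poll test walks the queue looking for the id):
       lock, append cmd, unlock, sem_post(&fh->pending), return cmd->id;  */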
