I have an application that I'm working on that requires a couple of secondary threads, and each will be responsible for a number of file handles (at least 1, upwards of 10). The file handles are not shared amongst the threads, so I don't have to worry about one secondary thread blocking the other when selecting to see what is ready to read/write. What I want to be sure of is that neither of the secondary threads will cause the main thread to stop executing while the select/pselect call is executing.
I would imagine that this is not a problem - one would expect such things to be done in, say, a web server - but I couldn't find anything that specifically said "yes, you can do this" when I Googled. Am I correct in my assumption that this will not cause any problems?
For clarification, what I have looks something like:
Main thread of execution ( select() loop handling incoming command messages and outgoing responses )
Secondary thread #1 ( select() loop providing a service )
Secondary thread #2 ( select() loop providing another service )
As I previously mentioned, none of the file handles are shared amongst the threads - they are created, used, and destroyed within an individual thread, with the other threads ignorant of their existence.
No, you don't have to worry about them blocking the main thread. I have used select() in multiple threads in various projects. As long as they have distinct fd_sets, you're fine, and each one can be used as an independent event loop.
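For illustration, here is a minimal sketch (not the poster's actual code) of the kind of independent per-thread select() loop described above; my_fds stands for the descriptors owned exclusively by that thread:

```cpp
// Minimal sketch of one thread's independent select() loop.
// 'my_fds' is a hypothetical list of descriptors owned exclusively by this thread.
#include <sys/select.h>
#include <vector>

void service_loop(const std::vector<int>& my_fds) {
    for (;;) {
        fd_set readfds;
        FD_ZERO(&readfds);
        int maxfd = -1;
        for (int fd : my_fds) {
            FD_SET(fd, &readfds);
            if (fd > maxfd) maxfd = fd;
        }
        // Blocks only this thread; other threads' select() calls are unaffected.
        int ready = select(maxfd + 1, &readfds, nullptr, nullptr, nullptr);
        if (ready < 0) break;                 // handle EINTR/errors as appropriate
        for (int fd : my_fds) {
            if (FD_ISSET(fd, &readfds)) {
                // read/handle data on fd
            }
        }
    }
}
```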
Isn't select supposed to block the whole process?
Have you tried to set the nonblocking mode on the socket?
Also, see the select_tut man page for some help.
Here's a relevant section from the select_tut manpage:
So what is the point of select()? Can't I just read and write to my descriptors whenever I want? The point of select() is that it watches multiple descriptors at the same time and properly puts the process to sleep if there is no activity.
I'm experimenting with a fictional server/client application where the client side launches request threads over a (possibly very long) period of time, with small delays in between. Each request thread writes the contents of the request to the 'public' fifo (known by all client and server threads) and receives the server's answer on a 'private' fifo that is created by the server with a name that is implicitly known (in my case, it's 'tmp/processId.threadId').
The public fifo is opened once in the main (request thread spawner) thread so that all request threads may write to it.
Since I don't care about the return values of my request threads, and I can't know in advance how many request threads I will create (so that I could store their ids and join them later), I opted to create the threads in a detached state, exit the main thread when the specified timeout expires, and let the already spawned threads live on their own.
All of this is fine, however, I'm not closing the public fifo anywhere after all spawned request threads finish: after all, I did exit the main thread without waiting. Is this a small kind of disaster, in which case I absolutely need to count the active threads (perhaps with a condition variable) and close the fifo when it's 0? Should I just accept that the file is not explicitly getting closed, and let the OS do it?
All of this is fine, however, I'm not closing the public fifo anywhere after all spawned request threads finish: after all, I did exit the main thread without waiting. Is this a small kind of disaster, in which case I absolutely need to count the active threads (perhaps with a condition variable) and close the fifo when it's 0? Should I just accept that the file is not explicitly getting closed, and let the OS do it?
Supposing that you genuinely mean a FIFO, such as might be created via mkfifo(), no, it's not a particular issue that the process does not explicitly close it. If any open handles on it remain when the process terminates, they will be closed. Depending on the nature of the termination, it might be that pending data are not flushed, but that is of no consequence if the FIFO is used only for communication among the threads of one process.
But it possibly is an issue that the process does not remove the FIFO. A FIFO has filesystem persistence. Once you create one, it lives until it no longer has any links to the filesystem and is no longer open in any process (like any other file). Merely closing it does not cause it to be removed. Aside from leaving clutter on your filesystem, this might cause issues for concurrent or future runs of the program.
If indeed you are using your FIFOs only for communication among the threads of a single process, then you would probably be better served by pipes.
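As a minimal sketch of that suggestion (assumed setup, not the poster's actual code), an anonymous pipe gives the threads of one process a channel with no filesystem entry to clean up afterwards:

```cpp
// Sketch: an anonymous pipe replacing the public fifo for communication
// between threads of a single process. Nothing has to be unlinked later.
#include <cstdio>
#include <thread>
#include <unistd.h>

int main() {
    int fds[2];
    if (pipe(fds) != 0) { perror("pipe"); return 1; }

    std::thread requester([w = fds[1]] {
        const char msg[] = "request";
        write(w, msg, sizeof msg);              // request thread writes
    });

    char buf[64];
    ssize_t n = read(fds[0], buf, sizeof buf);  // server/consumer side reads
    if (n > 0) printf("got: %s\n", buf);

    requester.join();
    close(fds[0]);
    close(fds[1]);
}
```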
I managed to solve this issue by setting up a cleanup routine with atexit(), which is called when the process terminates, i.e., when all threads have finished their work.
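For reference, a minimal sketch of what such an atexit() cleanup routine might look like (g_public_fifo_fd and PUBLIC_FIFO_PATH are hypothetical names, not from the original post):

```cpp
// Sketch of an atexit() cleanup routine that closes and removes the public fifo.
// The names used here are illustrative assumptions.
#include <cstdlib>
#include <unistd.h>

static int g_public_fifo_fd = -1;
static const char* PUBLIC_FIFO_PATH = "/tmp/public_fifo";

static void cleanup() {
    if (g_public_fifo_fd >= 0) close(g_public_fifo_fd);
    unlink(PUBLIC_FIFO_PATH);   // remove the fifo from the filesystem
}

// Register once, early in main():
// std::atexit(cleanup);
```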
I do understand what an APC is, how it works, and how Windows uses it, but I don't understand when I (as a programmer) should use QueueUserAPC instead of, say, a fiber or a thread-pool thread.
When should I choose to use QueueUserAPC, and why?
QueueUserAPC is a neat tool that can often be a shortcut for some tasks that are otherwise handled with synchronization objects. It allows you to tell a particular thread to do something whenever it is convenient for that thread (i.e. when it finishes its current work and starts waiting on something).
Let's say you have a main thread and a worker thread. The worker thread opens a socket to a file server and starts downloading a 10GB file by calling recv() in a loop. The main thread wants to have the worker thread do something else in its downtime while it is waiting for net packets; it can queue a function to be run on the worker while it would otherwise be waiting and doing nothing.
You have to be careful with APCs, because, as in the scenario I mentioned, you would not want to make another blocking WinSock call (which would result in undefined behavior). You really have to watch for good uses of this functionality, because you can do the same thing in other ways - for example, by having the other thread check an event every time it is about to go to sleep, rather than giving it a function to run while it is waiting. Obviously the APC would be simpler in this scenario.
It is like when you have a call desk employee sitting and waiting for phone calls, and you give that person little tasks to do during their downtime. "Here, solve this Rubik's cube while you're waiting." Although, when a phone call comes in, the person would not put down the Rubik's cube to answer the phone (the APC has to return before the thread can go back to waiting).
QueueUserAPC is also useful if there is a single thread (Thread A) that is in charge of some data structure, and you want to perform some operation on the data structure from another thread (Thread B), but you don't want to have the synchronization overhead / complexity of trying to share that data between two threads. By having Thread B queue the operation to run on Thread A, which solely maintains that structure, you are executing any arbitrary function you want on that data without having to worry about synchronization.
It is just another tool, like a thread pool. However, with a thread pool you cannot send a task to a particular thread; you have no control over where the work is done. When you queue up a task, it may end up creating a whole new thread, and you may queue two tasks that get done simultaneously on two different threads. With QueueUserAPC, you are guaranteed that the tasks get done in order and on the thread you designate.
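Here is a minimal sketch of the mechanism (names are illustrative, not from the original answer): the main thread queues a function onto a designated worker, and the worker only runs it while it is in an alertable wait such as SleepEx or WaitForSingleObjectEx.

```cpp
// Sketch: queue an APC onto a particular worker thread.
#include <windows.h>
#include <cstdio>

static VOID CALLBACK DoExtraWork(ULONG_PTR param) {
    // Runs on the worker thread, only while that thread is in an alertable wait.
    printf("APC running on worker thread, param = %lu\n", (unsigned long)param);
}

static DWORD WINAPI WorkerMain(LPVOID) {
    for (;;) {
        // TRUE makes the wait alertable, so queued APCs get a chance to run here.
        SleepEx(INFINITE, TRUE);
    }
    return 0;   // not reached
}

int main() {
    HANDLE worker = CreateThread(nullptr, 0, WorkerMain, nullptr, 0, nullptr);
    QueueUserAPC(DoExtraWork, worker, 42);  // runs on 'worker', in the order queued
    Sleep(100);                             // demo only: give the APC time to run
    CloseHandle(worker);                    // process exit ends the worker thread
}
```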
In my application, I'm starting to use the select() call in multiple locations, monitoring different things in my process (network connections, IPCs, messaging, files...).
All the calls are using their own set of file descriptors, meaning no descriptor is used twice across selects.
This means that sometimes I have something like 5 select() calls blocking in different threads.
Is there a performance penalty of using select() multiple times in different threads, instead of, maybe, using only one call and dispatching results to the corresponding threads?
Actually, is there a limit to the number of select() calls pending at one time?
Is there a tool to measure this?
Since the application is likely to grow even bigger, I suspect that at some point, if this begins to get problematic, I'll have to code some kind of centralized select() which gathers all FDs to monitor and notifies the client threads when data is ready to be read or written.
So I figured I'd better ask before...
There is unlikely to be any performance difference you will notice.
Inside the kernel, select adds your thread to a "wait queue" for each descriptor on which you are selecting and puts it to sleep. If you select on n descriptors, your thread gets added to n wait queues. When something pollable happens to the descriptor (e.g., data arrives on the socket), all threads on that wait queue get woken up.
Selecting on a huge number of descriptors would add your thread to a huge number of wait queues. Once woken up, your thread will have to be removed from all of those wait queues, including the ones on which there was no activity. So on this side, there might be some slight benefit to waiting on a small set of descriptors in multiple threads rather than a huge set of descriptors in one thread.
On the other hand, select itself requires that the kernel loop through all possible descriptors to see which ones are members of your fd_set. So on this side, there might be some slight advantage to having just one thread make the select call...
Overall I would guess it is a wash.
If you are going to deal with a lot of descriptors, you are better off using a more scalable (albeit non-portable) mechanism like epoll. With epoll, multiple threads each handling a pool of descriptors should scale very nicely.
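As a minimal sketch of that suggestion (Linux-specific, names illustrative), each thread could run an epoll loop over its own pool of descriptors; unlike select, only the ready descriptors are returned on each wait:

```cpp
// Sketch: an epoll-based event loop that one thread runs over its own descriptors.
#include <sys/epoll.h>
#include <unistd.h>
#include <vector>

void epoll_loop(const std::vector<int>& my_fds) {
    int ep = epoll_create1(0);
    if (ep < 0) return;

    for (int fd : my_fds) {
        epoll_event ev{};
        ev.events = EPOLLIN;
        ev.data.fd = fd;
        epoll_ctl(ep, EPOLL_CTL_ADD, fd, &ev);   // register once, not on every wait
    }

    epoll_event events[64];
    for (;;) {
        int n = epoll_wait(ep, events, 64, -1);  // only ready descriptors come back
        if (n < 0) break;                        // handle EINTR/errors as needed
        for (int i = 0; i < n; ++i) {
            int fd = events[i].data.fd;
            // read/handle data on fd
            (void)fd;
        }
    }
    close(ep);
}
```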
The select call should be handled by the OS, and, as you said, it's blocking, not polling - so it won't cause a performance penalty in your application. I also do not believe there will be any limit on using it aside from any limit on the number of file descriptors your OS can have open in general, which has nothing to do with select.
I'm trying to understand how asynchronous file operations are emulated using threads. I've found next to nothing to read about the subject.
Is it possible that:
a process uses a thread to open a regular file (HDD).
the parent gets the file descriptor from the thread; now it may close (terminate) the thread.
the parent uses the file descriptor with a new thread, reading X bytes from the file.
the parent gets the file descriptor back, with the seek position reflecting the file's current state.
the parent may repeat these operations, without the need to open, or seek, every time it wishes to "continue" reading a new chunk of the file?
This is just a wild guess of mine; I would appreciate it if anybody could shed more light on how this is emulated efficiently.
UPDATE:
By efficient I actually mean that I don't want the thread to "wait" from the moment the file has been opened. Think of a non-blocking HTTP daemon which serves a client with a huge file: you want to use a thread to read chunks of the file without blocking the daemon, but you don't want to keep the thread busy while "waiting" for the actual transfer to take place; you want to use the thread for other blocking operations of other clients.
To understand asynchronous I/O better, it may be helpful to think in terms of overlapping operations. That is, the number of pending operations (operations that have been started but not yet completed) can simultaneously go above one.
A diagram that explains asynchronous I/O might look like this: http://msdn.microsoft.com/en-us/library/aa365683(VS.85).aspx
If you are using the asynchronous I/O capabilities provided by the underlying Operating System, then it is possible to read from multiple files asynchronously without spawning an equal number of threads.
If your underlying Operating System does not provide asynchronous I/O, or if you decide not to use it - in other words, if you wish to emulate asynchronous operation using only blocking I/O (the regular Read/Write provided by the Operating System) - then it is necessary to spawn as many threads as there are simultaneous I/O operations. This is because when a thread makes a blocking I/O call, it cannot continue executing until the operation finishes. In order to start another blocking I/O operation, that operation has to be issued from another thread that is not already occupied.
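A minimal sketch of that emulation, assuming a hypothetical file and offsets: the blocking call is handed to its own helper thread (here via std::async), and the caller collects the result later instead of blocking on the read itself.

```cpp
// Sketch: emulate one asynchronous read by running the blocking pread()
// on its own thread; the caller waits on the future only when it needs the data.
#include <fcntl.h>
#include <unistd.h>
#include <future>
#include <vector>

std::future<std::vector<char>> async_read(int fd, off_t offset, size_t len) {
    return std::async(std::launch::async, [=] {
        std::vector<char> buf(len);
        ssize_t n = pread(fd, buf.data(), len, offset);  // blocks the helper thread only
        buf.resize(n > 0 ? static_cast<size_t>(n) : 0);
        return buf;
    });
}

// Usage (hypothetical file name):
// int fd = open("bigfile.dat", O_RDONLY);
// auto fut = async_read(fd, 0, 4096);
// ... do other work while the read is pending ...
// std::vector<char> chunk = fut.get();   // collect the result when needed
```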
When you open/create a file, fire up a thread. Now store that thread id/ptr as your file handle.
Basically the thread will do nothing except sit in a loop waiting for an "event". A semaphore would be good here. When you want to do a read, you add the read command to a queue (remember to protect the queue insertion with a critical section), return a unique id, and then increment the semaphore. If the thread is asleep, it will now wake up, grab the first message off the queue, and process it. When it has completed, you remove the command from the queue.
To poll whether a file read has completed you can simply check to see if it's in the command queue. If it's not there, then the command has completed.
Furthermore, if you want to allow synchronous reads as well, you can wait, after sending the message through, for an "event" to be triggered by the completion. You then check to see if the unique id is in the queue, and if it isn't, you return control. If it still is, then you go back to a wait state until the relevant unique id has been processed.
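A minimal sketch of that per-file worker (names are illustrative; a mutex plus condition variable stands in for the raw semaphore mentioned above):

```cpp
// Sketch of the command-queue worker described above.
#include <condition_variable>
#include <deque>
#include <mutex>

struct ReadCommand { unsigned id; /* offset, length, destination buffer, ... */ };

struct FileWorker {
    std::mutex m;
    std::condition_variable cv;
    std::deque<ReadCommand> queue;
    bool stop = false;

    // Client side: enqueue a command and wake the worker (the "semaphore increment").
    void post(const ReadCommand& cmd) {
        { std::lock_guard<std::mutex> lock(m); queue.push_back(cmd); }
        cv.notify_one();
    }

    // Poll for completion: a command is finished once it is no longer queued.
    bool pending(unsigned id) {
        std::lock_guard<std::mutex> lock(m);
        for (const auto& c : queue) if (c.id == id) return true;
        return false;
    }

    // Body of the worker thread created when the file is opened.
    void run() {
        std::unique_lock<std::mutex> lock(m);
        while (!stop) {
            cv.wait(lock, [&] { return stop || !queue.empty(); });
            while (!queue.empty()) {
                ReadCommand cmd = queue.front();
                lock.unlock();
                // ... perform the blocking read described by 'cmd' here ...
                lock.lock();
                queue.pop_front();  // removing it marks the command as completed
            }
        }
    }
};
```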
I have a worker thread that is listening to a TCP socket for incoming traffic, and buffering the received data for the main thread to access (let's call this socket A). However, the worker thread also has to do some regular operations (say, once per second), even if there is no data coming in. Therefore, I use select() with a timeout, so that I don't need to keep polling. (Note that calling receive() on a non-blocking socket and then sleeping for a second is not good: the incoming data should be immediately available for the main thread, even though the main thread might not always be able to process it right away, hence the need for buffering.)
Now, I also need to be able to signal the worker thread to do some other stuff immediately; from the main thread, I need to make the worker thread's select() return right away. For now, I have solved this as follows (approach basically adopted from here and here):
At program startup, the worker thread creates for this purpose an additional socket of the datagram (UDP) type, and binds it to some random port (let's call this socket B). Likewise, the main thread creates a datagram socket for sending. In its call to select(), the worker thread now lists both A and B in the fd_set. When the main thread needs to signal, it sendto()'s a couple of bytes to the corresponding port on localhost. Back in the worker thread, if B remains in the fd_set after select() returns, then recvfrom() is called and the bytes received are simply ignored.
This seems to work very well, but I can't say I like the solution, mainly because it requires binding an extra port for B, and also because it adds several additional socket API calls which I guess may fail – and I don't really feel like figuring out the appropriate action for each of those cases.
I think ideally, I would like to call some function which takes A as input, and does nothing except makes select() return right away. However, I don't know such a function. (I guess I could for example shutdown() the socket, but the side effects are not really acceptable :)
If this is not possible, the second best option would be creating a B which is much dummier than a real UDP socket, and doesn't really require allocating any limited resources (beyond a reasonable amount of memory). I guess Unix domain sockets would do exactly this, but: the solution should not be much less cross-platform than what I currently have, though some moderate amount of #ifdef stuff is fine. (I am targeting mainly for Windows and Linux – and writing C++ by the way.)
Please don't suggest refactoring to get rid of the two separate threads. This design is necessary because the main thread may be blocked for extended periods (e.g., doing some intensive computation – and I can't start periodically calling receive() from the innermost loop of calculation), and in the meanwhile, someone needs to buffer the incoming data (and due to reasons beyond what I can control, it cannot be the sender).
Now that I was writing this, I realized that someone is definitely going to reply simply "Boost.Asio", so I just had my first look at it... Couldn't find an obvious solution, though. Do note that I also cannot (easily) affect how socket A is created, but I should be able to let other objects wrap it, if necessary.
You are almost there. Use the "self-pipe" trick: open a pipe, add its read end to your select() read fd_set, and write a byte to its write end from the main thread to unblock the worker thread. It is portable across POSIX systems.
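A minimal sketch of the self-pipe trick (names like wake_pipe and socket_a are illustrative):

```cpp
// Sketch: the self-pipe trick. The worker selects on its socket plus the pipe's
// read end; the main thread writes one byte to the pipe's write end to wake it.
#include <sys/select.h>
#include <unistd.h>
#include <algorithm>

int wake_pipe[2];                  // [0] = read end (worker), [1] = write end (main)

void worker_wait(int socket_a) {
    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(socket_a, &readfds);
    FD_SET(wake_pipe[0], &readfds);
    int maxfd = std::max(socket_a, wake_pipe[0]);

    if (select(maxfd + 1, &readfds, nullptr, nullptr, nullptr) > 0) {
        if (FD_ISSET(wake_pipe[0], &readfds)) {
            char c;
            read(wake_pipe[0], &c, 1);     // drain the wake-up byte
            // ... handle the "do something now" request ...
        }
        if (FD_ISSET(socket_a, &readfds)) {
            // ... recv() and buffer the incoming data ...
        }
    }
}

void wake_worker() {               // called from the main thread
    char c = 1;
    write(wake_pipe[1], &c, 1);
}

// Setup (once, before the threads start): pipe(wake_pipe);
```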
I have seen a variant of a similar technique for Windows in one system (in fact used together with the method above, separated by #ifdef WIN32). There, unblocking was achieved by adding a dummy (unbound) datagram socket to the fd_set and then closing it. The downside is that, of course, you have to re-open it every time.
However, in the aforementioned system, both of these methods are used rather sparingly, and only for unexpected events (e.g., signals and termination requests). The preferred method is still a variable timeout to select(), depending on how soon something is scheduled for the worker thread.
Using a pipe rather than a socket is a bit cleaner, as there is no possibility of another process getting hold of it and messing things up.
Using a UDP socket definitely creates the potential for stray packets to come in and interfere.
An anonymous pipe will never be available to any other process (unless you give it to it).
You could also use signals, but in a multithreaded program you'll want to make sure that all threads except for the one you want have that signal masked.
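A sketch of that signal-based variant (SIGUSR1 and pthread_kill are assumptions for illustration, not from the original post): the signal is blocked in every thread and only unblocked atomically inside the worker's pselect(), so no other thread can be interrupted by it.

```cpp
// Sketch: wake a worker's pselect() with a signal that is masked everywhere else.
#include <signal.h>
#include <pthread.h>
#include <sys/select.h>

static void on_wakeup(int) { /* no-op; pselect() returns with EINTR */ }

void setup_in_main_before_spawning_threads() {
    struct sigaction sa{};
    sa.sa_handler = on_wakeup;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGUSR1, &sa, nullptr);

    sigset_t block;
    sigemptyset(&block);
    sigaddset(&block, SIGUSR1);
    pthread_sigmask(SIG_BLOCK, &block, nullptr);   // mask is inherited by new threads
}

void worker_wait(int socket_a) {
    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(socket_a, &readfds);

    sigset_t wait_mask;
    pthread_sigmask(SIG_SETMASK, nullptr, &wait_mask);  // current mask...
    sigdelset(&wait_mask, SIGUSR1);                     // ...minus SIGUSR1

    // SIGUSR1 can only be delivered while pselect() is waiting.
    pselect(socket_a + 1, &readfds, nullptr, nullptr, nullptr, &wait_mask);
}

// Main thread wakes the worker with: pthread_kill(worker_tid, SIGUSR1);
```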
On Unix it is straightforward using a pipe. If you are on Windows and want to keep using select() so that your code stays compatible with Unix, the trick of creating an unbound UDP socket and closing it works well and is easy. But you have to make it thread-safe.
The only way I found to make this thread-safe is to close and recreate the socket in the same thread in which the select statement is running. Of course this is difficult if the thread is blocked on the select. This is where the Windows call QueueUserAPC comes in. When Windows is blocking in the select statement, the thread can handle Asynchronous Procedure Calls. You can schedule one from a different thread using QueueUserAPC. Windows interrupts the select, executes your function in the same thread, and then continues with the select statement. You can now close the socket and recreate it in your APC method. Guaranteed thread-safe, and you will never lose a signal.
To keep it simple:
Save the socket handle in a global variable, then close that global socket; the select() will return immediately: closesocket(g_socket);