How Blocking I/O Affects a Multithreaded Application/Service in Linux (C)

I am exploring several concepts for a web crawler in C on Linux. To decide whether I'll use blocking I/O, multiplexed I/O, AIO, some combination, etc., I especially need to know (I probably should discover it for myself practically via some test code, but for expediency I prefer to hear from others): when a call to I/O in blocking mode is made, is it the particular thread (assuming a multithreaded app/service) or the whole process that is blocked? Even more specifically, in a multithreaded (POSIX) app/service, can a thread dedicated to remote reads/writes block the entire process? If so, how can I unblock such a thread without terminating the entire process?
NB: Whether I should use blocking or non-blocking I/O is not really the question here.

Blocking calls block only the thread that made them, not the entire process.
Whether to use blocking I/O (with one socket per thread) or non-blocking I/O (with each thread managing multiple sockets) is something you are going to have to benchmark. But as a rule of thumb...
Linux handles multiple threads reasonably efficiently. So if you are only handling a few dozen sockets, using one thread for each is easy to code and should perform well. If you are handling hundreds of sockets, it is a closer call. And for thousands of sockets, you are almost certainly better off using one thread (or process) to manage large groups.
In the latter case, for optimal performance you probably want to use epoll, even though it is Linux-specific.
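For reference, here is a minimal sketch of what a single-threaded epoll loop might look like. The function name event_loop, the 4 KB buffer handling, and the assumption of an already-listening, non-blocking listen_fd are all illustrative, and error handling is mostly omitted:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

#define MAX_EVENTS 64

void event_loop(int listen_fd)
{
    int epfd = epoll_create1(0);
    if (epfd < 0) { perror("epoll_create1"); exit(1); }

    struct epoll_event ev = { .events = EPOLLIN, .data.fd = listen_fd };
    epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

    struct epoll_event events[MAX_EVENTS];
    for (;;) {
        int n = epoll_wait(epfd, events, MAX_EVENTS, -1); /* blocks only this thread */
        for (int i = 0; i < n; i++) {
            if (events[i].data.fd == listen_fd) {
                int client = accept(listen_fd, NULL, NULL); /* new connection */
                ev.events = EPOLLIN;
                ev.data.fd = client;
                epoll_ctl(epfd, EPOLL_CTL_ADD, client, &ev);
            } else {
                char buf[4096];
                ssize_t r = read(events[i].data.fd, buf, sizeof buf);
                if (r <= 0) {                               /* EOF or error: drop this socket */
                    epoll_ctl(epfd, EPOLL_CTL_DEL, events[i].data.fd, NULL);
                    close(events[i].data.fd);
                }
                /* else: hand buf off to the crawler's parsing/response code */
            }
        }
    }
}
```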

Related

User-level threads context switching: How to detect when a thread is blocking in C?

As the title suggests, is there a way in C to detect when a user-level thread running on top of a kernel-level thread (e.g., a pthread) has blocked, or is about to block, for I/O?
My use case is as follows: I need to execute tasks in a multithreaded environment (on top of kernel threads, e.g., pthreads). The tasks are basically user functions that can synchronize with each other and may perform blocking operations internally. I need to hide latency in my implementation, so I am exploring the idea of implementing the tasks as user-level threads for better control of their execution context: when a task blocks or synchronizes, I context-switch to other ready tasks (i.e., I implement my own scheduler for the user-level threads). That way, almost the full OS time quantum of each kernel thread can be put to use.
There used to be code that did this, for example GNU pth. It's generally been abandoned because it just doesn't work very well and we have much better options now. You have two choices:
1) If you have OS help, you can use the OS mechanisms. Windows provides OS help for this; IOCP dispatching uses it.
2) If you have no OS help, then you have to convert all blocking operations into non-blocking ones that call your dispatcher rather than blocking. So, for example, if someone calls socket, you intercept that call and set the socket non-blocking. When they call read, you intercept that call and if they get a "would block" indication, you arrange to resume when the operation might succeed and schedule another thread.
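A rough sketch of option 2, assuming your tasks call your own wrappers instead of the libc functions directly; my_read and my_yield_until_readable are hypothetical names, the latter standing in for whatever primitive your user-level scheduler provides:

```c
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Provided by your user-level scheduler (hypothetical): parks the current
 * user-level thread until fd is readable and runs another ready task. */
void my_yield_until_readable(int fd);

/* Your socket() wrapper would call this right after creating the fd. */
void set_nonblocking(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);
    fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}

/* The read() replacement tasks call instead of read(). */
ssize_t my_read(int fd, void *buf, size_t count)
{
    for (;;) {
        ssize_t r = read(fd, buf, count);
        if (r >= 0)
            return r;                        /* data arrived (or EOF) */
        if (errno != EAGAIN && errno != EWOULDBLOCK)
            return -1;                       /* a real error: report it */
        my_yield_until_readable(fd);         /* "would block": run another task */
    }
}
```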
You can look at GNU pth to see how you might make option 2 work. But be warned, GNU pth is full of reported bugs that have never been fixed since it was abandoned. It will give you an idea of how to implement things like mutexes and sleeps in a cooperative user-space threading environment. But don't actually use the code.

Non-blocking access to the file system

When writing a non-blocking program (handling multiple sockets) which at a certain point needs to open files using open(2), stat(2) them, or open directories using opendir(2), how can I ensure that these system calls do not block?
To me it seems that there's no other alternative than using threads or fork(2).
As Mel Nicholson replied, for everything file-descriptor based you can use select/poll/epoll. For everything else you can have a proxy thread-per-item (or a thread pool) with a small stack that converts (by means of the kernel scheduler) any synchronous blocking waits into select/poll/epoll-able asynchronous events, using an eventfd or a Unix pipe (where portability is required).
The proxy thread blocks until the operation completes and then writes to the eventfd or the pipe to wake up the select/poll/epoll loop.
Indeed there is no other method.
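Here is a sketch of that proxy-thread idea for stat(2), using an eventfd to make completion pollable. The struct and function names are purely illustrative and error handling is trimmed:

```c
#include <errno.h>
#include <pthread.h>
#include <stdint.h>
#include <sys/eventfd.h>
#include <sys/stat.h>
#include <unistd.h>

struct stat_request {
    const char *path;
    struct stat result;
    int         err;    /* 0 on success, errno otherwise */
    int         efd;    /* eventfd signalled on completion */
};

/* Proxy thread body: performs the blocking stat() and signals completion. */
static void *stat_proxy(void *arg)
{
    struct stat_request *req = arg;
    req->err = stat(req->path, &req->result) ? errno : 0;  /* blocks here */
    uint64_t one = 1;
    write(req->efd, &one, sizeof one);      /* wakes up the poll/epoll loop */
    return NULL;
}

/* Caller side: returns an fd to add to select/poll/epoll. When it becomes
 * readable, req->result (or req->err) is ready to be consumed. */
int submit_stat(struct stat_request *req, const char *path)
{
    req->path = path;
    req->efd  = eventfd(0, EFD_NONBLOCK);
    pthread_t tid;
    pthread_create(&tid, NULL, stat_proxy, req);
    pthread_detach(tid);
    return req->efd;
}
```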
Actually there is another kind of blocking that can't be dealt with other than by threads, and that is page faults. Those may happen in program code, program data, memory allocation or data mapped from files. It's almost impossible to avoid them (you can lock some pages in memory, but that's a privileged operation and would probably backfire by making the kernel do a poor job of memory management elsewhere). So:
You can't really weed out every last chance of blocking for a particular client, so don't bother with the likes of open and stat. The network will probably add larger delays than these functions anyway.
For optimal performance you should have enough threads so some can be scheduled if the others are blocked on page fault or similar difficult blocking point.
Also, if you need to read and process (or process and write) data while handling a network request, it's faster to access the file through memory mapping, but that is blocking and can't be made non-blocking. So modern network servers tend to stick with blocking calls for most things and simply have enough threads to keep the CPU busy while other threads are waiting for I/O.
The fact that most modern servers are multi-core is another reason why you need multiple threads anyway.
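For illustration, a sketch of the memory-mapped file access mentioned above; send_file_mapped is a made-up name, and the write() over the mapping is where the blocking page faults actually happen:

```c
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Copies a whole file to a socket via mmap(); the page faults taken while
 * write() walks the mapping are the unavoidable blocking described above. */
ssize_t send_file_mapped(int sock_fd, const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;

    struct stat st;
    fstat(fd, &st);

    char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);                              /* the mapping outlives the fd */
    if (data == MAP_FAILED)
        return -1;

    ssize_t sent = write(sock_fd, data, st.st_size);  /* may fault pages in */
    munmap(data, st.st_size);
    return sent;
}
```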
You can use the poll() system call to check any number of sockets for data using a single thread.
See the Linux man page, or man poll, for the details on your system.
open() and stat() will block in the thread they are called from on all POSIX-compliant systems, unless wrapped in some asynchronous mechanism (such as a separate thread or a fork).
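A bare-bones poll() loop over a set of sockets might look like the following sketch; the caller is assumed to supply the connected sockets, and error handling is trimmed:

```c
#include <poll.h>
#include <unistd.h>

/* Watches nsocks already-connected sockets from one thread. */
void poll_loop(int *socks, int nsocks)
{
    struct pollfd fds[nsocks];
    for (int i = 0; i < nsocks; i++) {
        fds[i].fd = socks[i];
        fds[i].events = POLLIN;             /* we only care about readability */
    }

    for (;;) {
        int ready = poll(fds, nsocks, -1);  /* blocks this thread until data arrives */
        if (ready < 0)
            break;
        for (int i = 0; i < nsocks; i++) {
            if (fds[i].revents & POLLIN) {
                char buf[4096];
                ssize_t r = read(fds[i].fd, buf, sizeof buf);
                (void)r;                    /* hand the data to the application here */
            }
        }
    }
}
```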

Non-blocking native file access - single-threaded daemon in C?

I've found out that native file access has no "non-blocking" mode. (Am I correct?)
I've been googling for daemons which are "non-blocking", and I've found one which achieved this by threading its file-access operations, so that the daemon itself won't block.
My question is, wouldn't threading and IPC'ing such operations be rather expensive? Wouldn't it make more sense to either:
A) Use a pre-spawned thread pool: simply give each client its own thread and let it block on whichever blocking operations it needs. Or,
B) In the case of blocking file access, use a relatively small buffer; that way it's still blocking, but one would assume a tiny buffer for multiple operations makes more sense than paying the price of threading each operation and IPC'ing it?
If you use threading, little IPC overhead is needed. You have the same memory space for all your threads, so a simple mutex or semaphore may be all you need. Now, if you are blocking on a mutex or semaphore too long or too often, why use async I/O in the first place?
As to the actual computation performed by threads doing I/O, they are waiting for the kernel to wake them up most of the time, so I wouldn't worry.
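To illustrate how little machinery inter-thread hand-off needs, here is a sketch of a result slot shared between an I/O thread and a consumer, guarded by a mutex and condition variable. Names and buffer size are illustrative, and the caller is assumed to initialize the struct with len = -1 and the usual PTHREAD_*_INITIALIZER macros:

```c
#include <pthread.h>
#include <string.h>
#include <sys/types.h>

/* One result slot shared by an I/O thread and a consumer. The caller is
 * expected to initialize lock/done with PTHREAD_MUTEX_INITIALIZER /
 * PTHREAD_COND_INITIALIZER and set len to -1 ("not ready"). */
struct io_result {
    pthread_mutex_t lock;
    pthread_cond_t  done;
    char            buf[4096];
    ssize_t         len;
};

/* I/O thread: publish the data and wake whoever is waiting. */
void publish_result(struct io_result *r, const char *data, ssize_t len)
{
    pthread_mutex_lock(&r->lock);
    memcpy(r->buf, data, len);
    r->len = len;
    pthread_cond_signal(&r->done);
    pthread_mutex_unlock(&r->lock);
}

/* Consumer thread: sleep until the I/O thread has published something. */
ssize_t wait_for_result(struct io_result *r)
{
    pthread_mutex_lock(&r->lock);
    while (r->len < 0)
        pthread_cond_wait(&r->done, &r->lock);
    ssize_t len = r->len;
    pthread_mutex_unlock(&r->lock);
    return len;
}
```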
If your application is going to revolve around reading files and other I/O sources, you may want to read up on Reactor patterns, and event-driven programming.
Also, you mentioned a daemon, and servicing clients. If the service you provide is reading files, the computational cost of spawning a new thread to serve each client is minimal, since each individual thread will take "long" to complete requests, and block most of the time anyway. There may be a memory problem if your client count is in the thousands, but otherwise I think you'll do okay.
Give us a little more detail about what you want to do, maybe there are more straightforward ways.

Are POSIX pipes lightweight?

In a Linux application I'm using pipes to pass information between threads.
The idea behind using pipes is that I can wait for multiple pipes at once using poll(2). That works well in practice, and my threads are sleeping most of the time. They only wake up if there is something to do.
In user space a pipe looks just like two file descriptors. Now I wonder how many resources such pipes use on the OS side.
Btw: in my application I only send single bytes every now and then. Think of my pipes as simple message queues that allow me to wake up receiving threads and tell them to send some status data or to terminate.
No, I would not consider pipes "lightweight", but that doesn't necessarily mean they're the wrong answer for your application either.
Sending a byte over a pipe requires a minimum of 3 system calls (write, poll, read). Using an in-memory queue and pthread operations (mutex_lock, cond_signal) involves much less overhead. Open file descriptors definitely do consume kernel resources; that's why processes are typically limited to a modest number of open files by default (often 256 or 1024), not that the limit can't be raised where appropriate.
Still, the pipe/poll solution for inter-thread communication does have advantages too: particularly if you need to wait for input from a combination of sources (network + other threads).
As you are using Linux you can investigate and compare pipe performance with eventfds. They are technically faster and lighter-weight, but you'll be lucky to actually see the gains in practice.
http://www.kernel.org/doc/man-pages/online/pages/man2/eventfd.2.html
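A small sketch of how an eventfd can stand in for a wake-up pipe in a poll()-based design (function names are illustrative):

```c
#include <poll.h>
#include <stdint.h>
#include <sys/eventfd.h>
#include <unistd.h>

/* One fd instead of a pipe's two; the counter coalesces repeated wake-ups. */
int make_wakeup_fd(void)
{
    return eventfd(0, EFD_NONBLOCK);
}

/* Sender thread: nudge the receiver. */
void wake(int efd)
{
    uint64_t one = 1;
    write(efd, &one, sizeof one);           /* adds 1 to the eventfd counter */
}

/* Receiver thread: poll the eventfd alongside any other fds, then drain it. */
void wait_for_wakeup(int efd)
{
    struct pollfd pfd = { .fd = efd, .events = POLLIN };
    poll(&pfd, 1, -1);

    uint64_t count;
    read(efd, &count, sizeof count);        /* resets the counter to 0 */
}
```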
Measure and you'll know. Full processes with pipes are sufficiently lightweight for lots of applications. Other applications require something lighter-weight, like OS threads (pthreads being the popular choice for many Unix apps), or super-lightweight, like a user-level threads package that never goes into kernel mode except to handle I/O. While the only way to know for sure is to measure, pipes are probably good enough for up to a few tens of threads, whereas you probably want user-level threads once you get to a few tens of thousands of threads. Exactly where the boundaries should be drawn with today's code, I don't know. If I wanted to know, I would measure :-)

Questions about multithreading for sockets/TCP connections

I have a server that connects to multiple clients using TCP/IP connections, written in C on Unix. Since it won't have more than 20 connections at a time, I figured I would use a thread per connection/socket. But the problem is writing to the sockets, as I'll be sending user-prompted messages to clients. Once each socket is handled by a thread, how do I interact with that thread to write to its socket? Should each thread just read from its socket while I write to the sockets from the main program? I'm not sure that's a good way to go about it.
My rule of thumb is that any given socket should only be operated on by a single thread(*). So if you spawn a separate I/O thread for each socket, and your main thread wants something written to an I/O thread's socket, then the main thread should send that data to the I/O thread, whereupon the I/O thread can write it to the socket.
Of course, this means you need to have a good communications method between the main thread and the I/O thread; which you could do by spawning a socket-pair for each I/O thread and having the I/O threads select()/poll() on their end of the socket-pair (to handle data coming from the main thread) as well as on their network socket.
But once you've done that, you're dealing with complexity of using select()/poll() AND multithreading, which is a lot of complexity overhead. So unless you absolutely need multithreading for some reason, I agree with the previous posters -- it's better to just handle all the sockets in a single thread, via select() or poll().
(*) It's possible to have multiple threads reading/writing to the same socket at the same time, but it's error-prone. In particular, startup and shutdown sequences can be tricky to get 100% right. That's why I try to avoid 'sharing' a given socket amongst multiple threads.
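To make the socket-pair approach concrete, here is a sketch of a per-connection I/O thread that polls both its network socket and its end of a socketpair(2) fed by the main thread. The struct, the names, and the 4 KB buffer are illustrative, and error handling is trimmed:

```c
#include <poll.h>
#include <sys/socket.h>
#include <unistd.h>

struct io_thread_args {
    int net_fd;     /* the TCP connection to the client */
    int cmd_fd;     /* this thread's end of the socketpair */
};

/* Per-connection thread: the only thread that ever touches net_fd. */
void *io_thread(void *arg)
{
    struct io_thread_args *a = arg;
    struct pollfd fds[2] = {
        { .fd = a->net_fd, .events = POLLIN },
        { .fd = a->cmd_fd, .events = POLLIN },
    };

    for (;;) {
        if (poll(fds, 2, -1) < 0)
            break;

        char buf[4096];
        if (fds[0].revents & POLLIN) {              /* data from the client */
            ssize_t r = read(a->net_fd, buf, sizeof buf);
            if (r <= 0)
                break;                              /* client closed or error */
            /* ... handle the client's data ... */
        }
        if (fds[1].revents & POLLIN) {              /* message from the main thread */
            ssize_t r = read(a->cmd_fd, buf, sizeof buf);
            if (r > 0)
                write(a->net_fd, buf, r);           /* this thread does the socket write */
        }
    }
    close(a->net_fd);
    close(a->cmd_fd);
    return NULL;
}

/* Main thread side: socketpair(AF_UNIX, SOCK_STREAM, 0, sv) hands sv[1] to
 * the thread via the args struct and keeps sv[0]; writing to sv[0] asks the
 * I/O thread to forward that data to the client. */
```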
Sounds like you'd probably be better off with a single thread multiplexing the sockets (using select, poll, etc.). This will avoid the race conditions and locking requirements which would otherwise make the program more difficult to write.
Unless you are doing significant processor-intensive work, or waiting for IO on behalf of these clients, you'll get no benefit from using threads anyway, but the race conditions will still be there.
So I'd say, get a working implementation using a single thread; then, if performance testing shows it is lacking, refactor it to use multithreading if that seems like the best option to beat the performance problems (profiling it as you go, of course).
Having the main thread write to the sockets is fine, you only need to worry about having multiple threads writing to a socket at the same time.
However, I'd test the performance of using a single thread and select/poll before bothering with the multithreaded approach.

Resources