How to avoid multithreading - C

I came across this question and was very impressed by this answer.
I would really like to follow the advice from that answer, but I cannot imagine how to do it. How can I avoid multithreading?
There are often situations that need to deal with different things concurrently (different hardware resources or networking, for example) but at the same time need to access shared data (like configurations, data to work on, and so on).
How could this be solved single-threaded without using any kind of huge state machine or event loop?
I know that this is a huge topic, which cannot be answered as a whole on a platform like Stack Overflow. I think I really should read the book recommended in the mentioned answer, but for now I would love to read some input here.
Maybe it is worth noting that I am interested in solutions in C. Higher-level languages like Java and C++, and especially frameworks like Qt, simplify this a lot, but what about pure C?
Any input is much appreciated. Thank you all in advance.

You already mentioned event loops, but I still think those provide an excellent alternative to multi-threading for many applications, and also serve as a good base when adding multi-threading later, if warranted.
Say you have an application that needs to handle user input, data received on a socket, timer events, and signals for example:
One multi-threaded design would be to spawn different threads to wait on the different event sources and have them synchronize their actions on some global state as events arrive. This often leads to messy synchronization and termination logic.
A single-threaded design would be to have a unified event loop that receives all types of events and handles them in the same thread as they arrive. On *nix systems, this can be accomplished using e.g. select(2), poll(2), or epoll(7) (the latter Linux-specific). Recent Linux versions also provide signalfd(2), timerfd (timerfd_create(2)), and eventfd(2) for cleanly fitting additional event types into this model, and on other unices you can use various tricks involving e.g. pipe(2)s to signal events. A nice library that abstracts much of this away is libevent, which also works on other platforms.
Besides not having to deal with multi-threading right away, the event loop approach also cleanly lends itself to adding multi-threading later if needed for performance or other reasons: you simply let the event handler spawn threads for certain events. Having all event handling in a single location often greatly simplifies application design.
When you do need multiple threads (or processes), it helps to have narrow and well-tested interfaces between them, using e.g. synchronized queues. An alternative design for the event handler would be to have event-generating threads push events to an event queue from which the event handler then reads and dispatches them. This cleanly separates various parts of the program.

Read more about continuations and continuation-passing style (and the CPS transform).
A CPS transform can be a systematic way to "mimic" multi-threading.
You could have a look at CPC (Continuation Passing C, by Juliusz Chroboczek and Gabriel Kerneis), which is a source-to-source C transformer. You could also read Appel's classic book Compiling with Continuations and Queinnec's book Lisp in Small Pieces.
Also read more about event loops, callbacks, closures, call stacks, and tail calls. These notions are all related to your concerns.
See also the (nearly obsolete) setcontext(3) functions on Linux, and idle functions in event loops.

You can implement concurrent tasks using coroutines. You then have to explicitly pass control (the CPU) to another coroutine; it won't be done automatically by a timer interrupt after a small delay.
http://en.wikipedia.org/wiki/Coroutine

Related

What to do instead of async/await in a systems language

I have a concrete problem that in a higher-level language I would solve using async/await: we have a blocking system/hardware/network call that takes several seconds to complete. We would like it to run in the background while we make several more such calls in parallel.
I have thought of a couple of solutions and there might be better ones than these:
start a thread and signal a condition variable/semaphore once it's done;
provide a callback that is executed when the call finishes (old JavaScript style);
create your own custom scheduler to actually mimic async/await.
What's the ideal solution to this in a systems language such as Odin or C?
I would recommend using whatever asynchronous approach is most common in your system / language. I do not recommend using a separate thread, and I do not recommend trying to port a high-level style of asynchronous programming into a lower-level language / platform. You want your consumers using something that feels natural to them, rather than learning a whole new paradigm just to call your API.
If you're on Windows, you should be able to signal a ManualResetEvent on completion. Explicit callbacks would also be acceptable.
I haven't written asynchronous code on Linux, but I suspect adopting libevent or libuv would be the way to go.
If you're exposing an API for others to consume and you want it to feel the most platform-like, I believe you'd have to do that at the driver level. That allows you to fully implement support for OVERLAPPED I/O (on Windows) or epoll (on Linux).

Glib Threads vs GMain Loop Eventing

I have a simple system in which there is a GList structure. There are two threads: one, say Head(), causes ingress of data into the GList structure; the other, Tail(), causes egress of data (and its processing) at the tail end of the list.
I was going to implement this originally using pthreads, but the GLib documentation itself suggested that instead of threads, a main loop with a context should be used for attaching sources and dispatching callbacks.
In general it wasn't clear what problems the GLib main loop, main context, and source system attempt to solve. All I could gather is that it finds applications in reading socket data, its parallels with poll(), and the UI eventing system.
What is the use case of the GLib main loop system? Is it applicable to my problem statement?
GLib is part of the GNOME project. It was built first and foremost with GUI applications in mind, though it's not limited to that use. Its model for GUI programming is a typical event-based one, driven by a main loop that receives events and dispatches them appropriately to components. You should interpret the documentation in that light.
It sounds like yours is not a GUI application, with its only GLib association being its use of a GList. I find GList a bit of a questionable choice in that context, but not necessarily a wrong one. Choosing GList does not mean you should commit to an event-driven program design, and if you don't then you probably have no use for a GLib main event loop.
Nevertheless, an event-driven design might serve you well, and in some ways it would be simpler than a multithreaded one. Much depends on the details of what your producer and consumer are supposed to do.

Recommended patterns for writing asynchronous evented servers in C

I am writing my first single-threaded, single-process server in C using kqueue() / epoll() to handle asynchronous event dispatch. As one would expect, it is quite a lot harder to follow the flow of control than in a blocking server.
Is there a common pattern (maybe even with a name) that people use to avoid the callback-driven protocol implementation becoming a gigantic tangled hairball?
Alternately, are there any nonblocking servers written in C for which the source code is a pleasure to read?
Any input would be much appreciated!
More thoughts:
A lot of the cruft seems to come from the need to deal with buffering of I/O. There is no necessary correspondence between a buffer filling/draining and a single state transition: a buffer fill/drain might correspond to anywhere from zero to N state transitions.
I've looked at libev (docs here) and it looks like a great tool, and libevent, which looks less exciting but still useful, but neither of them really answers the question: how do I manage the flow of control in a way that isn't horrendously opaque?
You can try using something like State Threads or GNU Portable Threads, which allow you to write as though you were using a single thread per connection (the implementation uses fibers).
Alternately, you can build your protocol implementation using a state machine generator (such as Ragel).

C HTTP server - multithreading model?

I'm currently writing an HTTP server in C so that I'll learn about C, network programming and HTTP. I've implemented most of the simple stuff, but I'm only handling one connection at a time. Currently, I'm thinking about how to efficiently add multitasking to my project. Here are some of the options I thought about:
Use one thread per connection. Simple but can't handle many connections.
Use non-blocking API calls only and handle everything in one thread. Sounds interesting but using select()s and such excessively is said to be quite slow.
Some other multithreading model, e.g. something complex like lighttpd uses. (Probably) the best solution, but (probably) too difficult to implement.
Any thoughts on this?
There is no single best model for writing multi-tasked network servers. Different platforms have different solutions for high performance (I/O completion ports, epoll, kqueues). Be careful about going for maximum portability: some features are mimicked on other platforms (i.e. select() is available on Windows) and yield very poor performance because they are simply mapped onto some other native model.
Also, there are other models not covered in your list. In particular, the classic UNIX "pre-fork" model.
In all cases, use any form of asynchronous I/O when available. If it isn't, look into non-blocking synchronous I/O. Design your HTTP library around asynchronous streaming of data, but keep the I/O bit out of it. This is much harder than it sounds. It usually implies writing state machines for your protocol interpreter.
That last bit is most important because it will allow you to experiment with different representations. It might even allow you to write a compact core for each platform using local, high-performance tools, and swap this core from one platform to the other.
Yea, do the one that's interesting to you. When you're done with it, if you're not utterly sick of the project, benchmark it, profile it, and try one of the other techniques. Or, even more interesting, abandon the work, take the learnings, and move on to something completely different.
You could use an event loop as in node.js:
Source code of node (c, c++, javascript)
https://github.com/joyent/node
Ryan Dahl (the creator of node) outlines the reasoning behind the design of node.js, non-blocking io and the event loop as an alternative to multithreading in a webserver.
http://www.yuiblog.com/blog/2010/05/20/video-dahl/
Douglas Crockford discusses the event loop in Scene 6: Loopage (Friday, August 27, 2010)
http://www.yuiblog.com/blog/2010/08/30/yui-theater-douglas-crockford-crockford-on-javascript-scene-6-loopage-52-min/
An index of Douglas Crockford's above talk (if further background information is needed). Doesn't really apply to your question though.
http://yuiblog.com/crockford/
Look at your platform's most efficient socket polling model - epoll (Linux), kqueue (FreeBSD), WSAEventSelect (Windows). Perhaps combine it with a thread pool and handle N connections per thread. You could always start with select() and then replace it with a more efficient model once it works.
A simple solution might be having multiple processes: have one process accept connections, and as soon as the connection is established fork and handle the connection in that child process.
An interesting variant of this technique is used by the SER/OpenSER/Kamailio SIP proxy: there's one main process that accepts the connections and multiple child worker processes, connected via pipes. The parent sends the new file descriptor through the socket. See this book excerpt at 17.4.2, Passing File Descriptors over UNIX Domain Sockets. The OpenSER/Kamailio SIP proxies are used for heavy-duty SIP processing where performance is a huge issue, and they do very well with this technique (plus shared memory for information sharing). Multi-threading is probably easier to implement, though.

Where can I find benchmarks on different networking architectures?

Where can I find benchmarks on different networking architectures?
I am playing with sockets / threads / forks and I'd like to know which is best. I was thinking there has got to be a place where someone has already spelled out all the pros and cons of the different architectures for a socket service and listed benchmarks with code that runs.
Ultimately I'd like to run these various configurations with my own code and see which runs best in different circumstances.
Many people I talk to say that I should just use single-threaded select(). But I see an argument for threads when you're storing state information inside the thread to keep the code simple. Where is the trade-off between writing my own state structure and using a proven thread architecture?
I've also been told forking is bad... but when you need 12000 connections on a machine that cannot raise the open file per process limit, forking is an option! Forking is also a nice option for stability when you've got one process that needs restarting, it doesn't disturb the others.
Sorry, this is one of my longer questions... so many variables are left empty.
Thanks,
Chenz
edit: here's the link I was looking for, which is a whole paper answering your question. http://www.kegel.com/c10k.html
There are web servers designed along all three models (fork, thread, select). People like to benchmark web servers.
http://www.lighttpd.net/benchmark
Libevent has some benchmarks and links to stuff about how to choose a select() vs. threaded model, generally in favour of using the libevent model.
http://monkey.org/~provos/libevent/
It's very difficult to answer this question, as so much depends on what your service is actually doing. Does it have to query a database? Read files from the filesystem? Perform complicated calculations? Go off and talk to some other service? Also, how long-lived are client connections? Might connections have some semantic interaction with other connections, or are they all treated as independent of each other? Might you want to think about load-balancing your service across multiple servers later? (If so, you might usefully think about that now, so that any necessary support can be designed in from the start.)
As you hint, the serving machine might have limits which interact with the various techniques, steering you towards one answer or another. You have a per-process file descriptor limit, but remember that you may also have a fixed size process table! How many concurrent clients are you expecting, anyway?
If your service keeps crashing and you need to keep restarting it or you think you want a multi-process model so that connections are isolated from each other, you're probably doing it wrong. Stability is extremely important in this sort of context, and that means good practice and memory hygiene, both in general and in the face of network-based attacks.
Remember the history: fork() is cheap in the Unix world, but spawning new processes is relatively expensive on Windows. OTOH, Windows threads are lightweight, whereas threading was long a bit alien to Unix and only relatively recently became widespread.