Recommended patterns for writing asynchronous evented servers in C

I am writing my first single-threaded, single-process server in C using kqueue() / epoll() to handle asynchronous event dispatch. As one would expect, it is quite a lot harder to follow the flow of control than in a blocking server.
Is there a common pattern (maybe even with a name) that people use to avoid the callback-driven protocol implementation becoming a gigantic tangled hairball?
Alternately, are there any nonblocking servers written in C for which the source code is a pleasure to read?
Any input would be much appreciated!
More thoughts:
A lot of the cruft seems to come from the need to deal with the buffering of I/O. There is no necessary correspondence between a buffer filling/draining and a single state transition: one buffer fill/drain might correspond to anywhere from zero to N state transitions.
I've looked at libev (docs here), which looks like a great tool, and libevent, which looks less exciting but still useful, but neither of them really answers the question: how do I manage the flow of control in a way that isn't horrendously opaque?

You can try using something like State Threads or GNU Portable Threads, which let you write as though you were using a single thread per connection (the implementation uses fibers under the hood).
Alternately, you can build your protocol implementation using a state machine generator (such as Ragel).
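To give a feel for the hand-rolled alternative (whether the state machine is generated by something like Ragel or written by hand), the usual shape is a per-connection struct holding the protocol state plus its buffers, and an "advance" function that consumes buffered bytes and performs zero or more state transitions per read. The states and names below are invented purely for illustration; this is a sketch, not a recommended design:

    #include <stddef.h>
    #include <unistd.h>

    /* Invented protocol states, purely for illustration. */
    enum conn_state { READING_HEADER, READING_BODY, WRITING_REPLY, CLOSED };

    struct conn {
        int fd;
        enum conn_state state;
        char inbuf[4096];
        size_t inlen;                 /* bytes currently buffered */
    };

    /* Try to advance the protocol as far as the buffered bytes allow.
     * One call may perform zero, one, or many state transitions. */
    static void conn_advance(struct conn *c)
    {
        /* Protocol logic goes here: examine c->inbuf, switch on c->state,
         * and loop until no further progress can be made with the bytes
         * that are currently buffered. */
        (void)c;
    }

    /* Called by the event loop when epoll/kqueue reports the fd readable. */
    static void conn_on_readable(struct conn *c)
    {
        ssize_t n = read(c->fd, c->inbuf + c->inlen,
                         sizeof c->inbuf - c->inlen);
        if (n <= 0) {                 /* EOF or error: tear the connection down */
            close(c->fd);
            c->state = CLOSED;
            return;
        }
        c->inlen += (size_t)n;
        conn_advance(c);              /* buffering and protocol logic stay separate */
    }

The event loop only ever calls conn_on_readable() (and a matching writable handler); all protocol decisions live in conn_advance(), which keeps the callback layer thin.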

Related

What to do instead of async/await in a systems language

I have a concrete problem that in a higher-level language I would solve using async/await: we have a blocking system/hardware-communication/network call that takes several seconds to complete. We would like it to happen in the background while we issue other such calls in parallel.
I have thought of a couple of solutions and there might be better ones than these:
start a thread and signal a condition variable/semaphore once it's done;
provide a callback that is executed when the call finishes (old JavaScript style);
create your own custom scheduler to actually mimic async/await.
What's the ideal solution to this in a systems language such as Odin or C?
I would recommend using whatever asynchronous approach is most common in your system / language. I do not recommend using a separate thread, and I do not recommend trying to port a high-level style of asynchronous programming into a lower-level language / platform. You want your consumers using something that feels natural to them, rather than learning a whole new paradigm just to call your API.
If you're on Windows, you should be able to signal a ManualResetEvent on completion. Explicit callbacks would also be acceptable.
I haven't written asynchronous code on Linux, but I suspect adopting libevent or libuv would be the way to go.
If you're exposing an API for others to consume and you want it to feel as platform-native as possible, I believe you'd have to do that at the driver level. That allows you to fully implement support for OVERLAPPED I/O (on Windows) or epoll (on Linux).
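If libuv were the choice on Linux, a minimal sketch of the callback style might look like the following; blocking_call and on_done are invented names, and this only illustrates uv_queue_work (offloading the blocking call to libuv's thread pool and completing via a callback on the event loop), not a specific recommended design:

    /* Sketch only: push a blocking call onto libuv's thread pool and get a
     * completion callback back on the event-loop thread. */
    #include <stdio.h>
    #include <uv.h>

    static void blocking_call(uv_work_t *req) {
        /* Runs on a thread-pool thread; free to block here. */
        (void)req;
        /* ... talk to the hardware / network ... */
    }

    static void on_done(uv_work_t *req, int status) {
        /* Runs back on the event-loop thread when the work is finished. */
        (void)req;
        printf("call finished, status=%d\n", status);
    }

    int main(void) {
        uv_loop_t *loop = uv_default_loop();
        uv_work_t req;                       /* must outlive the request */
        uv_queue_work(loop, &req, blocking_call, on_done);
        /* Additional requests can be queued here and run in parallel. */
        return uv_run(loop, UV_RUN_DEFAULT);
    }

The same shape (start the operation, get a completion callback) is what an explicit-callback API would expose to its consumers.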

How to avoid multithreading

I came across this question and was very impressed by this answer.
I would really like to follow the advice from that answer, but I cannot imagine how to do it. How can I avoid multithreading?
There are often situations that need to deal with different things concurrently (different hardware resources or networking, for example) while at the same time accessing shared data (configuration, data to work on, and so on).
How could this be solved single-threaded without using any kind of huge state machine or event loop?
I know this is a huge topic that cannot be answered as a whole on a platform like Stack Overflow. I think I really should go read the book recommended in the answer mentioned above, but for now I would love to read some input here.
Maybe it is worth noting that I am interested in solutions in C. Higher-level languages like Java and C++, and especially frameworks like Qt, simplify this a lot, but what about pure C?
Any input is much appreciated. Thank you all in advance.
You already mentioned event loops, but I still think those provide an excellent alternative to multi-threading for many applications, and also serve as a good base when adding multi-threading later, if warranted.
Say you have an application that needs to handle user input, data received on a socket, timer events, and signals for example:
One multi-threaded design would be to spawn different threads to wait on the different event sources and have them synchronize their actions on some global state as events arrive. This often leads to messy synchronization and termination logic.
A single-threaded design would be to have a unified event loop that receives all types of events and handles them in the same thread as they arrive. On *nix systems, this can be accomplished using e.g. select(2), poll(2), or epoll(7) (the latter Linux-specific). Recent Linux versions also provide signalfd(2), timerfd (timerfd_create(2)), and eventfd(2) for cleanly fitting additional event types into this model, and on other unices you can use various tricks involving e.g. pipe(2)s to signal events. A nice library that abstracts much of this away is libevent, which also works on other platforms.
Besides not having to deal with multi-threading right away, the event loop approach also cleanly lends itself to adding multi-threading later if needed for performance or other reasons: you simply let the event handler spawn threads for certain events. Having all event handling in a single location often greatly simplifies application design.
When you do need multiple threads (or processes), it helps to have narrow and well-tested interfaces between them, using e.g. synchronized queues. An alternative design for the event handler would be to have event-generating threads push events to an event queue from which the event handler then reads and dispatches them. This cleanly separates various parts of the program.
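To make the unified event loop concrete, here is a minimal, Linux-specific sketch that multiplexes a periodic timer and SIGINT through a single epoll loop using timerfd and signalfd, as described above (error handling omitted for brevity):

    #include <signal.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/epoll.h>
    #include <sys/signalfd.h>
    #include <sys/timerfd.h>

    int main(void) {
        int epfd = epoll_create1(0);

        /* Timer events as a file descriptor: fires once per second. */
        int tfd = timerfd_create(CLOCK_MONOTONIC, 0);
        struct itimerspec its = { .it_interval = {1, 0}, .it_value = {1, 0} };
        timerfd_settime(tfd, 0, &its, NULL);

        /* Signals as a file descriptor (they must be blocked first). */
        sigset_t mask;
        sigemptyset(&mask);
        sigaddset(&mask, SIGINT);
        sigprocmask(SIG_BLOCK, &mask, NULL);
        int sfd = signalfd(-1, &mask, 0);

        struct epoll_event ev = { .events = EPOLLIN };
        ev.data.fd = tfd; epoll_ctl(epfd, EPOLL_CTL_ADD, tfd, &ev);
        ev.data.fd = sfd; epoll_ctl(epfd, EPOLL_CTL_ADD, sfd, &ev);

        for (;;) {
            struct epoll_event events[8];
            int n = epoll_wait(epfd, events, 8, -1);
            for (int i = 0; i < n; i++) {
                if (events[i].data.fd == tfd) {
                    uint64_t expirations;
                    read(tfd, &expirations, sizeof expirations);
                    printf("tick\n");
                } else if (events[i].data.fd == sfd) {
                    struct signalfd_siginfo si;
                    read(sfd, &si, sizeof si);
                    printf("got SIGINT, exiting\n");
                    return 0;
                }
                /* Sockets, user input (stdin), etc. are added the same way. */
            }
        }
    }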
Read more about continuations and continuation-passing style (and the CPS transform).
CPS-transform could be a systematic way to "mimic" multi-threading.
You could have a look at CPC (Continuation Passing C, by Juliusz Chroboczek and Gabriel Kerneis), which is a source-to-source C transformer. You could also read Appel's classic book Compiling with Continuations and Queinnec's book Lisp in Small Pieces.
Read also more about event loops, callbacks, closures, call stacks, tail calls. These notions are related to your concerns.
See also the (nearly obsolete) setcontext(3) functions on Linux, and idle functions in event loops.
You can implement concurrent tasks by using coroutines. You then have to explicitly pass control (the CPU) to another coroutine; it won't be done automatically by an interrupt after a small delay.
http://en.wikipedia.org/wiki/Coroutine
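As a minimal illustration of explicitly passing control between coroutines in plain C, here is a sketch using the (deprecated but still widely available) ucontext functions; a real program would more likely use a dedicated coroutine library:

    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, task_ctx;
    static char task_stack[64 * 1024];

    static void task(void) {
        for (int i = 0; i < 3; i++) {
            printf("task: step %d\n", i);
            swapcontext(&task_ctx, &main_ctx);   /* explicitly yield to main */
        }
    }

    int main(void) {
        getcontext(&task_ctx);
        task_ctx.uc_stack.ss_sp = task_stack;
        task_ctx.uc_stack.ss_size = sizeof task_stack;
        task_ctx.uc_link = &main_ctx;            /* return here when task ends */
        makecontext(&task_ctx, task, 0);

        for (int i = 0; i < 3; i++) {
            printf("main: resuming task\n");
            swapcontext(&main_ctx, &task_ctx);   /* explicitly pass control */
        }
        return 0;
    }

Control only changes hands at the swapcontext() calls, which is exactly the "explicit pass" described above.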

What are the advantages and disadvantages of using sockets for IPC?

I have been asked this question in some recent interviews: what are the advantages and disadvantages of using sockets for IPC when there are other ways to perform IPC? I have not found an exact answer.
Any help would be much appreciated.
Compared to pipes, IPC sockets differ by being bidirectional, that is, reads and writes can be done on the same descriptor. Pipes, unlike sockets, are unidirectional. You have to keep a pair of descriptors if you want to do both reads and writes.
Pipes, on the other hand, guarantee atomicity when reading or writing under a certain amount of bytes. Writing something less than PIPE_BUF bytes at once is guaranteed to be delivered in one chunk and never observed partial. Sockets do require more care from the programmer in that respect.
Shared memory, when used for IPC, requires explicit synchronisation from the programmer. It may be the most efficient and most flexible mechanism, but that comes at an increased complexity cost.
Another point in favour of sockets: an app using sockets can be easily distributed - ie. it can be run on one host or spread across several hosts with little effort. This depends of course on the nature of the app.
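A small sketch of the bidirectionality point: with socketpair(2) each process reads and writes on a single descriptor, whereas doing the same with pipes would require two pipe(2) calls and a pair of descriptors on each side (error handling omitted):

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/wait.h>

    int main(void) {
        int sv[2];
        socketpair(AF_UNIX, SOCK_STREAM, 0, sv);  /* one bidirectional channel */

        if (fork() == 0) {                        /* child: uses sv[1] only */
            char buf[64];
            ssize_t n = read(sv[1], buf, sizeof buf);
            write(sv[1], buf, n);                 /* echo back on the same fd */
            _exit(0);
        }

        /* Parent: writes and reads on the same descriptor, sv[0]. */
        write(sv[0], "ping", 4);
        char buf[64];
        ssize_t n = read(sv[0], buf, sizeof buf);
        printf("parent got %.*s back\n", (int)n, buf);
        wait(NULL);
        return 0;
    }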
Perhaps this is too simplified an answer, yet it is an important detail. Sockets are not supported on all operating systems. Recently, I became aware of a project that used sockets for IPC all over the place, only to find itself forced to move from Linux to a proprietary OS that was POSIX-compliant but did not support sockets the same way Linux does.
Sockets allow you a few benefits...
You can connect a simple client to them for testing (manually enter data, see the response).
This is very useful for debugging, simulating and blackbox testing.
You can run the processes on different machines. This can be useful for scalability and is very helpful in debugging / testing if you work in embedded software.
It becomes very easy to expose your process as a service.
But there are drawbacks as well:
Overhead is greater than IPC optimized for a single machine. Shared memory in particular is better if you need the performance, and you know your processes are all on the same machine.
Security - if your client apps can connect so can anyone else, if you're not careful about authentication. Data can also be sniffed if you're not encrypting, and modified if you're not at least signing data sent over the wire.
Using a true message queue tends to leave you with fixed sized messages. If you have a large number of messages of wildly varying sizes this can become a performance problem. Using a socket can be a way around this, though you're then left trying to wrap this functionality to become identical to a queue, which is tricky to get the detail right on, particularly aspects like blocking/non-blocking and atomicity.
Shared memory is quick but requires management (you end up writing a version of malloc to manage the SHM), plus you have to synchronise and lock it in some way (a minimal sketch of that synchronisation appears at the end of this answer). Though you can use libraries to help with this, their availability depends on your environment and language.
Queues are easy, but their downsides are the mirror image of the socket advantages listed above.
Pipes have been covered by Blagovest's answer to this question.
As is ever the case with this kind of stuff I would suggest reading the W. Richard Stevens books on IPC and sockets. There is no better explanation than his! :-)
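As referenced in the shared-memory point above, here is a rough sketch of the explicit synchronisation it forces on you: a POSIX shared-memory segment holding a process-shared mutex and a counter. The segment name /demo_shm is arbitrary, error handling is omitted, and you would link with -pthread (and -lrt on older systems):

    #include <fcntl.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/mman.h>
    #include <sys/wait.h>

    struct shared {
        pthread_mutex_t lock;   /* must be marked PTHREAD_PROCESS_SHARED */
        int counter;
    };

    int main(void) {
        /* Create and map the shared segment. */
        int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
        ftruncate(fd, sizeof(struct shared));
        struct shared *s = mmap(NULL, sizeof *s, PROT_READ | PROT_WRITE,
                                MAP_SHARED, fd, 0);

        /* The mutex must be explicitly made usable across processes. */
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
        pthread_mutex_init(&s->lock, &attr);
        s->counter = 0;

        if (fork() == 0) {                   /* child increments under the lock */
            pthread_mutex_lock(&s->lock);
            s->counter++;
            pthread_mutex_unlock(&s->lock);
            _exit(0);
        }
        wait(NULL);

        pthread_mutex_lock(&s->lock);
        printf("counter = %d\n", s->counter);
        pthread_mutex_unlock(&s->lock);
        shm_unlink("/demo_shm");
        return 0;
    }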

C HTTP server - multithreading model?

I'm currently writing an HTTP server in C so that I'll learn about C, network programming and HTTP. I've implemented most of the simple stuff, but I'm only handling one connection at a time. Currently, I'm thinking about how to efficiently add multitasking to my project. Here are some of the options I thought about:
Use one thread per connection. Simple but can't handle many connections.
Use non-blocking API calls only and handle everything in one thread. Sounds interesting, but heavy use of select() and the like is said to be quite slow.
Some other multithreading model, e.g. something complex like lighttpd uses. (Probably) the best solution, but (probably) too difficult to implement.
Any thoughts on this?
There is no single best model for writing multi-tasked network servers. Different platforms have different solutions for high performance (I/O completion ports, epoll, kqueues). Be careful about going for maximum portability: some features are mimicked on other platforms (e.g. select() is available on Windows) and yield very poor performance because they are simply mapped onto some other native model.
Also, there are other models not covered in your list. In particular, the classic UNIX "pre-fork" model.
In all cases, use any form of asynchronous I/O when available. If it isn't, look into non-blocking synchronous I/O. Design your HTTP library around asynchronous streaming of data, but keep the I/O bit out of it. This is much harder than it sounds. It usually implies writing state machines for your protocol interpreter.
That last bit is most important because it will allow you to experiment with different representations. It might even allow you to write a compact, high-performance core for each platform, using its native facilities, and swap this core from one platform to the other.
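To illustrate the state-machine point (a sketch only, not a complete HTTP parser): the parser keeps its state per connection and consumes whatever bytes the I/O layer hands it, so it does not care whether a single read delivers half a header or several pipelined requests:

    #include <stddef.h>

    /* Minimal incremental parser that just finds the end of the HTTP header
     * block ("\r\n\r\n"). Per-connection state lives in the struct, so the
     * parser can be fed arbitrary chunks as they arrive from the socket. */
    enum http_state { H_TEXT, H_CR, H_CRLF, H_CRLFCR, H_DONE };

    struct http_parser {
        enum http_state state;
    };

    /* Feed len bytes; returns the number of bytes consumed. A state of
     * H_DONE means a complete header block has been seen. */
    static size_t http_feed(struct http_parser *p, const char *buf, size_t len)
    {
        size_t i;
        for (i = 0; i < len && p->state != H_DONE; i++) {
            char c = buf[i];
            switch (p->state) {
            case H_TEXT:
                if (c == '\r') p->state = H_CR;
                break;
            case H_CR:
                p->state = (c == '\n') ? H_CRLF : (c == '\r') ? H_CR : H_TEXT;
                break;
            case H_CRLF:
                p->state = (c == '\r') ? H_CRLFCR : H_TEXT;
                break;
            case H_CRLFCR:
                p->state = (c == '\n') ? H_DONE : (c == '\r') ? H_CR : H_TEXT;
                break;
            case H_DONE:
                break;
            }
        }
        return i;
    }

The event loop feeds each successful read into http_feed(); once the state reaches H_DONE, the request can be dispatched and the response streamed out the same way.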
Yeah, do the one that's interesting to you. When you're done with it, if you're not utterly sick of the project, benchmark it, profile it, and try one of the other techniques. Or, even more interesting, abandon the work, take what you've learned, and move on to something completely different.
You could use an event loop as in node.js:
Source code of node (C, C++, JavaScript)
https://github.com/joyent/node
Ryan Dahl (the creator of node) outlines the reasoning behind the design of node.js, non-blocking I/O, and the event loop as an alternative to multithreading in a web server.
http://www.yuiblog.com/blog/2010/05/20/video-dahl/
Douglas Crockford discusses the event loop in Scene 6: Loopage (Friday, August 27, 2010)
http://www.yuiblog.com/blog/2010/08/30/yui-theater-douglas-crockford-crockford-on-javascript-scene-6-loopage-52-min/
An index of Douglas Crockford's above talk (if further background information is needed). Doesn't really apply to your question though.
http://yuiblog.com/crockford/
Look at your platform's most efficient socket polling model - epoll (Linux), kqueue (FreeBSD), WSAEventSelect (Windows). Perhaps combine it with a thread pool and handle N connections per thread. You can always start with select() and then replace it with a more efficient model once it works.
A simple solution might be having multiple processes: have one process accept connections, and as soon as a connection is established, fork and handle that connection in the child process.
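A rough sketch of that fork-per-connection accept loop (error handling and SIGCHLD reaping omitted; the port number is arbitrary):

    #include <netinet/in.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>

    int main(void) {
        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);
        bind(lfd, (struct sockaddr *)&addr, sizeof addr);
        listen(lfd, 128);

        for (;;) {
            int cfd = accept(lfd, NULL, NULL);
            if (fork() == 0) {          /* child: owns this connection */
                close(lfd);
                /* ... read the request, write the response ... */
                close(cfd);
                _exit(0);
            }
            close(cfd);                 /* parent: keeps accepting */
        }
    }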
An interesting variant of this technique is used by the SER/OpenSER/Kamailio SIP proxy: there is one main process that accepts connections and multiple child worker processes, connected via pipes. The parent sends the new file descriptor through the socket. See this book excerpt at 17.4.2, Passing File Descriptors over UNIX Domain Sockets. The OpenSER/Kamailio SIP proxies are used for heavy-duty SIP processing where performance is a huge issue, and they do very well with this technique (plus shared memory for information sharing). Multi-threading is probably easier to implement, though.
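The descriptor-passing step mentioned above is done with sendmsg(2) and SCM_RIGHTS ancillary data over a UNIX domain socket; a sketch of the sending side (the receiving worker uses recvmsg(2) symmetrically):

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Send fd_to_send to the process at the other end of the UNIX domain
     * socket unix_sock (e.g. one end of a socketpair()). */
    static int send_fd(int unix_sock, int fd_to_send)
    {
        char dummy = 'x';                      /* must send at least one byte */
        struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };

        char ctrl[CMSG_SPACE(sizeof(int))];
        memset(ctrl, 0, sizeof ctrl);

        struct msghdr msg;
        memset(&msg, 0, sizeof msg);
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = ctrl;
        msg.msg_controllen = sizeof ctrl;

        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;          /* "pass these descriptors" */
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd_to_send, sizeof(int));

        return sendmsg(unix_sock, &msg, 0);    /* worker recvmsg()s the fd */
    }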

Sending calls to libraries remotely across Linux

I am developing some experimental setup in C.
I am exploring a scenario as follows and I need help to understand it.
I have a System A which has a lot of applications using cryptographic algorithms.
But these crypto calls (OpenSSL calls) should be sent to another System B, which takes care of the cryptography.
Therefore, I have to send any calls to the cryptographic (OpenSSL) engines via a socket to a remote system (B) which has OpenSSL support.
My plan is to have a small socket program on System A which forwards these calls to System B.
What I'm still unclear about at this moment is how to handle the received commands at System B.
Do I actually take these commands and translate them into the corresponding OpenSSL calls locally on System B? That means I have to reimplement whatever is done on System A, right?
Or is there a way to tunnel/send these raw calls to the OpenSSL libraries directly, just receive the result, and then send it back to System A?
How do you think I should go about the problem?
PS: By the way, the cryptography calls (like EngineUpdate, VerifyFinal, or Digest) on System A can be in either Java or C. I already wrote a Java/C program to send these commands to System B via sockets...
The problem is only on System B and how I have to handle the commands there.
You could use sockets on B, but that means you need to define a protocol for that. Or you use RPC (remote procedure calls).
Examples for socket programming can be found here.
RPC is explained here.
The easiest (not to say "the easy", but still) way I can imagine would be to:
Write wrapper (proxy) versions of the libraries you want to make remote.
Write a server program that listens to calls, performs them using the real local libraries, and sends the result back.
Preload the proxy library before running any application where you want to do this.
Of course, there are many many problems with this approach:
It's not exactly trivial to define a serializing protocol for generic C function calls.
It's not exactly trivial to write the server, either.
Applications will slow down a lot, since the proxied call needs to be synchronous.
What about security of the data on the network?
UPDATE:
As requested in a comment, I'll try to expand a bit. By "wrapper" I mean a new library, that has the same API as another one, but does not in fact contain the same code. Instead, the wrapper library will contain code to serialize the arguments, call the server, wait for a response, de-serialize the result(s), and present them to the calling program as if nothing happened.
Since this involves a lot of tedious, repetitive and error-prone code, it's probably best to generate it rather than write it all by hand. The best option would be to use the original library's header file to define the serialization needed, but that (of course) requires quite heavy C parsing. Failing that, you might start bottom-up and make a custom language to describe the calls, and then use that to generate the serialization, de-serialization, and proxy code.
On Linux systems, you can control the dynamic linker so that it loads your proxy library instead of the "real" library. You could of course also replace (on disk) the real library with the proxy, but that will break all applications that use it if the server is not working, which seems very risky.
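To make the proxy idea concrete, here is a rough sketch of what one wrapped entry point might look like. The function name remote_digest, the wire format, and the connect_to_server() helper are all made up for illustration; a real OpenSSL proxy would have to mirror the actual OpenSSL prototypes:

    #include <stddef.h>
    #include <stdint.h>
    #include <unistd.h>

    /* Hypothetical helper that opens a TCP connection to System B. */
    int connect_to_server(void);

    /* Wrapper with the same prototype the application already calls.
     * Instead of doing the work locally, it serializes the arguments,
     * ships them to System B, and blocks until the result comes back. */
    int remote_digest(const unsigned char *data, size_t len,
                      unsigned char out[32])
    {
        int sock = connect_to_server();
        if (sock < 0)
            return -1;

        /* Trivial made-up wire format: 1-byte opcode + 4-byte length + payload. */
        uint8_t opcode = 0x01;                 /* "digest" */
        uint32_t n = (uint32_t)len;
        write(sock, &opcode, 1);
        write(sock, &n, sizeof n);
        write(sock, data, len);

        /* Wait for System B to send back the 32-byte digest. */
        ssize_t got = read(sock, out, 32);
        close(sock);
        return got == 32 ? 0 : -1;
    }

The server on System B reads the opcode and payload, performs the real library call, and writes the result back; the serialization and dispatch code on both sides is exactly the part that is worth generating.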
So you basically have two choices, each outlined by unwind and ammoQ respectively:
(1) Write a server and do the socket/protocol work etc., yourself. You can minimize some of the pain by using solutions like Google's protocol buffers.
(2) Use an existing middleware solution, such as (a) message queues or (b) an RPC mechanism like CORBA or one of its many alternatives.
Either is probably more work than you anticipated. So really you have to answer this yourself. How serious is your project? How varied is your hardware? How likely is the hardware and software configuration to change in the future?
If this is more than a learning or pet project you are going to be bored with in a month or two then an existing middleware solution is probably the way to go. The downside is there is a somewhat intimidating learning curve.
You can go the RPC route with CORBA, ICE, or whatever the Java solutions are these days (RMI? EJB?), and a bunch of others. This is an elegant solution since your calls to the remote encryption machine appear to your SystemA as simple function calls and the middleware handles the data issues and sockets. But you aren't going to learn them in a weekend.
Personally I would look to see if a message queue solution like AMQP would work for you first. There is less of a learning curve than RPC.
