I have implemented a TCP concurrent server and client in C using both threads and forks.
But I don't have any way to check whether there is some other standard way of implementing
this.
I have googled for standard coding practices but didn't find anything useful.
Can someone please share some good links or code so that I can get a standard idea of
how to implement concurrent servers?
Thanks for the help.
There's no "standard idea". Your approach is going to depend on requirements, performance, scalability, and amount of time allowed for development.
One thread per client
Possibly with a threadpool
Multi-threaded pipeline model, with N workers
One thread per server, using poll/select
One thread per server, event-based with callbacks
Forking children, one per client connection
Pre-forking children, e.g. the Apache web server
Etc. All of these have their uses; the first model is sketched below.
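For illustration, here is a minimal sketch of the one-thread-per-client model (the port number and the echo handler are placeholders, and error handling is mostly omitted):

    #include <pthread.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    /* Placeholder per-client handler: echo bytes until the peer closes. */
    static void *handle_client(void *arg)
    {
        int fd = (int)(intptr_t)arg;
        char buf[1024];
        ssize_t n;
        while ((n = read(fd, buf, sizeof buf)) > 0)
            write(fd, buf, n);
        close(fd);
        return NULL;
    }

    int main(void)
    {
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(12345);          /* example port */
        bind(srv, (struct sockaddr *)&addr, sizeof addr);
        listen(srv, SOMAXCONN);

        for (;;) {
            int cli = accept(srv, NULL, NULL);
            if (cli < 0)
                continue;
            pthread_t t;
            /* One thread per connection; detach so its resources are reclaimed. */
            pthread_create(&t, NULL, handle_client, (void *)(intptr_t)cli);
            pthread_detach(t);
        }
    }

The weakness shows at scale: every thread costs a stack and a scheduler slot, which is why the pooled and event-driven models above exist.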
Some good links for you:
Beej's Guide to Network Programming will help you get the basics down,
The C10K problem will give you an overview of the design landscape,
High-Performance Server Architecture will make you re-think the "standard" approaches.
Hope this helps.
Concurrent servers (as long as they are quite simple, and performance is not much of an issue) are often created with poll() or select(). Now, this is assuming you are on *nix.
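As a rough sketch of what the poll() approach looks like on *nix (a single-threaded echo loop; it assumes a listen_fd that is already bound and listening, and trims all error handling):

    #include <poll.h>
    #include <unistd.h>
    #include <sys/socket.h>

    #define MAX_FDS 1024

    void serve(int listen_fd)              /* assumed bound and listening */
    {
        struct pollfd fds[MAX_FDS];
        int nfds = 1;
        fds[0].fd = listen_fd;
        fds[0].events = POLLIN;

        for (;;) {
            poll(fds, nfds, -1);           /* sleep until something is ready */
            for (int i = 0; i < nfds; i++) {
                if (!(fds[i].revents & POLLIN))
                    continue;
                if (fds[i].fd == listen_fd) {          /* new connection */
                    int c = accept(listen_fd, NULL, NULL);
                    if (c >= 0 && nfds < MAX_FDS) {
                        fds[nfds].fd = c;
                        fds[nfds].events = POLLIN;
                        nfds++;
                    } else if (c >= 0) {
                        close(c);          /* table full: refuse the client */
                    }
                } else {                               /* client data */
                    char buf[512];
                    ssize_t n = read(fds[i].fd, buf, sizeof buf);
                    if (n <= 0) {          /* closed or error: drop the slot */
                        close(fds[i].fd);
                        fds[i] = fds[--nfds];
                        i--;               /* re-examine the swapped-in entry */
                    } else {
                        write(fds[i].fd, buf, n);
                    }
                }
            }
        }
    }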
If you can use C++, the Boost libraries have ASIO, a cross-platform library that lets you write once and compile everywhere. There isn't really a standard way to do things, since the ideas vary from OS to OS.
I have one confusion relating to IOCP (among so many others), as I am fairly new to multi-threaded programming. Can we not use an IOCP without having to associate it with a device like a file/port/named pipe etc.?
What I really want to do is to build a library that implements a thread pool serving queues with different priorities. Anyone using my library should be able to pass in any function and any parameters; these may not just be I/O reads and writes. IOCP seems to be the best fit for efficient management of a thread pool, except that I need to associate a device with it. What if I am not working with files or communication over a network? Maybe I just need different clients to perform different functions using the thread pool, perhaps on the same machine? How can I work around that?
Any hints, ideas, pointers will be much appreciated. Thanks for your help in advance, and kindly excuse my ignorance on this topic.
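For what it's worth, a completion port does not actually require a device: CreateIoCompletionPort() accepts INVALID_HANDLE_VALUE to create a standalone port, and PostQueuedCompletionStatus() lets you enqueue your own packets. A rough sketch of using one as a plain work queue (the task_t structure and say_hello function are made up for the example):

    #include <windows.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical work item; the OVERLAPPED slot of the completion packet
       is reused to carry a pointer to it. */
    typedef struct {
        OVERLAPPED ov;          /* must be first so the cast below is valid */
        void (*fn)(void *);     /* task to run */
        void *arg;
    } task_t;

    static void say_hello(void *arg)    /* example task */
    {
        printf("hello from task %d\n", (int)(intptr_t)arg);
    }

    static DWORD WINAPI worker(LPVOID port)
    {
        DWORD bytes;
        ULONG_PTR key;
        OVERLAPPED *ov;
        while (GetQueuedCompletionStatus((HANDLE)port, &bytes, &key, &ov, INFINITE)) {
            if (ov == NULL)             /* NULL packet used as shutdown signal */
                break;
            task_t *t = (task_t *)ov;
            t->fn(t->arg);
            free(t);
        }
        return 0;
    }

    int main(void)
    {
        /* No device handle involved: this creates a standalone port. */
        HANDLE port = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 2);
        for (int i = 0; i < 2; i++)
            CreateThread(NULL, 0, worker, port, 0, NULL);

        task_t *t = malloc(sizeof *t);
        t->fn = say_hello;
        t->arg = (void *)(intptr_t)42;
        PostQueuedCompletionStatus(port, 0, 0, &t->ov);  /* enqueue the task */

        Sleep(1000);                    /* crude: give the workers time to run */
        PostQueuedCompletionStatus(port, 0, 0, NULL);    /* shut the workers down */
        PostQueuedCompletionStatus(port, 0, 0, NULL);
        return 0;
    }

Note that a port dequeues packets FIFO, so per-priority queues would still need something extra, for example one port per priority level.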
I'm developing a program that will need to run on Internet servers (a back-end component to be used by several cross-platform programs). I'm familiar with the security precautions to take (to prevent buffer overflows and SQL Injection attacks, for instance), but have never written a server program before, or any program that will be used on this scale.
The program needs to be able to serve hundreds or thousands of clients simultaneously. The protocols are designed for processing speed and to minimize the amount of data that must be exchanged, and the server side will be written in C. There will be both a Windows and a Linux version from the same code.
Questions:
How should the program handle communications -- multiple threads, a single thread handling all the sockets in turn, or spawn a new process for every so many incoming connections (or for each one)?
Do I need to worry about things like memory fragmentation, since this program will need to run for months at a time?
What other design issues, specific to this kind of programming, might an experienced developer of cross-platform programs for desktop and mobile systems not be aware of?
Please, no suggestions to use a different language. That decision has already been made, for reasons I'm not at liberty to go into.
For this I'd use libevent or libev and non-blocking I/O. That way the operating system will take care of most of your scheduling problems. I'd also use a thread pool for processing tasks that are blocking by nature, so they don't block the main loop. And if you ever need to read or write large amounts of data to or from the disk, use mmap, again to let the OS handle as much as possible.
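A minimal libevent sketch of that pattern, as an echo server (the port is an example, error checks are trimmed, and the thread pool for blocking tasks is left out):

    #include <event2/event.h>
    #include <event2/listener.h>
    #include <event2/bufferevent.h>
    #include <event2/buffer.h>
    #include <netinet/in.h>
    #include <string.h>

    static void read_cb(struct bufferevent *bev, void *ctx)
    {
        /* Echo whatever arrived back to the client. */
        struct evbuffer *in = bufferevent_get_input(bev);
        bufferevent_write_buffer(bev, in);
    }

    static void event_cb(struct bufferevent *bev, short events, void *ctx)
    {
        if (events & (BEV_EVENT_EOF | BEV_EVENT_ERROR))
            bufferevent_free(bev);      /* closes the fd too */
    }

    static void accept_cb(struct evconnlistener *lis, evutil_socket_t fd,
                          struct sockaddr *sa, int socklen, void *ctx)
    {
        struct event_base *base = evconnlistener_get_base(lis);
        struct bufferevent *bev =
            bufferevent_socket_new(base, fd, BEV_OPT_CLOSE_ON_FREE);
        bufferevent_setcb(bev, read_cb, NULL, event_cb, NULL);
        bufferevent_enable(bev, EV_READ | EV_WRITE);
    }

    int main(void)
    {
        struct event_base *base = event_base_new();
        struct sockaddr_in sin = {0};
        sin.sin_family = AF_INET;
        sin.sin_port = htons(8080);                /* example port */
        evconnlistener_new_bind(base, accept_cb, NULL,
            LEV_OPT_CLOSE_ON_FREE | LEV_OPT_REUSEABLE, -1,
            (struct sockaddr *)&sin, sizeof sin);
        event_base_dispatch(base);                 /* the OS-driven event loop */
        return 0;
    }

Build against libevent 2 (link with -levent); the same shape carries over to libev, just with different primitives.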
The basic advice is: use the OS as much as possible. If you want a good example of a program which does this, look at Varnish; it is very well written and performs fantastically.
With my experience running multiple servers for over 3 years of uptime, and programs with a little over a year of uptime, I can still recommend making the setup so that the system gracefully recovers from a program error and from a server reboot.
Even though performance gets a hit when a program is restarted, you need to be able to handle that as external circumstances can force the program to such a restart.
Don't try to reinvent the wheel when not needed, and have a look at zeromq or something like that to handle distribution of incoming communications. (If you are allowed to, prototype the backends in a more forgiving language than C like Python, then reimplement in C but keeping the communications protocol)
I have been asked this question in some recent interviews: what are the advantages and disadvantages of using sockets for IPC when there are other ways to perform IPC? I have not found an exact answer.
Any help would be much appreciated.
Compared to pipes, IPC sockets differ by being bidirectional, that is, reads and writes can be done on the same descriptor. Pipes, unlike sockets, are unidirectional. You have to keep a pair of descriptors if you want to do both reads and writes.
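To make the difference concrete, here is a small sketch using socketpair(): one descriptor on each side carries traffic in both directions, where pipes would need two pipe pairs.

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int sv[2];
        socketpair(AF_UNIX, SOCK_STREAM, 0, sv);   /* one bidirectional pair */

        if (fork() == 0) {                 /* child: reads AND writes sv[1] */
            char buf[32];
            ssize_t n = read(sv[1], buf, sizeof buf);
            write(sv[1], buf, n);          /* reply on the same descriptor */
            _exit(0);
        }

        char buf[32];                      /* parent: same fd both ways too */
        write(sv[0], "ping", 4);
        ssize_t n = read(sv[0], buf, sizeof buf);
        printf("got %.*s back\n", (int)n, buf);
        return 0;
    }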
Pipes, on the other hand, guarantee atomicity when reading or writing under a certain amount of bytes. Writing something less than PIPE_BUF bytes at once is guaranteed to be delivered in one chunk and never observed partial. Sockets do require more care from the programmer in that respect.
Shared memory, when used for IPC, requires explicit synchronisation from the programmer. It may be the most efficient and most flexible mechanism, but that comes at an increased complexity cost.
Another point in favour of sockets: an app using sockets can be easily distributed, i.e. it can be run on one host or spread across several hosts with little effort. This depends of course on the nature of the app.
Perhaps this is too simplified an answer, yet it is an important detail: sockets are not supported on all OSes. Recently I came across a project that used sockets for IPC all over the place, only to find that it was forced to move from Linux to a proprietary OS which was POSIX but did not support sockets the same way Linux does.
Sockets allow you a few benefits...
You can connect a simple client to them for testing (manually enter data, see the response).
This is very useful for debugging, simulating and blackbox testing.
You can run the processes on different machines. This can be useful for scalability and is very helpful in debugging / testing if you work in embedded software.
It becomes very easy to expose your process as a service.
But there are drawbacks as well
Overhead is greater than with IPC optimized for a single machine. Shared memory in particular is better if you need the performance, and you know your processes are all on the same machine.
Security - if your client apps can connect so can anyone else, if you're not careful about authentication. Data can also be sniffed if you're not encrypting, and modified if you're not at least signing data sent over the wire.
Using a true message queue tends to leave you with fixed-size messages. If you have a large number of messages of wildly varying sizes, this can become a performance problem. Using a socket can be a way around this, though you're then left trying to wrap this functionality so it behaves like a queue, which is tricky to get right, particularly for details like blocking/non-blocking behaviour and atomicity.
Shared memory is quick but requires management (you end up writing a version of malloc to manage the SHM), plus you have to synchronise and lock it in some way. Though you can use libraries to help with this, their availability depends on your environment and language. (A sketch of the synchronisation involved follows below.)
Queues are easy, but they lack the benefits listed as pros in my socket discussion above.
Pipes have been covered by Blagovest's answer to this question.
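As promised above, here is a rough sketch of the synchronisation burden that comes with shared memory: POSIX shm plus a process-shared mutex. The segment name is made up, the mutex should really be initialised only by the process that creates the segment, and you'd link with -lpthread (and -lrt on older systems):

    #include <fcntl.h>
    #include <pthread.h>
    #include <sys/mman.h>
    #include <unistd.h>

    typedef struct {
        pthread_mutex_t lock;   /* must be process-shared */
        int counter;            /* the actual shared state */
    } shared_t;

    int main(void)
    {
        int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);  /* example name */
        ftruncate(fd, sizeof(shared_t));
        shared_t *s = mmap(NULL, sizeof(shared_t),
                           PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

        /* Simplification: only the creating process should do this init. */
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
        pthread_mutex_init(&s->lock, &attr);

        /* Every process mapping /demo_shm must take the lock around access. */
        pthread_mutex_lock(&s->lock);
        s->counter++;
        pthread_mutex_unlock(&s->lock);

        munmap(s, sizeof(shared_t));
        shm_unlink("/demo_shm");
        return 0;
    }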
As is ever the case with this kind of stuff I would suggest reading the W. Richard Stevens books on IPC and sockets. There is no better explanation than his! :-)
I'm currently writing an HTTP server in C so that I'll learn about C, network programming and HTTP. I've implemented most of the simple stuff, but I'm only handling one connection at a time. Currently, I'm thinking about how to efficiently add multitasking to my project. Here are some of the options I thought about:
Use one thread per connection. Simple but can't handle many connections.
Use non-blocking API calls only and handle everything in one thread. Sounds interesting, but heavy use of select() and the like is said to be quite slow.
Some other multithreading model, e.g. something complex like lighttpd uses. (Probably) the best solution, but (probably) too difficult to implement.
Any thoughts on this?
There is no single best model for writing multi-tasked network servers. Different platforms have different solutions for high performance (I/O completion ports, epoll, kqueues). Be careful about going for maximum portability: some features are mimicked on other platforms (e.g. select() is available on Windows) and yield very poor performance because they are simply mapped onto some other native model.
Also, there are other models not covered in your list. In particular, the classic UNIX "pre-fork" model.
In all cases, use any form of asynchronous I/O when available. If it isn't, look into non-blocking synchronous I/O. Design your HTTP library around asynchronous streaming of data, but keep the I/O bit out of it. This is much harder than it sounds. It usually implies writing state machines for your protocol interpreter.
That last bit is most important, because it will allow you to experiment with different representations. It might even allow you to write a compact core for each platform's local high-performance facilities and swap this core from one platform to the other.
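A toy illustration of that state-machine idea; this is not a real HTTP parser, it merely scans for the blank line that ends a request header, consuming whatever chunk the I/O layer happens to hand it:

    #include <stddef.h>

    /* Parser state survives between calls, so partial reads are fine. */
    typedef enum { S_START, S_CR, S_CRLF, S_CRLFCR, S_DONE } state_t;
    typedef struct { state_t st; } parser_t;

    /* Feed any chunk; returns bytes consumed, p->st == S_DONE at end of headers. */
    size_t parse(parser_t *p, const char *buf, size_t len)
    {
        size_t i = 0;
        while (i < len && p->st != S_DONE) {
            char c = buf[i++];
            switch (p->st) {
            case S_START:  p->st = (c == '\r') ? S_CR     : S_START; break;
            case S_CR:     p->st = (c == '\n') ? S_CRLF   : S_START; break;
            case S_CRLF:   p->st = (c == '\r') ? S_CRLFCR : S_START; break;
            case S_CRLFCR: p->st = (c == '\n') ? S_DONE   : S_START; break;
            default: break;
            }
        }
        return i;
    }

Because the parser never blocks and never asks for more input, it plugs equally well into threads, select()/poll() loops, or callback-driven I/O.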
Yeah, do the one that's interesting to you. When you're done with it, if you're not utterly sick of the project, benchmark it, profile it, and try one of the other techniques. Or, even more interesting, abandon the work, take what you've learned, and move on to something completely different.
You could use an event loop as in node.js:
Source code of node (c, c++, javascript)
https://github.com/joyent/node
Ryan Dahl (the creator of node) outlines the reasoning behind the design of node.js, non-blocking I/O and the event loop as an alternative to multithreading in a web server.
http://www.yuiblog.com/blog/2010/05/20/video-dahl/
Douglas Crockford discusses the event loop in Scene 6: Loopage (Friday, August 27, 2010)
http://www.yuiblog.com/blog/2010/08/30/yui-theater-douglas-crockford-crockford-on-javascript-scene-6-loopage-52-min/
An index of Douglas Crockford's above talk (if further background information is needed). Doesn't really apply to your question though.
http://yuiblog.com/crockford/
Look at your platform's most efficient socket polling model: epoll (Linux), kqueue (FreeBSD), WSAEventSelect (Windows). Perhaps combine it with a thread pool, handling N connections per thread. You could always start with select() and then replace it with a more efficient model once it works.
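On Linux, for example, the epoll version of that loop looks roughly like this (level-triggered echo handling, assuming an already-listening socket; error checks trimmed):

    #include <sys/epoll.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define MAX_EVENTS 64

    void run(int listen_fd)            /* assumed bound and listening */
    {
        int ep = epoll_create1(0);
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = listen_fd };
        epoll_ctl(ep, EPOLL_CTL_ADD, listen_fd, &ev);

        struct epoll_event evs[MAX_EVENTS];
        for (;;) {
            int n = epoll_wait(ep, evs, MAX_EVENTS, -1);
            for (int i = 0; i < n; i++) {
                int fd = evs[i].data.fd;
                if (fd == listen_fd) {                 /* register new client */
                    int c = accept(listen_fd, NULL, NULL);
                    struct epoll_event cev = { .events = EPOLLIN, .data.fd = c };
                    epoll_ctl(ep, EPOLL_CTL_ADD, c, &cev);
                } else {                               /* service the client */
                    char buf[512];
                    ssize_t r = read(fd, buf, sizeof buf);
                    if (r <= 0) {      /* closed or error: deregister */
                        epoll_ctl(ep, EPOLL_CTL_DEL, fd, NULL);
                        close(fd);
                    } else {
                        write(fd, buf, r);
                    }
                }
            }
        }
    }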
A simple solution might be having multiple processes: have one process accept connections, and as soon as the connection is established fork and handle the connection in that child process.
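A minimal sketch of that accept-then-fork loop (the connection handler is a placeholder and error handling is omitted):

    #include <signal.h>
    #include <sys/socket.h>
    #include <unistd.h>

    void serve(int listen_fd)           /* assumed bound and listening */
    {
        signal(SIGCHLD, SIG_IGN);       /* let the kernel reap dead children */
        for (;;) {
            int c = accept(listen_fd, NULL, NULL);
            if (c < 0)
                continue;
            if (fork() == 0) {          /* child owns this connection */
                close(listen_fd);       /* the child doesn't need the listener */
                /* ... handle_connection(c) would go here ... */
                close(c);
                _exit(0);
            }
            close(c);                   /* parent keeps only the listener */
        }
    }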
An interesting variant of this technique is used by the SER/OpenSER/Kamailio SIP proxy: there's one main process that accepts the connections and multiple child worker processes, connected via UNIX domain sockets; the parent sends each new file descriptor to a worker through the socket. See the book excerpt at 17.4.2, Passing File Descriptors over UNIX Domain Sockets. The OpenSER/Kamailio SIP proxies are used for heavy-duty SIP processing where performance is a huge issue, and they do very well with this technique (plus shared memory for information sharing). Multi-threading is probably easier to implement, though.
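The descriptor passing that the excerpt describes boils down to sendmsg() with an SCM_RIGHTS control message over a UNIX domain socket, roughly like this:

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Send fd to a worker over the UNIX domain socket 'chan'. */
    int send_fd(int chan, int fd)
    {
        char dummy = 'x';                     /* must send at least one byte */
        struct iovec iov = { &dummy, 1 };
        char cbuf[CMSG_SPACE(sizeof(int))];
        struct msghdr msg = {0};
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = cbuf;
        msg.msg_controllen = sizeof cbuf;

        struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
        cm->cmsg_level = SOL_SOCKET;
        cm->cmsg_type = SCM_RIGHTS;           /* "these ancillary bytes are fds" */
        cm->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cm), &fd, sizeof(int));

        return sendmsg(chan, &msg, 0);        /* worker recvmsg()s a duplicate fd */
    }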
Where can I find benchmarks on different networking architectures?
I am playing with sockets / threads / forks and I'd like to know which is best. I was thinking there has got to be a place where someone has already spelled out all the pros and cons of different architectures for a socket service, and listed benchmarks with code that runs.
Ultimately I'd like to run these various configurations with my own code and see which runs best in different circumstances.
Many people I talk to say that I should just use single-threaded select. But I see an argument for threads when you're storing state information inside the thread to keep code simple. What is the trade-off point between writing my own state structure and using a proven thread architecture?
I've also been told forking is bad... but when you need 12,000 connections on a machine that cannot raise the open-files-per-process limit, forking is an option! Forking is also a nice option for stability: when you've got one process that needs restarting, it doesn't disturb the others.
Sorry, this is one of my longer questions... so many variables are left empty.
Thanks,
Chenz
edit: here's the link I was looking for, which is a whole paper answering your question. http://www.kegel.com/c10k.html
There are web servers designed along all three models (fork, thread, select). People like to benchmark web servers.
http://www.lighttpd.net/benchmark
Libevent has some benchmarks and links to stuff about how to choose a select() vs. threaded model, generally in favour of using the libevent model.
http://monkey.org/~provos/libevent/
It's very difficult to answer this question, as so much depends on what your service is actually doing. Does it have to query a database? Read files from the filesystem? Perform complicated calculations? Go off and talk to some other service? Also, how long-lived are client connections? Might connections have some semantic interaction with other connections, or are they all treated as independent of each other? Might you want to think about load-balancing your service across multiple servers later? (If so, you might usefully think about that now, so that any necessary support can be designed in from the start.)
As you hint, the serving machine might have limits which interact with the various techniques, steering you towards one answer or another. You have a per-process file descriptor limit, but remember that you may also have a fixed-size process table! How many concurrent clients are you expecting, anyway?
If your service keeps crashing and you need to keep restarting it or you think you want a multi-process model so that connections are isolated from each other, you're probably doing it wrong. Stability is extremely important in this sort of context, and that means good practice and memory hygiene, both in general and in the face of network-based attacks.
Remember the history: fork() is cheap in the Unix world, but spawning new processes is relatively expensive on Windows. OTOH, Windows threads are lightweight, whereas threading has always been a bit alien to Unix and has only relatively recently become widespread.