Handling client buffers in a TCP server - C

Since I have read a lot of text and code about socket programming, I decided to go with the following:
TCP server:
Socket multiplexing
Asynchronous I/O
I want to be able to handle 800-1200 client connections at the same time. How do I handle client buffers? Every single example I read worked with just one single buffer. Why don't people use something like:
typedef struct my_socket_tag {
    int   sock;     /* socket descriptor */
    char *buffer;   /* per-client receive buffer */
} client_data;
Now I can hand the buffer from a receiver thread over to a dispatch/request thread, and receiving can continue on another socket while the first client's buffer is being processed.
Is that common practice? Am I missing the point?
Please also give me some hints on how to improve my question next time, thank you!

The examples are usually oversimplified. Scalability is a serious issue, and I suggest beginning with simpler applications; handling a thousand client connections is possible, but in most applications it requires quite careful development. Socket programming can get tricky.
There are different kinds of server applications; there is no single approach that would fit all tasks perfectly. There are lots of details to consider (is it a stream or datagram oriented service? are the connections, if any, persistent? does it involve lots of small data transfers, or few huge transfers, or lots of huge transfers? Et cetera, et cetera). This is why you are not likely to see any common examples in books.
If you choose threading approach, be careful not to create too many threads; one thread per client is usually (but not always) a bad choice. In some cases, you can even handle everything in a single thread (using async IO) without sacrificing any performance.
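To make that concrete, here is a minimal single-threaded sketch along the lines of the struct in the question: a select() loop that keeps one buffer per client. It is only a sketch under simplifying assumptions (the serve() function, the fixed-size arrays and the buffer handling are illustrative, there is no message framing and no error handling), not a production design.

#include <stddef.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

#define MAX_CLIENTS 1024
#define BUF_SIZE    4096

typedef struct {
    int    sock;               /* -1 means slot unused */
    char   buffer[BUF_SIZE];   /* per-client receive buffer */
    size_t used;               /* bytes accumulated so far */
} client_data;

static client_data clients[MAX_CLIENTS];

void serve(int listen_fd)
{
    for (size_t i = 0; i < MAX_CLIENTS; i++)
        clients[i].sock = -1;

    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(listen_fd, &rfds);
        int maxfd = listen_fd;

        for (size_t i = 0; i < MAX_CLIENTS; i++) {
            if (clients[i].sock == -1)
                continue;
            FD_SET(clients[i].sock, &rfds);
            if (clients[i].sock > maxfd)
                maxfd = clients[i].sock;
        }

        if (select(maxfd + 1, &rfds, NULL, NULL, NULL) < 0)
            continue;

        if (FD_ISSET(listen_fd, &rfds)) {           /* new connection */
            int c = accept(listen_fd, NULL, NULL);
            for (size_t i = 0; c >= 0 && i < MAX_CLIENTS; i++) {
                if (clients[i].sock == -1) {
                    clients[i].sock = c;
                    clients[i].used = 0;
                    break;
                }
            }
        }

        for (size_t i = 0; i < MAX_CLIENTS; i++) {  /* readable clients */
            if (clients[i].sock == -1 || !FD_ISSET(clients[i].sock, &rfds))
                continue;
            size_t space = BUF_SIZE - clients[i].used;
            if (space == 0) {                       /* full: process and recycle */
                clients[i].used = 0;
                space = BUF_SIZE;
            }
            ssize_t n = recv(clients[i].sock,
                             clients[i].buffer + clients[i].used, space, 0);
            if (n <= 0) {                           /* closed or error */
                close(clients[i].sock);
                clients[i].sock = -1;
            } else {
                clients[i].used += (size_t)n;
                /* hand clients[i].buffer / clients[i].used to processing here */
            }
        }
    }
}

One caveat: select() is typically limited to FD_SETSIZE descriptors (often 1024), so at the 800-1200 connection range you are already brushing against that limit; that is where epoll, kqueue and similar mechanisms come in.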
Having said that, I would recommend learning C++ and Boost.Asio (or a similar framework). It takes care of many scalability-related problems, so there's no point in reinventing the wheel.
You may study the Architecture of Open Source Applications book (freely available). There are quite a few relevant examples that you may find useful.

Related

Epoll vs. libevent for a BitTorrent-like application

I am implementing BitTorrent-style P2P file sharing. Let's say a maximum of 100 peers share simultaneously, with TCP connections set up between every pair of peers. Initially, one peer has the whole file and starts sharing pieces; subsequently, all peers share their pieces.
Typically, the piece size is 50 kB - 1 MB. I am wondering what the best approach is to write such an application in C: threads with epoll, or libevent?
Can anybody please give the positives/negatives of the different possible approaches?
If we're only talking about 100 peer connections at any given moment, the traditional approach of using select or poll on a group of TCP sockets will work out just fine.
epoll helps solve the problem of scaling to thousands of long-running connections. Read up on the C10K problem for more details.
I've heard good things about libevent. I believe it's an abstraction on top of epoll and other socket functions that provides a few nice things. If it makes your programming easier, then by all means use it. But you probably don't need it for performance.
Libevent is essentially a wrapper around epoll, mostly used for writing portable code. Since it's a wrapper, the drawbacks of epoll are retained, and it does not add much from a performance perspective. If portability is not a concern, epoll should work just fine. And if the volume is considerably lower, plain poll will still do.
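For reference, a bare-bones level-triggered epoll loop (Linux-only) looks roughly like the sketch below. It assumes listen_fd is already a bound, listening socket and omits error handling; in a real BitTorrent client you would keep per-peer state keyed by the descriptor rather than a throwaway buffer.

#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

void epoll_loop(int listen_fd)
{
    int epfd = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = listen_fd };
    epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

    for (;;) {
        struct epoll_event events[64];
        int n = epoll_wait(epfd, events, 64, -1);

        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            if (fd == listen_fd) {                     /* new peer */
                int c = accept(listen_fd, NULL, NULL);
                struct epoll_event cev = { .events = EPOLLIN, .data.fd = c };
                epoll_ctl(epfd, EPOLL_CTL_ADD, c, &cev);
            } else {                                   /* data from a peer */
                char buf[4096];
                ssize_t r = recv(fd, buf, sizeof buf, 0);
                if (r <= 0) {                          /* peer went away */
                    epoll_ctl(epfd, EPOLL_CTL_DEL, fd, NULL);
                    close(fd);
                } else {
                    /* hand buf[0..r) to the piece-handling code here */
                }
            }
        }
    }
}

With 100 peers this buys you little over poll(), as noted above, but the structure stays the same if the peer count later grows.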

What are the advantages and disadvantages of using sockets for IPC?

I have been asked this question in some recent interviews: what are the advantages and disadvantages of using sockets for IPC when there are other ways to perform IPC? I have not found an exact answer.
Any help would be much appreciated.
Compared to pipes, IPC sockets differ by being bidirectional, that is, reads and writes can be done on the same descriptor. Pipes, unlike sockets, are unidirectional. You have to keep a pair of descriptors if you want to do both reads and writes.
Pipes, on the other hand, guarantee atomicity when reading or writing below a certain number of bytes: writing less than PIPE_BUF bytes at once is guaranteed to be delivered in one chunk and is never observed as partial. Sockets require more care from the programmer in that respect.
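To make the bidirectional point concrete, a connected pair of UNIX-domain sockets can be created with socketpair(), and each end is both readable and writable; with pipes you would need two of them (four descriptors) for the same effect. A minimal sketch, error handling omitted:

#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
        return 1;

    if (fork() == 0) {                    /* child: uses sv[1] */
        char buf[64];
        ssize_t n = read(sv[1], buf, sizeof buf - 1);
        buf[n > 0 ? n : 0] = '\0';
        printf("child got: %s\n", buf);
        write(sv[1], "pong", 4);          /* reply on the same descriptor */
        return 0;
    }

    /* parent: uses sv[0] */
    write(sv[0], "ping", 4);
    char buf[64];
    ssize_t n = read(sv[0], buf, sizeof buf - 1);
    buf[n > 0 ? n : 0] = '\0';
    printf("parent got: %s\n", buf);
    return 0;
}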
Shared memory, when used for IPC, requires explicit synchronisation from the programmer. It may be the most efficient and most flexible mechanism, but that comes at an increased complexity cost.
Another point in favour of sockets: an app using sockets can easily be distributed, i.e. it can run on one host or be spread across several hosts with little effort. This depends, of course, on the nature of the app.
Perhaps this is too simplified an answer, yet it is an important detail. Sockets are not supported on all operating systems. Recently, I became aware of a project that used sockets for IPC all over the place, only to find that it was forced to move from Linux to a proprietary OS that was POSIX-compliant but did not support sockets the same way as Linux.
Sockets allow you a few benefits...
You can connect a simple client to them for testing (manually enter data, see the response).
This is very useful for debugging, simulating and blackbox testing.
You can run the processes on different machines. This can be useful for scalability and is very helpful in debugging / testing if you work in embedded software.
It becomes very easy to expose your process as a service.
But there are drawbacks as well:
Overhead is greater than IPC optimized for a single machine. Shared memory in particular is better if you need the performance, and you know your processes are all on the same machine.
Security - if your client apps can connect so can anyone else, if you're not careful about authentication. Data can also be sniffed if you're not encrypting, and modified if you're not at least signing data sent over the wire.
Using a true message queue tends to leave you with fixed-size messages. If you have a large number of messages of wildly varying sizes, this can become a performance problem. Using a socket can be a way around this, though you're then left trying to wrap this functionality so it behaves like a queue, which is tricky to get right in detail, particularly aspects like blocking/non-blocking behaviour and atomicity.
Shared memory is quick but requires management (you end up writing a version of malloc to manage the SHM), plus you have to synchronise and lock it in some way. Though you can use libraries to help with this, their availability depends on your environment and language.
Queues are easy, but their downsides are the mirror image of the pros in my socket discussion above.
Pipes have been covered by Blagovest's answer to this question.
As is ever the case with this kind of stuff I would suggest reading the W. Richard Stevens books on IPC and sockets. There is no better explanation than his! :-)

C HTTP server - multithreading model?

I'm currently writing an HTTP server in C so that I'll learn about C, network programming and HTTP. I've implemented most of the simple stuff, but I'm only handling one connection at a time. Currently, I'm thinking about how to efficiently add multitasking to my project. Here are some of the options I thought about:
Use one thread per connection. Simple but can't handle many connections.
Use non-blocking API calls only and handle everything in one thread. Sounds interesting, but heavy use of select() and the like is said to be quite slow.
Some other multithreading model, e.g. something complex like lighttpd uses. (Probably) the best solution, but (probably) too difficult to implement.
Any thoughts on this?
There is no single best model for writing multi-tasked network servers. Different platforms have different solutions for high performance (I/O completion ports, epoll, kqueues). Be careful about going for maximum portability: some features are mimicked on other platforms (i.e. select() is available on Windows) and yield very poor performance because they are simply mapped onto some other native model.
Also, there are other models not covered in your list. In particular, the classic UNIX "pre-fork" model.
In all cases, use any form of asynchronous I/O when available. If it isn't, look into non-blocking synchronous I/O. Design your HTTP library around asynchronous streaming of data, but keep the I/O bit out of it. This is much harder than it sounds. It usually implies writing state machines for your protocol interpreter.
That last bit is the most important, because it will allow you to experiment with different representations. It might even allow you to write a compact core for each platform, using that platform's local high-performance tools, and swap this core from one platform to the other.
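To give a feel for what such a protocol state machine looks like, here is a hypothetical fragment (the names are made up for illustration) that consumes whatever bytes have arrived so far and remembers its position, so parsing behaves the same whether a request arrives in one read or twenty. Header accumulation and Content-Length extraction are deliberately elided.

#include <stddef.h>

typedef enum {
    PARSE_REQUEST_LINE,   /* "GET /path HTTP/1.1" */
    PARSE_HEADERS,        /* header lines until a blank line */
    PARSE_BODY,           /* Content-Length bytes, if any */
    PARSE_DONE
} parse_state;

typedef struct {
    parse_state state;
    int         at_line_start;   /* last byte seen was a newline */
    size_t      body_remaining;  /* set from Content-Length (elided here) */
} http_parser;

/* Feed len bytes; returns how many were consumed. Call again as more arrive. */
size_t http_feed(http_parser *p, const char *buf, size_t len)
{
    size_t i = 0;
    while (i < len && p->state != PARSE_DONE) {
        char c = buf[i++];
        switch (p->state) {
        case PARSE_REQUEST_LINE:
            if (c == '\n') {                       /* request line finished */
                p->state = PARSE_HEADERS;
                p->at_line_start = 1;
            }
            break;
        case PARSE_HEADERS:
            if (c == '\n') {
                if (p->at_line_start)              /* blank line: headers done */
                    p->state = p->body_remaining ? PARSE_BODY : PARSE_DONE;
                p->at_line_start = 1;
            } else if (c != '\r') {
                p->at_line_start = 0;              /* inside a header line */
            }
            break;
        case PARSE_BODY:
            if (--p->body_remaining == 0)
                p->state = PARSE_DONE;
            break;
        default:
            break;
        }
    }
    return i;
}

The point is that http_feed() never blocks and never cares how the bytes arrived, which is exactly what keeps the I/O layer swappable.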
Yea, do the one that's interesting to you. When you're done with it, if you're not utterly sick of the project, benchmark it, profile it, and try one of the other techniques. Or, even more interesting, abandon the work, take the learnings, and move on to something completely different.
You could use an event loop as in node.js:
Source code of node (c, c++, javascript)
https://github.com/joyent/node
Ryan Dahl (the creator of Node) outlines the reasoning behind the design of node.js, non-blocking I/O and the event loop as an alternative to multithreading in a web server.
http://www.yuiblog.com/blog/2010/05/20/video-dahl/
Douglas Crockford discusses the event loop in Scene 6: Loopage (Friday, August 27, 2010)
http://www.yuiblog.com/blog/2010/08/30/yui-theater-douglas-crockford-crockford-on-javascript-scene-6-loopage-52-min/
An index of Douglas Crockford's above talk (if further background information is needed). Doesn't really apply to your question though.
http://yuiblog.com/crockford/
Look at your platform's most efficient socket polling model - epoll (Linux), kqueue (FreeBSD), WSAEventSelect (Windows). Perhaps combine it with a thread pool and handle N connections per thread. You could always start with select() and then replace it with a more efficient model once it works.
A simple solution might be having multiple processes: have one process accept connections, and as soon as the connection is established fork and handle the connection in that child process.
An interesting variant of this technique is used by the SER/OpenSER/Kamailio SIP proxy: there is one main process that accepts the connections and multiple child worker processes, connected via pipes. The parent sends each new file descriptor through the socket. See the book excerpt at 17.4.2, Passing File Descriptors over UNIX Domain Sockets. The OpenSER/Kamailio SIP proxies are used for heavy-duty SIP processing where performance is a huge issue, and they do very well with this technique (plus shared memory for information sharing). Multi-threading is probably easier to implement, though.
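For completeness, the descriptor passing itself is done with sendmsg() and an SCM_RIGHTS control message over a UNIX domain socket; a minimal sender might look roughly like this (the receiver is symmetric, using recvmsg()). This is a sketch with no error checking, and send_fd is just an illustrative name:

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Send the open descriptor fd_to_send over the UNIX domain socket unix_sock. */
int send_fd(int unix_sock, int fd_to_send)
{
    char dummy = 'x';                               /* must send at least one byte */
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };

    char cbuf[CMSG_SPACE(sizeof(int))];
    memset(cbuf, 0, sizeof cbuf);

    struct msghdr msg = {
        .msg_iov = &iov, .msg_iovlen = 1,
        .msg_control = cbuf, .msg_controllen = sizeof cbuf
    };

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type  = SCM_RIGHTS;
    cmsg->cmsg_len   = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd_to_send, sizeof(int));

    return sendmsg(unix_sock, &msg, 0) == 1 ? 0 : -1;
}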

Trying to implement a concurrent TCP server and client in C

I have implemented a concurrent TCP server and client in C using threads as well as forks.
But I don't have any way to check whether there is another, more standard way of implementing this.
I have googled for standard coding examples but didn't find anything useful.
Can someone please share some good links or code so that I can get a standard idea of how to implement concurrent servers?
Thanks for the help.
There's no "standard idea". Your approach is going to depend on requirements, performance, scalability, and amount of time allowed for development.
One thread per client (see the sketch after this list)
Possibly with a threadpool
Multi-threaded pipeline model, with N workers
One thread per server, using poll/select
One thread per server, event-based with callbacks
Forking children, one per client connection
Pre-forking children, e.g. the Apache web server
Etc. All of these have their uses.
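As a sketch of the first item, a thread-per-client accept loop (POSIX threads) is about as short as a concurrent server gets; handle_client() is a hypothetical placeholder for your protocol logic, and error handling is omitted:

#include <pthread.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <unistd.h>

static void *client_thread(void *arg)
{
    int sock = *(int *)arg;
    free(arg);
    /* handle_client(sock): read requests, write responses */
    close(sock);
    return NULL;
}

void accept_loop(int listen_fd)
{
    for (;;) {
        int c = accept(listen_fd, NULL, NULL);
        if (c < 0)
            continue;
        int *arg = malloc(sizeof *arg);             /* heap copy for the thread */
        *arg = c;
        pthread_t tid;
        pthread_create(&tid, NULL, client_thread, arg);
        pthread_detach(tid);                        /* fire and forget */
    }
}

Simple, but as the list above suggests, one thread per connection stops being attractive as the connection count grows; a thread pool or a poll/select loop scales further.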
Some good links for you:
Beej's Guide to Network Programming will help you get the basics down,
The C10K problem will give you an overview of the design landscape,
High-Performance Server Architecture will make you re-think the "standard" approaches.
Hope this helps.
Concurrent servers (as long as they are quite simple, and performance is not much of an issue) are often created with poll() or select(). Now, this is assuming you are on *nix.
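A poll()-based skeleton of that idea is short; the sketch below assumes listen_fd is already listening, keeps a fixed-size descriptor array, simply echoes data back, and leaves out error handling:

#include <poll.h>
#include <sys/socket.h>
#include <unistd.h>

#define MAX_FDS 256

void poll_loop(int listen_fd)
{
    struct pollfd fds[MAX_FDS];
    int nfds = 1;
    fds[0].fd = listen_fd;
    fds[0].events = POLLIN;

    for (;;) {
        if (poll(fds, nfds, -1) < 0)
            continue;

        if ((fds[0].revents & POLLIN) && nfds < MAX_FDS) {   /* new client */
            fds[nfds].fd = accept(listen_fd, NULL, NULL);
            fds[nfds].events = POLLIN;
            fds[nfds].revents = 0;
            nfds++;
        }

        for (int i = 1; i < nfds; i++) {
            if (!(fds[i].revents & POLLIN))
                continue;
            char buf[4096];
            ssize_t n = recv(fds[i].fd, buf, sizeof buf, 0);
            if (n <= 0) {                           /* closed: compact the array */
                close(fds[i].fd);
                fds[i] = fds[--nfds];
                i--;
            } else {
                send(fds[i].fd, buf, (size_t)n, 0); /* echo back */
            }
        }
    }
}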
If you can use C++, the Boost libraries have Asio, which is a cross-platform library that allows you to write once and compile everywhere. There isn't really a standard way to do things, since the ideas vary from OS to OS.

Where can I find benchmarks on different networking architectures?

Where can I find benchmarks on different networking architectures?
I am playing with sockets / threads / forks and I'd like to know which is best. I figure there has got to be a place where someone has already spelled out all the pros and cons of the different architectures for a socket service and listed benchmarks with code that runs.
Ultimately I'd like to run these various configurations with my own code and see which runs best in different circumstances.
Many people I talk to say that I should just use single-threaded select(). But I see an argument for threads when you're storing state information inside the thread to keep the code simple. What is the trade-off point between writing my own state structure and using a proven thread architecture?
I've also been told forking is bad... but when you need 12000 connections on a machine that cannot raise the per-process open-file limit, forking is an option! Forking is also nice for stability: when one process needs restarting, it doesn't disturb the others.
Sorry, this is one of my longer questions... so many variables are left empty.
Thanks,
Chenz
Edit: here's the link I was looking for, which is a whole paper answering this question: http://www.kegel.com/c10k.html
There are web servers designed along all three models (fork, thread, select). People like to benchmark web servers.
http://www.lighttpd.net/benchmark
Libevent has some benchmarks and links to stuff about how to choose a select() vs. threaded model, generally in favour of using the libevent model.
http://monkey.org/~provos/libevent/
It's very difficult to answer this question, as so much depends on what your service is actually doing. Does it have to query a database? Read files from the filesystem? Perform complicated calculations? Talk to some other service? Also, how long-lived are client connections? Might connections have some semantic interaction with other connections, or are they all treated as independent of each other? Might you want to think about load-balancing your service across multiple servers later? (If so, you might usefully think about that now, so that any necessary support can be designed in from the start.)
As you hint, the serving machine might have limits which interact with the various techniques, steering you towards one answer or another. You have a per-process file descriptor limit, but remember that you may also have a fixed size process table! How many concurrent clients are you expecting, anyway?
If your service keeps crashing and you need to keep restarting it or you think you want a multi-process model so that connections are isolated from each other, you're probably doing it wrong. Stability is extremely important in this sort of context, and that means good practice and memory hygiene, both in general and in the face of network-based attacks.
Remember the history... fork() is cheap in the Unix world, but spawning new processes is relatively expensive on Windows. On the other hand, Windows threads are lightweight, whereas threading has always been a bit alien to Unix and has only relatively recently become widespread.
