I find it hard to believe there isn't an answer or tutorial for this, but am struggling to find one anywhere!
I have to build (and have built) a multithreaded server to handle GET requests in C.
For full marks this needs to use a thread pool. Currently my main thread accepts connections and passes them on to a new thread.
I can find a few implementations of thread pools in C online, but coming from a Java background I'm finding them hard to understand. They also all seem to use a task queue.
This seems unnecessary, considering you can tell the listen() call to queue connections.
I saw somewhere that accept() is thread-safe (that said, I also hear that when POSIX says "safe" it's more of a "safe-ish"?).
Is this a sensible approach to take? Or will the overhead be higher with each thread waiting on accept() instead of blocking until it is handed a connection?
If that is the case, how would I go about doing this in C? I presume I would need to keep a thread-safe data structure storing a pointer to each thread and a flag indicating whether it is busy?
And have some way to wake a thread and pass it a connection? But I have no idea how to do this and can't find any simple tutorials online.
Any advice or links to tutorials would be much appreciated!
Thanks
accept() is thread-safe.
Actually, what you describe is an elegant way to implement a socket server using a thread pool: call accept() in all of the threads, and the operating system will take care of waking only one of them when a connection arrives. Good job; I had never really thought about this option when I had to implement such things.
As far as I can see there's no real overhead in calling accept() in multiple threads at the same time: all threads will sleep until a connection can be accepted, so they won't consume any CPU time while waiting.
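For illustration, a minimal sketch of this approach, assuming a pool of eight threads and port 8080 (both arbitrary choices); error handling and the actual GET handling are elided:

    #include <pthread.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <string.h>
    #include <unistd.h>

    #define POOL_SIZE 8   /* assumed pool size; tune for your workload */

    static void handle_client(int fd)
    {
        /* ... parse the GET request, send the response ... */
        close(fd);
    }

    static void *worker(void *arg)
    {
        int listen_fd = *(int *)arg;
        for (;;) {
            /* All workers sleep here; the kernel wakes one per connection. */
            int client_fd = accept(listen_fd, NULL, NULL);
            if (client_fd >= 0)
                handle_client(client_fd);
        }
        return NULL;
    }

    int main(void)
    {
        int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(8080);          /* example port */
        bind(listen_fd, (struct sockaddr *)&addr, sizeof addr);
        listen(listen_fd, SOMAXCONN);

        pthread_t pool[POOL_SIZE];
        for (int i = 0; i < POOL_SIZE; i++)
            pthread_create(&pool[i], NULL, worker, &listen_fd);
        for (int i = 0; i < POOL_SIZE; i++)
            pthread_join(pool[i], NULL);
        return 0;
    }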
I have a network application on a gateway. It receives and sends packets. For most of them, my gateway acts as a router, but in some cases the packets can be destined for the gateway itself.
Should I have:
only one main thread
a main thread + a dispatch thread in charge of giving it to the correct flow handler
as many threads as there are flows
something else?
Doing multithreading correctly is no simple matter; in many cases a solution based on select() and friends will be a whole lot easier to create.
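For example, a bare-bones select() loop might look like this (a sketch only: socket setup and error handling are elided, and serve is an invented name):

    #include <sys/select.h>
    #include <sys/socket.h>
    #include <unistd.h>

    void serve(int listen_fd)
    {
        fd_set master, readable;
        FD_ZERO(&master);
        FD_SET(listen_fd, &master);
        int max_fd = listen_fd;

        for (;;) {
            readable = master;
            /* Blocks until a descriptor has data or a connection is pending. */
            if (select(max_fd + 1, &readable, NULL, NULL, NULL) < 0)
                break;
            for (int fd = 0; fd <= max_fd; fd++) {
                if (!FD_ISSET(fd, &readable))
                    continue;
                if (fd == listen_fd) {
                    int client = accept(listen_fd, NULL, NULL);
                    if (client >= 0) {
                        FD_SET(client, &master);
                        if (client > max_fd) max_fd = client;
                    }
                } else {
                    char buf[512];
                    ssize_t n = recv(fd, buf, sizeof buf, 0);
                    if (n <= 0) {            /* peer closed or errored */
                        close(fd);
                        FD_CLR(fd, &master);
                    } /* else: process buf[0..n) */
                }
            }
        }
    }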
Your case sounds a lot like a typical Unix service daemon. The popular solution to your problem is not to use threads, but forks.
The idea is that your program listens on the socket and waits for connections. As soon as a connection arrives, it forks. The child process then continues to process the connection. The parent process itself just continues in the loop and waits for incoming connections.
Advantages over threading:
Very simple program design
No problems with concurrency
Established method for Unix/Linux systems
Disadvantages:
Things get complicated when several connections have to interact with each other (it doesn't sound like your use case would require that)
Performance penalty on Windows systems (not on Unix systems!)
You can find many code examples online.
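For instance, the accept/fork loop might look like this (a sketch: listen_fd setup and error handling are elided, and SIGCHLD is ignored so the kernel reaps exited children automatically):

    #include <sys/socket.h>
    #include <signal.h>
    #include <unistd.h>

    void serve(int listen_fd)
    {
        signal(SIGCHLD, SIG_IGN);   /* auto-reap exited children */
        for (;;) {
            int client = accept(listen_fd, NULL, NULL);
            if (client < 0)
                continue;
            pid_t pid = fork();
            if (pid == 0) {              /* child: owns the connection */
                close(listen_fd);
                /* ... handle the client, then ... */
                close(client);
                _exit(0);
            }
            close(client);               /* parent: back to accept() */
        }
    }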
I don't know much about networking applications, but I think it works like this:
If you are able to react to requests asynchronously, you would probably use just a single thread (as in Node.js). If you can't react asynchronously, the main thread will block all other actions.
If you are not able to react asynchronously to your requests, you have to use more than one thread. You could achieve that in several ways: create a thread for every request, or create a limited number of threads and then assign them to your requests.
My personal preference is to use one main thread and one worker thread per connection, with no cap whatsoever. I am assuming that your server will be stateless, like an HTTP server.
For stateful servers you will have to figure out some way to control number of threads.
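A sketch of that uncapped thread-per-connection model (connection_main and accept_loop are invented names; the client fd is heap-copied so each thread gets its own value rather than racing on a shared variable):

    #include <pthread.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/socket.h>

    static void *connection_main(void *arg)
    {
        int fd = *(int *)arg;
        free(arg);
        /* ... serve the request(s) on fd ... */
        close(fd);
        return NULL;
    }

    void accept_loop(int listen_fd)
    {
        for (;;) {
            int client = accept(listen_fd, NULL, NULL);
            if (client < 0)
                continue;
            int *arg = malloc(sizeof *arg);
            *arg = client;
            pthread_t tid;
            pthread_create(&tid, NULL, connection_main, arg);
            pthread_detach(tid);   /* no join needed; resources freed on exit */
        }
    }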
I'm writing a TCP server/client application on Windows, to become familiar with the Winsock API. I come from a UNIX background and would like to know which of these could be the best approach to implement the application:
First, the specification:
Must scale well on multiprocessor and single-processor systems.
No hard-set limit on the number of connections.
Application can both listen for connections, acting as server, and act as client.
Multithreaded.
First approach:
Non-blocking select-like socket for listening, in the 'server' thread.
For each client connecting we spawn a separate thread.
Second approach:
Blocking socket for listening, in the 'server' thread.
For each client connecting we spawn a separate thread.
Third approach:
Non-blocking select-like socket for listening, in the 'server' thread.
No separate thread for each incoming connection; the protocol would need state information kept across sessions, I suppose.
I wonder what is the most efficient and scalable approach, and especially if it can work with a UDP socket too.
Note: I'm writing the application in plain old C. No .NET nor C++ involved, and C++ exceptions are disabled too.
As Gary says, I/O Completion Ports are the most efficient way to manage multiple network connections in a non-blocking/async manner on Windows platforms.
With IOCP you get notified when your networking operations complete, and you can process these completions with a small number of threads. You decide how many threads to allocate for processing completions, and the kernel decides when to use the threads that you provide. It uses them in LIFO order to reduce context switching, so at any given moment only the minimal number of threads required is in use, and the same threads are reused rather than cycling through all of the threads you have available.
The asynchronous nature of IOCP programming can be a little confusing to start with, but once you get the hang of it it's fairly straightforward.
I have some free IOCP server code which demonstrates the basics and provides some example servers that are pretty easy to build on. You can find the code here: http://www.serverframework.com/products---the-free-framework.html. That page also links to some articles that I wrote to explain the code.
Relating this to the detail of your question: you should be looking at a variation on your third approach. Use AcceptEx() to accept new connections; it can be used in an asynchronous manner, so you don't need a separate thread for connection acceptance and can use the same threads that are processing your overlapped/async read and write operations.
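A minimal IOCP skeleton showing the shape of this (a sketch only: WORKER_COUNT is arbitrary, the bind/listen/AcceptEx wiring is elided, and real code would hang a per-operation context off the OVERLAPPED pointer):

    #include <winsock2.h>
    #include <windows.h>
    #pragma comment(lib, "ws2_32.lib")

    #define WORKER_COUNT 2   /* assumed; typically sized to CPU count */

    static DWORD WINAPI worker(LPVOID arg)
    {
        HANDLE iocp = (HANDLE)arg;
        DWORD bytes;
        ULONG_PTR key;
        LPOVERLAPPED ov;
        for (;;) {
            /* Blocks until an overlapped operation on an associated
               socket completes; the kernel wakes workers in LIFO order. */
            BOOL ok = GetQueuedCompletionStatus(iocp, &bytes, &key, &ov, INFINITE);
            if (!ok && ov == NULL)
                break;            /* port closed or fatal error */
            /* 'key' identifies the socket, 'ov' the operation; dispatch here. */
        }
        return 0;
    }

    int main(void)
    {
        WSADATA wsa;
        WSAStartup(MAKEWORD(2, 2), &wsa);

        /* One completion port shared by all sockets and all workers. */
        HANDLE iocp = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0);
        for (int i = 0; i < WORKER_COUNT; i++)
            CreateThread(NULL, 0, worker, iocp, 0, NULL);

        SOCKET s = WSASocket(AF_INET, SOCK_STREAM, IPPROTO_TCP,
                             NULL, 0, WSA_FLAG_OVERLAPPED);
        /* ... bind/listen and post AcceptEx here (elided) ... */

        /* Associate the socket: completions for overlapped operations
           on it are then queued to the worker threads above. */
        CreateIoCompletionPort((HANDLE)s, iocp, (ULONG_PTR)s, 0);

        Sleep(INFINITE);
        return 0;
    }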
I've written an asynchronous client which does not use blocking sockets, so if you're interested in that approach, then take a look at my client: http://codesprout.blogspot.com/2011/04/asynchronous-http-client.html
It's an HTTP client, but I've shown very little HTTP protocol processing in there; it's all just .NET sockets. The server would work in a similar way: you can take advantage of the *Async methods such as AcceptAsync.
Under Windows, the best performance is achieved by using I/O completion calls.
This is because the list and queue management is done in the kernel, far from the heavy user-mode overhead (which drags your code down if you dare to do the hard work yourself).
Unfortunately, Windows I/O completion calls need to allocate many threads to scale, and this quickly kills performance (compared to Linux's epoll, which can scale independently of the number of worker threads you decide to involve in the task).
Recently, I discovered http://gwan.com/ a Web server which started on Windows and was then ported to Linux. Its authors describe the problem in detail on their forum.
I've found out that native file access has no "non-blocking" mode. (Am I correct?)
I've been googling for daemons which are "non-blocking", and I've found one which achieved said behavior by handing file access operations off to threads, so that the daemon won't block.
My question is: wouldn't threading and IPC'ing such operations be rather expensive? Wouldn't it make more sense to either:
A) Pre-spawn a thread pool, giving each client its own thread and letting it block on whichever blocking operations it might need. Or,
B) In the case of blocking file access, use a relatively small buffer; that way it's still blocking, but one would assume that a tiny buffer for multiple operations would make more sense than paying the price of threading each operation and IPC'ing it?
If you use threading, little IPC overhead is needed. You have the same memory space for all your threads, so a simple mutex or semaphore may be all you need. Now, if you are blocking on a mutex or semaphore too long or too often, why use async I/O in the first place?
As to the actual computation performed by threads doing I/O, they are waiting for the kernel to wake them up most of the time, so I wouldn't worry.
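To illustrate how cheap that "IPC" can be when everything shares one address space, here's a sketch of a mutex-plus-condition-variable handoff between I/O threads and a consumer (struct result, publish, and wait_for_result are invented names for illustration):

    #include <pthread.h>
    #include <stddef.h>

    struct result {
        struct result *next;
        /* ... payload, e.g. a buffer filled by a blocking file read ... */
    };

    static struct result *head = NULL;
    static pthread_mutex_t queue_lock     = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  queue_nonempty = PTHREAD_COND_INITIALIZER;

    /* Called by an I/O thread after a blocking read completes. */
    void publish(struct result *r)
    {
        pthread_mutex_lock(&queue_lock);
        r->next = head;
        head = r;
        pthread_cond_signal(&queue_nonempty);
        pthread_mutex_unlock(&queue_lock);
    }

    /* Called by the consumer; sleeps until an I/O thread publishes. */
    struct result *wait_for_result(void)
    {
        pthread_mutex_lock(&queue_lock);
        while (head == NULL)
            pthread_cond_wait(&queue_nonempty, &queue_lock);
        struct result *r = head;
        head = r->next;
        pthread_mutex_unlock(&queue_lock);
        return r;
    }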
If your application is going to revolve around reading files and other I/O sources, you may want to read up on the Reactor pattern and event-driven programming.
Also, you mentioned a daemon, and servicing clients. If the service you provide is reading files, the computational cost of spawning a new thread to serve each client is minimal, since each individual thread will take "long" to complete requests, and block most of the time anyway. There may be a memory problem if your client count is in the thousands, but otherwise I think you'll do okay.
Give us a little more detail about what you want to do; maybe there are more straightforward ways.
I have to implement a game server in C which handles multiple clients and continuously exchanges information with them. A client may not be sending information at all times. Should I assign each client a thread with a non-blocking socket, or use a select() call?
Which one is better?
Both will work just as well in most cases. Note that the threaded version will use blocking sockets, while the select-based version uses non-blocking sockets. In the case of a server, you may feel that events for received data are a good model.
The threaded version will have the memory-overhead of allocating a stack for each thread (often the size of a page), but you can program as if you have only one client.
The evented version needs to maintain state between callbacks, which can be more work, but again, in servers it feels quite natural.
select() is the way to go; that's what it's made for. If you go for the threaded non-blocking approach, you will have to implement a sleep after each tick or the thread will use all available CPU time, so the worst-case response time when a client sends data is your sleep value. You could also implement one blocking thread per socket, but depending on how many sockets you have, that will be a lot of overhead.
With select() you can watch all sockets at once (no matter if they are blocking or not, btw) and only have to process those which are active.
If you are on Linux and have many sockets to watch, you can take a look at epoll().
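For comparison, the same kind of loop with epoll() might look like this (a sketch: setup and error handling elided, serve is an invented name):

    #include <sys/epoll.h>
    #include <sys/socket.h>
    #include <unistd.h>

    void serve(int listen_fd)
    {
        int ep = epoll_create1(0);
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = listen_fd };
        epoll_ctl(ep, EPOLL_CTL_ADD, listen_fd, &ev);

        struct epoll_event ready[64];
        for (;;) {
            int n = epoll_wait(ep, ready, 64, -1);   /* blocks; no busy loop */
            for (int i = 0; i < n; i++) {
                int fd = ready[i].data.fd;
                if (fd == listen_fd) {
                    int client = accept(listen_fd, NULL, NULL);
                    if (client >= 0) {
                        struct epoll_event cev = { .events = EPOLLIN,
                                                   .data.fd = client };
                        epoll_ctl(ep, EPOLL_CTL_ADD, client, &cev);
                    }
                } else {
                    char buf[512];
                    ssize_t r = recv(fd, buf, sizeof buf, 0);
                    if (r <= 0) {            /* peer closed or errored */
                        epoll_ctl(ep, EPOLL_CTL_DEL, fd, NULL);
                        close(fd);
                    } /* else: process the client's message in buf */
                }
            }
        }
    }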
I have a server that connects to multiple clients using TCP/IP connections, using C in Unix. Since it won't have more than 20 connections at a time, I figured I would use a thread per connection/socket. But the problem is writing to the sockets, as I'll be sending user-prompted messages to clients. Once each socket is handled by a thread, how do I interact with the created thread to write to the sockets? Should each thread just read from the sockets while I write to the sockets in the main program? Not sure if that's a good way to go about it.
My rule of thumb is that any given socket should only be operated on by a single thread(*). So if you spawn a separate I/O thread for each socket, and your main thread wants something written to an I/O thread's socket, then the main thread should send that data to the I/O thread, whereupon the I/O thread can write it to the socket.
Of course, this means you need to have a good communications method between the main thread and the I/O thread; which you could do by spawning a socket-pair for each I/O thread and having the I/O threads select()/poll() on their end of the socket-pair (to handle data coming from the main thread) as well as on their network socket.
But once you've done that, you're dealing with the complexity of using select()/poll() AND multithreading, which is a lot of complexity overhead. So unless you absolutely need multithreading for some reason, I agree with the previous posters: it's better to just handle all the sockets in a single thread, via select() or poll().
(*) It's possible to have multiple threads reading/writing to the same socket at the same time, but it's error-prone. In particular, startup and shutdown sequences can be tricky to get 100% right. That's why I try to avoid 'sharing' a given socket amongst multiple threads.
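A sketch of that socket-pair arrangement (struct io_thread and the function names are invented; the struct must outlive the thread it is handed to):

    #include <sys/socket.h>
    #include <poll.h>
    #include <pthread.h>
    #include <unistd.h>

    struct io_thread {
        int net_fd;    /* the client's TCP socket */
        int cmd_fd;    /* I/O thread's end of the socketpair */
    };

    static void *io_main(void *arg)
    {
        struct io_thread *t = arg;
        struct pollfd fds[2] = {
            { .fd = t->net_fd, .events = POLLIN },
            { .fd = t->cmd_fd, .events = POLLIN },
        };
        for (;;) {
            if (poll(fds, 2, -1) < 0)
                break;
            char buf[512];
            if (fds[0].revents & POLLIN) {
                ssize_t n = recv(t->net_fd, buf, sizeof buf, 0);
                if (n <= 0) break;           /* peer closed */
                /* ... handle inbound network data ... */
            }
            if (fds[1].revents & POLLIN) {
                ssize_t n = read(t->cmd_fd, buf, sizeof buf);
                if (n <= 0) break;           /* main thread closed the pair */
                send(t->net_fd, buf, n, 0);  /* forward main thread's message */
            }
        }
        return NULL;
    }

    /* Main thread: keep pair[0], give pair[1] to the I/O thread; a
       write(main_end, msg, len) then reaches the client via that thread. */
    int start_io_thread(struct io_thread *t, int net_fd, int *main_end)
    {
        int pair[2];
        if (socketpair(AF_UNIX, SOCK_STREAM, 0, pair) < 0)
            return -1;
        t->net_fd = net_fd;
        t->cmd_fd = pair[1];
        *main_end = pair[0];
        pthread_t tid;
        pthread_create(&tid, NULL, io_main, t);
        pthread_detach(tid);
        return 0;
    }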
Sounds like you'd probably be better with a single thread and multiplexing the sockets (using select, poll etc). This will avoid the race conditions and locking requirements which will otherwise make the program more difficult to write.
Unless you are doing significant processor-intensive work, or waiting for I/O on behalf of these clients, you'll get no benefit from using threads anyway, but the race conditions will still be there.
So I'd say, get a working implementation using a single thread, THEN if in performance testing you discover that it is lacking, refactor it to use multithreading if that seems like the best option to beat the performance problems (of course you'll be profiling it etc).
Having the main thread write to the sockets is fine; you only need to worry about having multiple threads writing to the same socket at the same time.
However, I'd test the performance of using a single thread and select/poll before bothering with the multithreaded approach.