Multi-threaded server design - C

I am trying to implement a TCP server as part of a larger project. Basically, the server should be able to maintain a TCP connection with any number of clients (a minimum of 32) and service any client that requests servicing. In our scenario it is assumed that once a client is connected to the server, it will never close the connection unless some sort of failure occurs (e.g. the machine running the client breaks down), and it will repeatedly request service from the server. The same is the case for all the other clients, i.e. each maintains a connection with the server and performs transactions. So, to sum up: the server must maintain connections with its clients, simultaneously serve each client as needed, and also be able to accept any other client connections that want to connect to the server.
Now, I implemented the above functionality using the select() system call of the Berkeley socket API, and it works fine when we have a small number of clients (say 10). But the server needs to scale as far as possible, since we are implementing it on a 16-core machine. For that I looked through various multi-threading design techniques, e.g. one thread per client, and the best one in my opinion is a thread pool design. As I was about to implement that, I ran into some problems:
If I designate the main thread to accept any number of incoming connections and save each connection's file descriptor in a data structure, and I have a pool of threads, how would the threads poll whether a particular client is requesting service or not? The design is simple enough for scenarios in which a client contacts the server and closes the connection after being serviced: we pick a thread from the pool, service the client, and push the thread back into the pool for future connection handling. But when we have to service a set of clients that maintain their connections and request services intermittently, what would be the best approach? All help will be much appreciated, as I am really stuck on this.
Thanks.

Use pthreads, with one thread per CPU plus one extra thread.
The extra thread (the main thread) listens for new connections with the listen() system call and accepts them with accept(). It then determines which worker thread currently has the fewest connections, acquires the lock/mutex for that worker thread's "pending connections" FIFO queue, places the descriptor for the accepted connection onto that queue, and sends a "check your queue" notification (e.g. using a pipe) to the worker thread.
The worker threads use "select()", and send/receive data to whatever connections they've accepted. If/when a worker thread receives a "check your queue" notification from the main thread it would acquire the lock/mutex for its "pending connections" FIFO queue and add any newly accepted connections to its "fd_set" list.
For 1024 connections and 16 CPUs; you might end up with one main thread waiting for new connections (but doing almost nothing as you wouldn't be expecting many new connections), and 16 worker threads handling an average of 64 connections each.

One thread per client is almost certainly the best design. Make sure you always have at least one thread blocked in accept() waiting for a new connection - this means that after accept() succeeds, you might need to create a new thread before proceeding if it was the last one. I've found semaphores to be a great primitive for keeping track of the need to spawn new listening threads.

Related

Simplest way to awake multiple processes with a single broadcast?

Context: this is a web/SQLite application. One process receives new data over TCP and feeds it into a SQLite database. Other processes (the number varies) are launched as needed when clients connect and request updates over HTML5's server-sent events interface (this might change to WebSockets in the future).
The idea is to force the client apps to block, and to find a way for the server to issue a notification that will wake up all waiting clients.
Note that the clients aren't forked from the server.
I'm hoping for a solution that:
doesn't require clients to register themselves to the server
allows the server to broadcast even if no client is listening - and doesn't create a huge pile of unprocessed notifications
allows clients to detect that server isn't present
allows clients to define a custom timeout (maximum wait time for an event)
Solutions checked:
sqlite3_update_hook() - only works within a single process (damned, that would have been sleek)
signals: I still have nightmares about the last time I used signals. Maybe signalfd would be better (the server creates a folder, clients create unique files in it, and the server notifies every file in that folder)
inotify - didn't read enough on this one
semaphores / locks / shared memory - can't think of a non-hacky way to use these. The server could update a shared memory area with the row ID of the line just inserted into the DB, but then what?
I'm sure I'm missing something obvious - but what? At this time, polling seems to be the best option!
Thanks.
Just as a suggestion, can you try message queues? Multiple clients can connect to the same queue and receive broadcast messages, and each client can have its own message queue if it requires communication with the server.
Message queues are implemented by the Linux kernel and are very reliable. I personally use message queues to pass messages from several clients to a central routing daemon, with the clients being responsible for processing and returning the altered data.

Creating two sockets at a UDP server at the same time - how to bind()?

I want two functionalities to be implemented in my UDP server application:
Creating a thread that continuously receives data coming from any client.
Creating a thread that continuously sends data on the server socket after a specific time period and waits for a reply from the client. (I implemented this to make sure that whenever any client goes down, no data comes back from that client and the server comes to know that the client is down.)
Now, the problem I am facing is that since the two threads share the same connected socket, a deadlock occurs whenever both threads try to access the socket simultaneously.
One solution I found was to create two sockets: one that continuously receives data, and another meant for sending data from the server to clients from time to time and waiting for their responses. But since the server socket must be bind()ed, and I have already bind()ed my socket to INADDR_ANY once, how would I create a separate socket for sending data from the server and waiting for replies from clients?
Please help me with this complication.
Also do let me know if there is some other better way of its implementation.
Thanks in advance :)
You will have to use non-blocking network functions and a mutex to ensure no two threads access the socket at once.
A single thread may, however, be enough if you use non-blocking functions. Using many threads will probably not improve performance, but may make the code more readable.

Use of Listen() sys call in a multi threaded TCP server

I am in the middle of a multi-threaded TCP server design using the Berkeley socket API under Linux, in system-independent C. The server has to perform I/O multiplexing, as it is a centralized controller that manages the clients (which maintain a persistent connection with the server forever, unless the machine a client is running on fails, etc.). The server needs to handle a minimum of 500 clients.
I have a 16-core machine. What I want is to spawn 16 threads (one per core) plus a main thread. The main thread will listen() for connections and then dispatch each connection on the queue to a thread, which will then call accept() and use the select() system call to perform I/O multiplexing. Now the problem is: how do I know when to dispatch a thread to call accept()? I mean, how do I find out in the main thread that there is a connection pending at the listening socket, so that I can assign a thread to handle it? All help much appreciated.
Thanks.
The listen() function call prepares a socket to accept incoming connections. You then use select() on that socket and get a notification that a new connection has arrived. You then call accept() on the server socket, and a new socket descriptor will be returned. If you like, you can then pass that descriptor on to your thread.
What I would do is have a single thread for accepting connections and receiving data which then dispatches the data to a queue as a work item for processing.
Note that if each of your 16 threads is going to be running select (or poll, or whatever) anyway, there is no problem with them all adding the server socket to their select sets.
More than one may wake when the server socket has an incoming connection, but only one will successfully call accept, so it should work.
Pro: easy to code.
Con:
a naive implementation doesn't balance load (it would need e.g. global stats on the number of accepted sockets handled by each thread, with high-load threads removing the server socket from their select sets)
thundering herd behaviour could be problematic at high accept rates
epoll or aio/asio. I suspect you got no replies to your earlier post because you didn't specify Linux when you asked for a scalable, high-performance solution. Asynchronous solutions on different OSes are implemented with substantial kernel support, and Linux AIO, Windows IOCP etc. are different enough that 'system independent' does not really apply - nobody could give you an answer.
Now that you have narrowed the OS down to linux, look up the appropriate asynchronous solutions.

Interleaved messages from multiple clients to a server

This question is related to Socket programming in C and Sleeping a worker thread in a file server.
I am very new to socket as well as pthreads and having to handle quite a large project.
I would like to know if a scenario as below is possible and how?
I have multiple clients connecting to a server, and each client sends multiple messages to the server. Each client is serviced by a task/worker thread. A client sends a message and, upon receiving a reply, sends the next message until it is done and closes the connection. The task thread processes one request from the client, sends its reply, and sleeps until it receives the next message from the same client, until the client closes the connection and the thread exits.
Now, as I said, multiple clients connect at the same time. Will the server process all messages from one client and then service the next, or receive messages in an interleaved manner as they arrive, keeping the connections of all 'live' clients open?
Will the server process all messages from one client and then service the next or receive messages in an interleaved manner as they arrive, keeping connections of all 'live' clients open?
The server process can handle multiple clients at the same time or in an interleaved manner, depending on your CPU and program architecture.
Threaded programming + multi-core or multi-CPU can handle those requests at the same time. ^_^

Another process and thread question

This question is related to Many processes executed by one thread and Socket server or file server implementation using multiple threads: concept not clear.
I am still unclear about a few things. The client and server of a socket server or a file server need not be on different machines (of course, they can be).
The requests the server receives come from different PROCESSES, but they are processed by threads (say, one per client process), and these task threads belong to a different process (the server process). What I am confused about is: how can calls from different processes be processed by the threads of a single process, with those threads communicating through a 'shared memory' architecture - something that is so 'THREAD'-like and very unlike 'PROCESSES'?
Thanks
Some simple ground work. Your server process contains one or more threads to process requests from any number of client processes. The clients and server can be on either the same or different machines. Clients and server are "connected" by sockets which are used to send requests from the client to the server. The same socket will be used to provide a response to the client once the request has been processed. Each client will have a unique connection to the server.
There are many ways to implement a server as described above. One possibility is that the server has one thread which handles the sockets using select(). Let's call this the Main Thread. The server process will also have several threads that are responsible for processing the requests and responding to the clients. Let's call these Worker Threads.
When the Main Thread receives a message from one of its clients' sockets, the Main Thread will take this request and hand it to one of the Worker Threads for processing. The Worker Thread will take that request, process it, and then respond using the original socket.
This server model uses a producer/consumer model where the Main Thread is the producer (in that it takes a request from the socket and produces a piece of work requiring processing) and the consumers are the Worker Threads.
There are several challenges in implementing this type of server, all of which are documented and discussed in various Data Structures and Algorithms texts, not the least of which are:
How do the Main and Worker threads communicate?
How do I protect data shared by the various threads from simultaneous modification?
How do I select which Worker thread should process a request?
I hope this helps.
