C Multithreading: Deadlocks and Thread Events

I am trying to perform multithreading on a socket in C in order to develop a connector between two different software applications. I would like it to work in the following manner. One piece of software will run as the server; it will perform a variety of functions, including listening for a socket connection on a designated port. This software will function by itself and only use data from the connected network socket once a connection is established and receiving reliable data. So for this piece I would like to listen for a connection, fork a process when one is made, and, when data is received from this socket, set some variable that an update thread will check to learn that this extra, higher-precision information is available for consideration. On the other side of the equation, I want to create a program that on startup will attempt to connect to the other application's port; once connected, it will simply call a function that sends out the information in a non-blocking fashion. My whole goal is to create a connector that lets the programmers of the other two pieces of code feel as though they aren't dealing with a socket whatsoever.
I have been able to get multithreaded socket communication going, but I am now trying to modify it so it will be usable as I have described. I am confused about how to avoid multiple access to the variable that notifies the server side that data has arrived, and about how to create the non-blocking interaction on the client side. Any help will be appreciated.
-TJ

The question is not entirely clear to me, but if you need to make different pieces of software talk to each other easily, you can consider using a messaging library like ZeroMQ: www.zeromq.org

It seems like you have a double producer-consumer problem here:
Client side:  producer -> sender thread -- network -->
Server side:  receiver thread -> consumer thread
In this case, the most useful data structure to use is a blocking queue on both sides, like Intel TBB's concurrent_bounded_queue.
This allows you to post tasks from one thread and have another thread pull the data when it's available in a thread-safe manner.
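For reference, here is a minimal sketch of the same idea in plain C with POSIX threads (concurrent_bounded_queue itself is C++; every name below is invented for the example): a fixed-capacity queue where push blocks while the queue is full and pop blocks while it is empty.

```c
/* Minimal bounded blocking queue sketch, plain C + pthreads. */
#include <pthread.h>

#define QCAP 64

typedef struct {
    void *items[QCAP];
    int head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t not_empty, not_full;
} bqueue;

void bq_init(bqueue *q) {
    q->head = q->tail = q->count = 0;
    pthread_mutex_init(&q->lock, NULL);
    pthread_cond_init(&q->not_empty, NULL);
    pthread_cond_init(&q->not_full, NULL);
}

/* Blocks while the queue is full. */
void bq_push(bqueue *q, void *item) {
    pthread_mutex_lock(&q->lock);
    while (q->count == QCAP)
        pthread_cond_wait(&q->not_full, &q->lock);
    q->items[q->tail] = item;
    q->tail = (q->tail + 1) % QCAP;
    q->count++;
    pthread_cond_signal(&q->not_empty);
    pthread_mutex_unlock(&q->lock);
}

/* Blocks while the queue is empty. */
void *bq_pop(bqueue *q) {
    pthread_mutex_lock(&q->lock);
    while (q->count == 0)
        pthread_cond_wait(&q->not_empty, &q->lock);
    void *item = q->items[q->head];
    q->head = (q->head + 1) % QCAP;
    q->count--;
    pthread_cond_signal(&q->not_full);
    pthread_mutex_unlock(&q->lock);
    return item;
}
```

On the server side this also addresses the original "notify variable" concern: the consumer thread simply blocks in bq_pop() until the receiver thread posts data, so no hand-rolled shared flag is needed.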

Related

Is threading the best way to handle 40 Clients at a time in UDP Server?

I am working on a UDP server/client application.
I want my server to be able to handle 40 clients at a time. I have thought of creating 40 threads on the server side, each thread handling one client. Clients are distinguished on the basis of their IP addresses, and there is one thread for each unique IP address.
Whenever a client sends some data to a server, the main thread extracts the IP address of the client and decides which thread will process this specific client. Is there a better way to achieve this functionality?
There are different approaches for a scalable server application. One thread per client is fine if the number of clients is not large. A more efficient approach is to use a thread pool: the worker threads operate on a task basis, and whenever a new task arrives it is assigned to a free worker thread.
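As a rough illustration of that dispatch step, here is a hedged sketch in C: the main thread receives each datagram and routes it by sender address to a per-client worker (which could be a dedicated thread or a pool task). find_or_spawn_worker() and enqueue_for_worker() are hypothetical helpers, not a real API.

```c
#include <stddef.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>

struct worker;  /* per-client worker state (hypothetical) */
struct worker *find_or_spawn_worker(const struct in_addr *addr);
void enqueue_for_worker(struct worker *w, const char *data, size_t len);

void dispatch_loop(int sock) {
    char buf[1500];
    struct sockaddr_in from;
    socklen_t fromlen;

    for (;;) {
        fromlen = sizeof(from);
        ssize_t n = recvfrom(sock, buf, sizeof(buf), 0,
                             (struct sockaddr *)&from, &fromlen);
        if (n < 0)
            continue;               /* real code should inspect errno */
        /* Route by source address: one worker per unique client IP. */
        struct worker *w = find_or_spawn_worker(&from.sin_addr);
        enqueue_for_worker(w, buf, (size_t)n);
    }
}
```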
Take a look at this project; I think it is a helpful starting point: http://www.codeproject.com/Articles/16935/A-Chat-Application-Using-Asynchronous-UDP-sockets
With IPAddress.Any, we specify that the server should accept client
requests coming on any interface. To use any particular interface, we
can use IPAddress.Parse (“192.168.1.1”) instead of IPAddress.Any. The
Bind function then binds the serverSocket to this IP address. The
epSender identifies the clients from where the data is coming.
With BeginReceiveFrom, we start receiving the data that will be sent
by the client. Note that we pass epSender as the last parameter of
BeginReceiveFrom, the AsyncCallback OnReceive gets this object via the
AsyncState property of IAsyncResult, and it then processes the client
requests (login, logout, and send message to the users). Please see
the code attached to understand the implementation of OnReceive.
A better way would be to use the Proactor pattern (take a look at the Boost.Asio library) instead of creating a thread per client. With such an approach your application would have much better scalability and performance (especially on platforms that have native async I/O).
Besides, with this technique the threading is decoupled from the concurrency, meaning that you don't necessarily have to mess with multithreading and all its complications.

Send same info to multiple threads/sockets?

I am writing a server application that simply connects a local serial port to multiple network connected clients. I am using linux and C for the server application because the equipment for the program is a router with limited memory.
I have everything set up for multiple clients to connect and send data to the serial port, using a fork() process for each connection.
My problem lies in getting data incoming on the serial port out to the multiple (varying number of) client connections: I need a way for each active socket to get all of the incoming data, and to get it only once. Any help?
Sounds like you need a data queue (buffer) for each connected client. Each time data comes in on the port, you post it to the back of each client's queue. The clients then read the data from the front of their respective queues. Since all the clients will probably read at different rates/times, this will ensure all of them get a copy of the data only once, and you won't get hung up waiting for any one client while more data comes in. Of course, you'll need to allocate a certain amount of memory for each connected client's queue (I'm not sure how many clients you're expecting, and you did say your available memory is limited), and you need to consider what to do if a queue gets full before the client reads all of it.
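A sketch of that design in C, under the assumption of a linked list of clients each owning a byte queue; client_list, cqueue, cq_full, cq_push, and disconnect_client are all made-up names for the example:

```c
#include <stddef.h>

struct cqueue;                     /* per-client byte queue (hypothetical) */
struct client {
    struct client *next;
    struct cqueue *queue;
};

extern struct client *client_list; /* head of the connected-client list */
int  cq_full(struct cqueue *q, size_t len);
void cq_push(struct cqueue *q, const char *data, size_t len);
void disconnect_client(struct client *c);

/* Every chunk read from the serial port is copied into each connected
   client's queue, so each client sees the data exactly once, at its
   own pace. */
void broadcast_serial_chunk(const char *data, size_t len) {
    for (struct client *c = client_list; c != NULL; c = c->next) {
        if (cq_full(c->queue, len)) {
            /* Policy decision: drop the data, block, or drop the client.
               On a memory-limited router, dropping the slow client is
               often the safest choice. */
            disconnect_client(c);
            continue;
        }
        cq_push(c->queue, data, len);   /* one copy per client queue */
    }
}
```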
Presumably you keep a list or some other reference to the connected clients; why not just loop over that for each piece of incoming data and send it to all of them?
A thread-per-socket design might not be the best way to solve this; an event-driven asynchronous approach should be a much better fit. However, if you must do it with threads, and given that serial ports are slow anyway, building a pipe between the thread listening to the serial port and each of the threads talking to the network clients is the most practical approach. You could do fancy things with rwlocks to move the data, but you'll still need a way for the network threads to wait on both the socket and the data from the serial port, so you need to use file descriptors for both and something like poll.
But seriously, this would likely be much easier and would perform better without the threads. Think of it as a main loop that waits on poll(), which is watching both the network and the serial port, determines which event occurred, and distributes data accordingly. It should be easier all around once you get the idea.
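A minimal sketch of such a poll()-driven main loop, assuming the serial port and client sockets are already open; real code would check errno, handle short writes, and queue data per client as the earlier answer suggests:

```c
#include <poll.h>
#include <unistd.h>

#define MAX_CLIENTS 32   /* arbitrary cap for this sketch */

void main_loop(int serial_fd, int *client_fds, int nclients) {
    char buf[512];
    struct pollfd fds[1 + MAX_CLIENTS];

    for (;;) {
        /* Slot 0 is the serial port; one slot per client socket. */
        fds[0].fd = serial_fd;
        fds[0].events = POLLIN;
        for (int i = 0; i < nclients; i++) {
            fds[1 + i].fd = client_fds[i];
            fds[1 + i].events = POLLIN;
        }

        if (poll(fds, (nfds_t)(1 + nclients), -1) < 0)
            continue;   /* real code: check errno (EINTR, ...) */

        if (fds[0].revents & POLLIN) {
            ssize_t n = read(serial_fd, buf, sizeof(buf));
            /* Fan the chunk out once to every connected client. */
            for (int i = 0; n > 0 && i < nclients; i++)
                write(client_fds[i], buf, (size_t)n);
        }
        for (int i = 0; i < nclients; i++) {
            if (fds[1 + i].revents & POLLIN) {
                /* Client -> serial direction. */
                ssize_t n = read(client_fds[i], buf, sizeof(buf));
                if (n > 0)
                    write(serial_fd, buf, (size_t)n);
            }
        }
    }
}
```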

Arbitrary two-way UNIX socket communication

I've been working on a complex server-client system in C and I'm not sure how to implement the socket communication.
In a nutshell, the system is a server application which communicates with a database and uses a UNIX socket to communicate with one or more child processes created with fork(). The purpose of the children is to run game servers. The process of launching a game server is like this:
The server/"manager" identifies a game server in the database that is to be made. (Assume database communication is already sorted.)
The manager forks a child (the "game controller").
The game controller sets up two pipe pairs, then forks, replacing its child's stdin with a pipe, and it's stdout and stderr with another pipe.
The game controller's child then runs execlp() to begin running the actual game server executable.
My experience with sockets is fairly minimal. I have used select() on a server application before to 'multiplex' numerous clients, as demonstrated by the simple example in the GNU C documentation here.
I now have a new challenge, as the system must be able to do more: the manager needs to be able to arbitrarily send commands to the game controller children (that it will find by periodically checking the database) and get replies, but also expect incoming arbitrary commands/errors from them and send replies back.
So, I need a sort-of "context" system, where sockets are meaningful only between themselves. In other words, when a command is sent from the manager to the game controller, each party needs to be aware of who is asking and know what the reply is (and, therefore, which command it is a reply to).
Because select() is only useful for knowing when we have incoming data, and a thread should block on it, would I need another thread that sends data and gets the replies? Will this require each game controller, although technically a 'client', to use a listening socket and use select() as well?
I hope I've explained the system and the problem concisely; I will add more detail if required. Thanks!
Ok, I am still not really sure I understand exactly where your trouble is, so I will just spout off some things about writing a client/server app. If I am off track, just let me know.
The way that the server will know which clients correspond to which socket is that the clients will tell the server. Essentially, you need to have a log-in protocol. When the game controller connects to the server, it will send a message that says "Hi, I am registering as controller foo1 on host xyz, port abc..." and whatever else the server needs to know about its clients. The server will keep a data structure that maps sockets to client metadata, state, etc. Whenever it gets a new message, it can easily map from the incoming host/port to its metadata. Or your protocol can require that on each incoming message, the client will send the name it registered with as a field.
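To make the log-in idea concrete, here is a hedged sketch of such a registration table in C; the message framing and all names below are invented for the example:

```c
#include <string.h>

#define MAX_CLIENTS 64

struct client_info {
    int  fd;        /* socket fd; -1 marks a free slot */
    char name[32];  /* e.g. "foo1" from the registration message */
    int  state;     /* whatever per-client protocol state is needed */
};

static struct client_info clients[MAX_CLIENTS];

void clients_init(void) {
    for (int i = 0; i < MAX_CLIENTS; i++)
        clients[i].fd = -1;
}

/* Called when the first ("Hi, I am registering as ...") message
   arrives on a newly accepted socket. */
int register_client(int fd, const char *name) {
    for (int i = 0; i < MAX_CLIENTS; i++) {
        if (clients[i].fd == -1) {
            clients[i].fd = fd;
            strncpy(clients[i].name, name, sizeof(clients[i].name) - 1);
            clients[i].name[sizeof(clients[i].name) - 1] = '\0';
            clients[i].state = 0;
            return 0;
        }
    }
    return -1;  /* table full */
}

/* Every later message on fd maps straight back to the metadata. */
struct client_info *lookup_client(int fd) {
    for (int i = 0; i < MAX_CLIENTS; i++)
        if (clients[i].fd == fd)
            return &clients[i];
    return NULL;
}
```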
Handling the request/response can be done several ways. First let's deal with the networking part of it on the server side. One way to manage this, as you mentioned, is by using select (or poll, or epoll) to multiplex the sockets. This is actually usually considered the more complicated way to do things. Another way is to spawn off a thread (or fork a process, which is less common these days) for each incoming client. Each spawned thread can read its own assigned socket, responding to messages one at a time without worrying about the fact that there are other clients besides the one it is dealing with. This simple one-to-one thread-to-socket model breaks down if there are many clients, but if that is not the case, then it is worth consideration.
Part 2 really covers only the client sending the server a message and the server replying. What happens when the server wants to initiate communication? How does it do it, and how does the client handle it? Also, how do you model the communication at the application level; that is, assuming we have the read/write part down, how do we know what to send? You will probably want to model things in terms of state machines. There is also a lot more to deal with: what happens when a client crashes? What about when the server crashes? And what if you really have your heart set on using select, perhaps because you expect many clients? I will try to add more to this answer tomorrow.
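One common way to handle the request/reply matching the question asks about is to stamp every message with an id that a reply echoes back. A hedged sketch, with a wire framing invented purely for illustration:

```c
#include <stdint.h>
#include <unistd.h>

enum msg_type { MSG_REQUEST = 0, MSG_REPLY = 1, MSG_EVENT = 2 };

struct msg_header {
    uint32_t id;       /* a reply carries the id of the request it answers */
    uint32_t type;     /* enum msg_type */
    uint32_t length;   /* payload bytes that follow the header */
};

static uint32_t next_id;

/* Sender side: stamp each outgoing request with a fresh id and return it,
   so the eventual reply (carrying the same id) can be matched back to
   this command. */
uint32_t send_request(int fd, const void *payload, uint32_t len) {
    struct msg_header h = { ++next_id, MSG_REQUEST, len };
    write(fd, &h, sizeof(h));    /* real code: check errors, handle short */
    write(fd, payload, len);     /* writes, and fix up byte order */
    return h.id;
}
```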

Best approach to non blocking server/listening socket in a multi-thread application on Windows?

I'm writing a TCP server/client application on Windows, to become familiar with the Winsock API. I come from a UNIX background and would like to know which of these could be the best approach to implement the application.
First, the specification:
Must scale well on multiprocessor and single-processor systems.
No hardset limit of connections.
Application can both listen for connections, acting as server, and act as client.
Multi threaded.
First approach:
Non-blocking select-like socket for listening, in the 'server' thread.
For each client connecting, we spawn a separate thread.
Second approach:
Blocking socket for listening, in the 'server' thread.
For each client connecting, we spawn a separate thread.
Third approach:
Non-blocking select-like socket for listening, in the 'server' thread.
No separate thread for each incoming connection; the protocol would need state information kept across sessions, I suppose.
I wonder what the most efficient and scalable approach is, and especially whether it can work with a UDP socket too.
Note: I'm writing the application in plain old C. No .NET or C++ involved, and C++ exceptions are disabled too.
As Gary says, I/O Completion Ports are the most efficient way to manage multiple network connections in a non-blocking/async manner on Windows platforms.
With IOCP you get notified when your networking operations complete, and you can process these completions with a small number of threads. You decide how many threads to allocate for processing completions, and the kernel decides when to use the threads you provide. It uses them in LIFO order to reduce context switching, so at any point you are only using the minimal number of threads required, reusing the same threads rather than cycling through all of the threads you have available.
The asynchronous nature of IOCP programming can be a little confusing to start with, but once you get the hang of it it's fairly straightforward.
I have some free IOCP server code which demonstrates the basics and provides some example servers that are pretty easy to build on. You can find the code here: http://www.serverframework.com/products---the-free-framework.html. That page also links to some articles that I wrote to explain the code.
Relating this to the detail of your question: you should be looking at a variation on your third approach. Use AcceptEx() to accept new connections; it can be used asynchronously, so you don't need a separate thread for connection acceptance and can use the threads that are also processing your overlapped/async read and write operations.
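For orientation, a bare-bones sketch of the IOCP worker loop described above; error handling and the per-operation context are stripped down to the essentials:

```c
#include <winsock2.h>
#include <windows.h>

/* Worker thread: pull completions off the port. A handful of these
   threads can service all connections. */
DWORD WINAPI worker(LPVOID param) {
    HANDLE iocp = (HANDLE)param;
    DWORD bytes;
    ULONG_PTR key;      /* per-socket context registered at association */
    OVERLAPPED *ov;     /* per-operation context */

    for (;;) {
        BOOL ok = GetQueuedCompletionStatus(iocp, &bytes, &key, &ov, INFINITE);
        if (!ok && ov == NULL)
            break;      /* the port itself failed or was closed */
        /* !ok with ov != NULL means that particular I/O failed.
           Otherwise a read/write completed: key and ov identify the
           connection and operation; issue the next WSARecv/WSASend here. */
    }
    return 0;
}

/* Setup: create the port, then associate each accepted socket with it. */
HANDLE make_port(void) {
    return CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0);
}

void attach_socket(HANDLE iocp, SOCKET s) {
    CreateIoCompletionPort((HANDLE)s, iocp, (ULONG_PTR)s, 0);
}
```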
I've written an asynchronous client which does not use blocking sockets, so if you're interested in that approach, then take a look at my client: http://codesprout.blogspot.com/2011/04/asynchronous-http-client.html
It's an HTTP client, but I've shown very little HTTP protocol processing in there; it's all just .NET sockets. The server would work in a similar way: you can take advantage of the *Async methods such as AcceptAsync.
Under Windows, the best performance is achieved by using I/O completion calls.
This is because the list and queuing mechanisms are done in the kernel, far from the heavy user-mode overhead (which drags your code down if you dare to do the hard work yourself).
Unfortunately, Windows I/O completion calls need to allocate many threads to scale, and this quickly kills performance (as compared to Linux epoll, which can scale independently of the number of worker threads you decide to involve in the task).
Recently, I discovered http://gwan.com/, a web server which started on Windows and was then ported to Linux. Its authors describe the problem in detail on their forum.

Server Architecture for Embedded Device

I am working on a server application for an embedded ARM platform. The ARM board is connected to various digital IOs, ADCs, etc. that the system will consistently poll. It is currently running a Linux kernel, with the hardware interfaces implemented as drivers. The idea is to have a client application which can connect to the embedded device, receive the sensor data as it is updated, and issue commands to the device (shut down sensor 1, restart sensor 2, etc.). Assume that access to the sensor devices is done through typical ioctl calls.
Now my question relates to the design/architecture of this server application running on the embedded device. At first I was thinking of using something like libevent or libev, lightweight C event-handling libraries. The application would prioritize the sensor polling event (and send the information to the client once polling is done) and process client commands as they are received (over a typical TCP socket). The server would typically have a single connection but may have up to a dozen or so, nothing like thousands of connections. Is this the best approach to designing something like this? Of the two event-handling libraries I listed, is one better for embedded applications, or are there other alternatives?
The other approach under consideration is a multi-threaded application in which the sensor polling is done in a prioritized/blocking thread that reads the sensor data, and each client connection is handled in a separate thread. The sensor data is updated into some sort of buffer/data structure, and the connection threads handle sending the data out to the client and processing client commands (I suppose you would still need an event loop of sorts in these threads to monitor for incoming commands). Are there any libraries or typical packages that facilitate designing an application like this, or is this something you have to build from scratch?
How would you design what I am trying to accomplish?
I would use a UNIX domain socket and write the library myself; I can't see any advantage to using libevent since the application is tied to Linux, and libevent is aimed at hundreds of connections. You can do all of what you are trying to do with a single thread in your daemon. KISS.
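A minimal sketch of the UNIX domain socket setup suggested above; the socket path is an arbitrary example and error handling is reduced to the basics:

```c
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int make_listener(const char *path) {   /* e.g. "/tmp/sensord.sock" */
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_un addr;
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
    unlink(path);   /* remove a stale socket file from a previous run */

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        listen(fd, 8) < 0) {
        close(fd);
        return -1;
    }
    return fd;      /* hand this to your single-threaded select()/poll() loop */
}
```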
You don't need a dedicated master thread for priority queues; you just need to write your threads so that they always process high-priority events before anything else.
In terms of libraries, you will possibly benefit from Google's protocol buffers (for serialization and for representing your protocol); however, it only has first-class support for C++, and the over-the-wire (serialization) format does a bit of simple bit shifting on numeric data. I doubt it will add any serious overhead. An alternative is ASN.1 (asn1c).
My suggestion would be a modified form of your second proposal. I would create a server that has two threads: one thread polling the sensors, and another for ALL of your client connections. I have used the boost::asio library on embedded devices (MIPS) with great results.
A single thread that handles all socket connections asynchronously can usually handle the load easily (of course, it depends on how many clients you have). It would then serve the data it has from a shared buffer. To reduce the number and complexity of mutexes, I would create two buffers, one 'active' and another 'inactive', and a flag indicating the current active buffer. The polling thread would read data and put it in the inactive buffer. When it finished and had created a 'consistent' state, it would flip the flag, swapping the active and inactive buffers. The flip can be done atomically, so nothing more complex should be required.
This would all be very simple to set up, since you would pretty much have only two threads that know nothing about each other.
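A sketch of the two-buffer flip described above, using C11 atomics; struct sensor_data and fill_buffer() are placeholders for the example:

```c
#include <stdatomic.h>

struct sensor_data { int values[16]; };      /* placeholder payload */

void fill_buffer(struct sensor_data *b);     /* reads the sensors */

static struct sensor_data bufs[2];
static atomic_int active = 0;                /* index readers should use */

/* Polling thread: fill the inactive buffer, then flip the flag. */
void poll_once(void) {
    int inactive = 1 - atomic_load(&active);
    fill_buffer(&bufs[inactive]);            /* build a consistent state */
    atomic_store(&active, inactive);         /* publish it atomically */
}

/* Connection threads: always read from the currently active buffer. */
const struct sensor_data *snapshot(void) {
    return &bufs[atomic_load(&active)];
}
```

One caveat with this minimal version: a very slow reader could still be looking at a buffer when the poller reuses it on a later cycle, so it assumes reads are quick relative to the polling period; otherwise, add a sequence counter or reference count per buffer.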
