Handling asynchronous sockets in WinSock?

I'm using a message window and WSAAsyncSelect. How can I keep track of multiple sockets (the clients) with one message window?

Windows supports several modes of socket operation, and you do need to be clear which one you are using:
Blocking sockets: send and recv block until they complete.
Non-blocking sockets: send and recv fail with WSAEWOULDBLOCK when they cannot complete immediately, and select() is used to determine which sockets are ready (see the sketch below).
Asynchronous sockets: WSAAsyncSelect - sockets post event notifications to an HWND.
Event sockets: WSAEventSelect - sockets signal event objects.
Overlapped sockets: WSASend and WSARecv are used, passing in OVERLAPPED structures. Overlapped sockets can be combined with I/O completion ports and provide the best scalability.
In terms of convenience, asynchronous sockets are simple and are supported by the MFC CAsyncSocket class.
Event sockets are tricky to use, as the maximum number of objects passable to WaitForMultipleObjects is 64.
Overlapped sockets with I/O completion ports are the most scalable way to handle sockets and allow Windows-based servers to scale to tens of thousands of sockets.
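For reference, here is a minimal sketch of the non-blocking mode mentioned above (assuming s is an already-created SOCKET; error handling elided):
u_long nonBlocking = 1;
ioctlsocket(s, FIONBIO, &nonBlocking);        // put the socket into non-blocking mode

fd_set readSet;
FD_ZERO(&readSet);
FD_SET(s, &readSet);
timeval timeout = { 1, 0 };                   // poll for up to one second
if (select(0, &readSet, NULL, NULL, &timeout) > 0 && FD_ISSET(s, &readSet))
{
    char buf[4096];
    int n = recv(s, buf, sizeof(buf), 0);     // the socket is readable, so this won't block
    if (n == SOCKET_ERROR && WSAGetLastError() == WSAEWOULDBLOCK)
    {
        // spurious readiness; try again later
    }
}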
In my experience, when using Async Sockets, the following things come to mind:
Handling FD events via window messages can handle "lots" of sockets, but performance will begin to suffer, as all the event handling is done in one thread, serialized through a message queue that might also be busy handling UI events if used in a single-threaded GUI app.
If you are hosting GUI windows or timers on the same thread as lots of sockets: WM_TIMER and WM_PAINT messages are low priority, and are only generated when the message queue is empty. Very busy sockets can thus cause GUI painting, or SetTimer-based timing, to fail.
Creating a dedicated worker thread to handle your sockets solves these problems if you are hosting a GUI. Given that the worker thread will have a message loop, you can use the message queue for inter-thread comms - just post WM_APP messages to the thread.
The easiest way to map FD callbacks to your socket objects is to create an array of socket objects for each HWND that will be receiving messages, and then use WM_USER+index as the message ID each time you call WSAAsyncSelect (see the sketch below). Then, when you receive messages in the range WM_USER to WM_USER+(array size), you can quickly extract the corresponding state object. WM_USER is 0x400 and WM_APP is 0x8000, so you can index up to 31744 sockets per message window using this method.
Don't use an array at static scope. You need to associate the array with the window, as you might want to create sockets on multiple threads. Each thread will need its own message loop and message window.
HWND_MESSAGE is your friend
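Here is the sketch of that WM_USER+index mapping (SocketState, g_sockets, and RegisterSocket are hypothetical names; assumes <winsock2.h> and <vector> are included, and a real implementation would hang the array off the window, e.g. via SetWindowLongPtr, rather than use a global):
struct SocketState { SOCKET s; /* plus whatever per-socket state you need */ };
std::vector<SocketState*> g_sockets;   // one array per message window

// Registering a socket: the message ID encodes its index in the array.
void RegisterSocket(HWND hwnd, SocketState* state)
{
    UINT msgId = WM_USER + (UINT)g_sockets.size();
    g_sockets.push_back(state);
    WSAAsyncSelect(state->s, hwnd, msgId, FD_READ | FD_WRITE | FD_CLOSE);
}

// In the window procedure: recover the state object from the message ID.
if (msg >= WM_USER && msg < WM_APP && msg - WM_USER < g_sockets.size())
{
    SocketState* state = g_sockets[msg - WM_USER];
    // dispatch WSAGETSELECTEVENT(lParam) / WSAGETSELECTERROR(lParam) to it
}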

The wParam parameter of the window message that you tell WSAAsyncSelect() to send will specify the socket that triggered the message. This is clearly stated in the WSAAsyncSelect() documentation:
When one of the nominated network events occurs on the specified socket s, the application window hWnd receives message wMsg. The wParam parameter identifies the socket on which a network event has occurred. The low word of lParam specifies the network event that has occurred. The high word of lParam contains any error code. The error code can be any error as defined in Winsock2.h.
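For example, a window procedure can unpack those parameters like this (WM_SOCKET is a hypothetical message ID previously passed to WSAAsyncSelect):
#include <winsock2.h>

#define WM_SOCKET (WM_USER + 1)   // hypothetical message ID registered with WSAAsyncSelect

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == WM_SOCKET)
    {
        SOCKET s  = (SOCKET)wParam;               // the socket that triggered the event
        int event = WSAGETSELECTEVENT(lParam);    // FD_READ, FD_WRITE, FD_CLOSE, ...
        int error = WSAGETSELECTERROR(lParam);    // nonzero if an error occurred
        if (error == 0 && event == FD_READ)
        {
            // recv() from s here
        }
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}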

Related

Windows - Wait on event and socket simultaneously

I'm writing Win32 API C code that needs to wait for new TCP connections but can also be stopped at any time by any other process/thread.
Therefore, I need to somehow WaitForSingleObject on the stop event and simultaneously wait for connections using WSAAccept.
I tried WaitForMultipleObjects on both the socket and the event handle, but a new connection won't trigger the function (WaitForSingleObject on the socket isn't triggered by a new connection either).
Any idea?
You need to use WSAWaitForMultipleEvents. Here's a sketch for a listening socket plus a stop event:
WSAEVENT hEvents[2];
hEvents[0] = WSACreateEvent();
hEvents[1] = hStopEvent;   // your existing stop event
WSAEventSelect(hSocket, hEvents[0], FD_ACCEPT | FD_READ | FD_WRITE);

for (;;)
{
    DWORD index = WSAWaitForMultipleEvents(2, hEvents, FALSE, WSA_INFINITE, FALSE);
    if (index == WSA_WAIT_EVENT_0 + 1)
        break;                              // the stop event was signalled

    WSANETWORKEVENTS events;
    if (WSAEnumNetworkEvents(hSocket, hEvents[0], &events) == 0)   // multiple events may exist
    {
        if (events.lNetworkEvents & FD_ACCEPT)
        {
            // accept the new connection
        }
        if (events.lNetworkEvents & FD_READ)
        {
            // read the available data
        }
        // ...
    }
}
If you use multiple events (e.g. a stop event to signal the thread to stop), use the return value from the WSAWaitForMultipleEvents to determine the signalled event (as you do with WaitForMultipleObjects).
You cannot wait on socket handles directly.
WSAAccept() is synchronous, the only way to abort it is to close the listening socket.
For what you are attempting to do, use AcceptEx() instead, which is asynchronous and supports Overlapped I/O and I/O Completion Ports.
If you use Overlapped I/O, you can associate a standard Win32 event object to each Overlapped I/O capable socket operation (AcceptEx(), WSARecv(), WSASend(), etc), and use a standard Win32 event object for your stop event. And then you can use a WaitForMultipleObjects() loop to know which event(s) are signaled and act accordingly.
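A minimal sketch of the event-based variant (assumes hListen is a listening socket, hAccepted a pre-created accept socket, and hStopEvent your stop event; error handling elided):
OVERLAPPED ov = {0};
ov.hEvent = CreateEvent(NULL, TRUE, FALSE, NULL);   // manual-reset, signalled on completion
char addrBuf[2 * (sizeof(SOCKADDR_IN) + 16)];       // AcceptEx address buffer requirement
DWORD bytes = 0;
if (!AcceptEx(hListen, hAccepted, addrBuf, 0,
              sizeof(SOCKADDR_IN) + 16, sizeof(SOCKADDR_IN) + 16, &bytes, &ov))
{
    // a FALSE return with WSA_IO_PENDING just means the accept is in flight
}

HANDLE handles[2] = { ov.hEvent, hStopEvent };
DWORD which = WaitForMultipleObjects(2, handles, FALSE, INFINITE);
if (which == WAIT_OBJECT_0)
{
    // the accept completed: hAccepted is now a connected socket
}
else
{
    // stop requested: close hListen to cancel the pending accept
}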
If you use an I/O Completion Port, you don't need event objects at all. You can associate each socket with a single IOCP queue, and your IOCP handler (either a call to GetQueuedCompletionStatus() or a callback function) will be notified whenever each IOCP capable socket operation completes. You can then use PostQueuedCompletionStatus() to post a custom stop message to the IOCP queue. Your IOCP handler can act accordingly based on what kind of event it receives.
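A minimal sketch of the stop-message idea (STOP_KEY is an arbitrary completion key reserved here by assumption; hIocp is an existing completion port handle):
const ULONG_PTR STOP_KEY = 1;   // reserved completion key for shutdown (assumption)

// From the controlling thread:
PostQueuedCompletionStatus(hIocp, 0, STOP_KEY, NULL);

// In the worker thread:
for (;;)
{
    DWORD bytes;
    ULONG_PTR key;
    OVERLAPPED* pOverlapped;
    BOOL ok = GetQueuedCompletionStatus(hIocp, &bytes, &key, &pOverlapped, INFINITE);
    if (key == STOP_KEY)
        break;                               // stop requested
    if (!ok)
    {
        // an associated I/O operation failed; inspect GetLastError()/pOverlapped
        continue;
    }
    // dispatch the completed socket operation identified by key/pOverlapped
}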

libuv combines multiple async calls and invokes the callback once

Requirement: a UDP server that, on receiving a UDP packet, stores it in one of two queues. A worker thread is associated with each queue, and the associated thread picks up a packet from the front of its queue, processes it, and writes it into an in-memory cache system.
Constraints: the solution has to be based on an event loop (libuv) and written in C.
My solution
register a callback for incoming UDP packets which adds the received packet to one of the two queues and raises a uv_async_send
two global uv_async_t objects are created, one for each queue, and used as the parameter for uv_async_send. For example: if a packet is added to queue one, then uv_async_t object 1 is used as the parameter for uv_async_send; similarly, if a packet is added to queue two, then uv_async_t object 2 is used
two threads are started, each having its own loop and a handle bound to a callback
In thread one, uv_async_t object 1 is bound to a function (say funcA).
In thread two, uv_async_t object 2 is bound to another function (say funcB).
funcA and funcB read a SINGLE packet from the corresponding queue and store it in the in-memory cache.
The problem
The client sends a large number of packets, which registers a large number of events in the server. The problem is that libuv coalesces multiple uv_async_send calls into one and invokes a single callback (which removes a SINGLE node from the queue). This leads to a situation where nodes are added to the queue at a faster rate than they are removed. Can these rates be balanced?
Is there a better way to design the server using the event-looping library libuv?
Since you are queueing the packets in one thread but processing in another, it's possible that they work at slightly different rates. I'd use a thread-safe queue (have a look at concurrencykit.org) and process the entire queue on the async callback, instead of just processing a single packet.
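A minimal sketch of that idea, assuming libuv 1.x (PacketQueue, queue_pop, store_in_cache, and packet_free are hypothetical names for your thread-safe queue and cache layer):
#include <uv.h>

/* hypothetical application types and helpers */
typedef struct Packet Packet;
typedef struct PacketQueue PacketQueue;
Packet* queue_pop(PacketQueue* q);   /* thread-safe pop; returns NULL when empty */
void store_in_cache(Packet* pkt);
void packet_free(Packet* pkt);

/* async callback: drain the whole queue instead of removing a single node */
void on_async(uv_async_t* handle)
{
    PacketQueue* q = (PacketQueue*)handle->data;   /* queue attached at init time */
    Packet* pkt;
    while ((pkt = queue_pop(q)) != NULL)
    {
        store_in_cache(pkt);
        packet_free(pkt);
    }
}

/* setup in each worker thread's loop, e.g.:      */
/*   async1.data = &queue1;                       */
/*   uv_async_init(worker_loop, &async1, on_async); */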

x number of threads sending data to Server for displaying output on GUI

I have developed a single server/multiple client TCP Application.
The client consists of x number of threads each thread doing processing on its own data and then sending the data over TCP socket to the Server for displaying.
The server is basically a GUI having a window. The server receives data from the client and displays it.
Now, the problem is that since there are 40 threads inside the client and each thread wants to send data, how can I achieve this using one connected socket?
My Suggestion:
My approach was to create a data structure inside each of the 40 threads in which the data to be sent is maintained. A separate send thread with one connected socket is then created on the client side. This thread reads data from the data structure of the first thread, sends it over the socket, then reads the data from the second thread, and so on.
Confusions:
But I am not sure how this would be implemented, as I am new to all this. :( What if a thread is writing to its data structure and the send thread tries to read the data at the same time? I am familiar with mutexes, critical sections, etc., but that sounds too complex for my simple application.
Any other suggestions/comments other than my own suggestion are welcome.
If you think my own approach is correct then please help me solving my confusions that I mentioned above.
Thanks a lot in advance :)
Edit:
Can I put a timer on the send thread so that, after a specific time, the send thread suspends thread #1 (so that it can access its data structure without any synchronization issues), reads data from its data structure, sends it over the TCP socket, and resumes thread #1; then it suspends thread #2, reads data from its data structure, sends it over the TCP socket, and resumes thread #2; and so on?
A common approach is to have one thread dedicated to sending the data. The other threads post their data into a shared container (list, deque, etc) and signal the sender thread that data is available. The sender then wakes up and processes whatever data is available.
EDIT:
The gist of it is as follows:
HANDLE data_available_event;   // manual-reset event; set when queue has data, reset when queue is empty
CRITICAL_SECTION cs;           // protects access to the data queue
std::deque<std::string> data_to_send;

void WorkerThread()
{
    while (do_work)
    {
        std::string data = generate_data();
        EnterCriticalSection(&cs);
        data_to_send.push_back(data);
        SetEvent(data_available_event);   // signal sender thread that data is available
        LeaveCriticalSection(&cs);
    }
}

void SenderThread()
{
    while (do_work)
    {
        WaitForSingleObject(data_available_event, INFINITE);
        EnterCriticalSection(&cs);
        std::string data = data_to_send.front();
        data_to_send.pop_front();
        if (data_to_send.empty())
        {
            ResetEvent(data_available_event);   // queue is empty; wait until more data is available
        }
        LeaveCriticalSection(&cs);
        send_data(data);
    }
}
This is of course assuming the data can be sent in any order. I use strings only for illustrative purposes; you probably want some kind of custom object that knows how to serialize the data it holds.
Suspending thread #1 so you can access its data structure does not avoid synchronization issues. When you suspend it, thread #1 could be in the middle of updating the data, so the socket thread gets part old data, part new. That is data corruption.
You need a shared data structure such as a FIFO queue. The worker threads add to the queue, the socket thread removes the oldest item from the queue. All access to this shared queue must be protected with a critical section unless you implement a lock-free queue. (A circular buffer.)
Depending on your application needs, if you implement this queue you might not need the socket thread at all. Just do the dequeueing in the display thread.
There are a couple of ways of achieving it; Luke's idea suffers from race conditions that will still create data corruption.
You avoid that by using UDP instead of TCP as the transport protocol. It'd be an especially good choice if you don't mind missing an occasional packet (which is okay when displaying rapidly changing data); it's fantastic for ensuring real-time updates on data where exact history doesn't matter (missing a point in a relatively smooth curve while plotting graphs is okay).
If the data packets are small and sort of represent a stream, then UDP is a great choice. Its benefit increases if you have multiple senders on different systems all displaying on a single screen.

Not sure I understand why my server receives "channelInterestChanged" events in the frame decoder

I implemented my own frame decoder to parse the bytes received through a UDP socket (using NioDatagramChannelFactory and ConnectionlessBootstrap) according to our protocol.
Just to follow what is happening in the server while receiving messages, I added trace logs in each callback method of the decoder.
It appears that for almost every message the server receives, the "channelInterestChanged" event is received twice in the channelInterestChanged() method. The value of the event is first 0 (OP_NONE), then 1 (OP_READ).
I read the documentation about this, but I am still not sure I understand why I receive such events. I first thought it was because the receive buffer (or the selector queue) was full, but the server receives this event the same number of times it receives the "messageReceived" event (before the decode() method is called), and all the messages/frames are properly decoded as expected. When messages are missing, I do not see any event at all; in that case it is probably because the receive buffer of the datagram socket is full. But even if I increase this receive buffer, I continue to see these events and to miss messages.
So I am wondering why, for each message received, the server also receives two "channelInterestChanged" events, one with the OP_NONE value and one with the OP_READ value. Please note also that in the channel pipeline, after my frame decoder, there is an ExecutionHandler and another business-specific handler (which sends a JMS message to an ActiveMQ instance).
Any idea or explanation for me?
Thank you.
When a DownstreamChannelStateEvent is fired from a handler (e.g. by calling channel.setReadable() or channel.setWriteable()), the event will change the interest ops of the channel's NIO selector key in the NioDatagramWorker; later, an UpstreamChannelStateEvent will be fired with the changed ops (i.e. OP_READ or OP_NONE).
Your frame decoder handler receives UpstreamChannelStateEvents because some other handler in the pipeline is changing the channel's read interest ops (the purpose of calling channel.setReadable()/setWriteable() is to throttle reads/writes and avoid congestion or OutOfMemoryError in the application).
If you have any MemoryAwareThreadPoolExecutor in your pipeline (which monitors the size of the channel memory used), it may suspend or resume reading by calling channel.setReadable() any time if the channel receives messages too fast. You may have to configure the MATPE instance with optimum maxChannelMemorySize, maxTotalMemorySize or disable it by setting it to 0.

Is Socket.SendAsync thread safe effectively?

I was fiddling with Silverlight's TCP communication and I was forced to use the System.Net.Sockets.Socket class which, on the Silverlight runtime has only asynchronous methods.
I was wondering what happens if two threads call SendAsync on a Socket instance within a very short time of each other?
My single worry is to not have intermixed bytes going through the TCP channel.
Being an asynchronous method, I suppose the messages get placed in a queue from which a single thread dequeues, so no intermixing of message content on the wire should happen.
But I am not sure, and the MSDN does not state anything in the method's description. Is anyone sure of this?
EDIT 1: No, locking on an object before calling SendAsync, such as:
lock (this._syncObj)
{
    this._socket.SendAsync(arguments);
}
will not help, since this serializes the requests to send data, not the data actually sent.
In order to call SendAsync you first need to have called ConnectAsync with an instance of SocketAsyncEventArgs. It's the instance of SocketAsyncEventArgs which represents the connection between the client and server. Calling SendAsync with the same instance of SocketAsyncEventArgs that is already being used for an outstanding call to SendAsync will result in an exception.
It is possible to make multiple outstanding calls to SendAsync on the same Socket object, but only using different instances of SocketAsyncEventArgs. For example (in a parallel universe where this might be necessary) you could be making multiple HTTP posts to the same server at the same time, but on different connections. This is perfectly acceptable and normal; neither client nor server will get confused about which packet is which.
