I'm writing Win32 C code that needs to wait for new TCP connections and that must also be stoppable at any time by another process/thread.
Therefore, I need to somehow WaitForSingleObject on the stop event while simultaneously waiting for connections with WSAAccept.
I tried WaitForMultipleObjects on both the socket and the event handle, but a new connection doesn't signal the wait (WaitForSingleObject on the socket handle isn't signalled on a new connection either).
Any ideas?
You need to use WSAWaitForMultipleEvents. For a listening socket, here's a sketch (error handling omitted):
WSAEVENT hEvent[2];
hEvent[0] = hStopEvent;                  // your stop event
hEvent[1] = WSACreateEvent();
WSAEventSelect(hSocket, hEvent[1], FD_ACCEPT | FD_CLOSE);

for (;;) {
    DWORD dw = WSAWaitForMultipleEvents(2, hEvent, FALSE, WSA_INFINITE, FALSE);
    if (dw == WSA_WAIT_EVENT_0)          // stop event signalled
        break;

    WSANETWORKEVENTS ne;
    WSAEnumNetworkEvents(hSocket, hEvent[1], &ne); // multiple events may exist
    if (ne.lNetworkEvents & FD_ACCEPT) {
        // accept() the new connection
    }
    if (ne.lNetworkEvents & FD_CLOSE) {
        // ...
    }
}
If you use multiple events (e.g. a stop event to signal the thread to stop), use the return value from the WSAWaitForMultipleEvents to determine the signalled event (as you do with WaitForMultipleObjects).
You cannot wait on socket handles directly.
WSAAccept() is synchronous; the only way to abort it is to close the listening socket.
For what you are attempting to do, use AcceptEx() instead, which is asynchronous and supports Overlapped I/O and I/O Completion Ports.
If you use Overlapped I/O, you can associate a standard Win32 event object to each Overlapped I/O capable socket operation (AcceptEx(), WSARecv(), WSASend(), etc), and use a standard Win32 event object for your stop event. And then you can use a WaitForMultipleObjects() loop to know which event(s) are signaled and act accordingly.
If you use an I/O Completion Port, you don't need event objects at all. You can associate each socket with a single IOCP queue, and your IOCP handler (either a call to GetQueuedCompletionStatus() or a callback function) will be notified whenever each IOCP-capable socket operation completes. You can then use PostQueuedCompletionStatus() to post a custom stop message to the IOCP queue. Your IOCP handler can act accordingly based on what kind of event it receives.
Related
libuv has a void uv_close(uv_handle_t* handle, uv_close_cb close_cb) method to close handles which takes a callback.
As the title says, is the handle active (in terms of I/O) before close_cb is called? For example, can a UDP handle fire a receive callback and a timer handle fire a timer callback before close_cb?
The closest thing in the documentation I could find is "Handles that wrap file descriptors are closed immediately but close_cb will still be deferred to the next iteration of the event loop." However, I'm not sure which handles fall under this criterion and, more importantly, what "closed immediately" means exactly (stops all callbacks? stops only new callbacks? removed from the event loop entirely?).
Whether a libuv handle can still fire its dedicated callback after uv_close(handle) returns and before the uv_close_cb is invoked depends on the handle's type, so there is no general answer.
For example, uv_tcp_t may still fire connect_req->cb(), and uv_udp_t may still fire send_cb() (from uv_udp_send_t), while many other handle types don't (see uv__finish_close() in /src/unix/core.c).
A handle is no longer usable as soon as uv_close() clears the flag UV_HANDLE_ACTIVE of the given handle and sets the flag UV_HANDLE_CLOSING. You are NOT allowed to use the closing handle by invoking the corresponding functions immediately after uv_close(); if you do so anyway, those functions may either return an error or trigger an assertion failure (e.g. uv_timer_start() and uv_poll_start()).
uv_close() doesn't remove the handle from the event loop; instead it moves the closing handle to an internal list on the loop (used only for the later closing process). When you call uv_run() again, it actually processes all closing handles; depending on the type of the closing handle, it fires the handle's callbacks where necessary (e.g. uv__stream_destroy() in uv__finish_close()), and finally uv_run() invokes the callback uv_close_cb.
(see the source here)
When running an event loop in libuv using the uv_run function, there's a "mode" parameter that is used with the following values:
UV_RUN_DEFAULT
UV_RUN_ONCE
UV_RUN_NOWAIT
The first two are obvious. UV_RUN_DEFAULT runs the event loop until there are no more events, and UV_RUN_ONCE processes a single event from the loop. However, UV_RUN_NOWAIT doesn't seem to be a separate mode, but rather a flag that can be ORed with one of the other two values.
By default, this function blocks until events are done processing, and UV_RUN_NOWAIT makes it nonblocking, but any documentation I can find on it ends there. My question is, if you run the event loop nonblocking, how are callbacks handled?
The libuv event model is single-threaded (reactor pattern), so I'd assume it needs to block to be able to call the callbacks, but if the main thread is occupied, what happens to an event after it's processed? Will the callback be "queued" until libuv gets control of the main thread again? Or will the callbacks be dispatched on another thread?
Callbacks are handled in the same manner. They will run within the thread that is in uv_run().
Per the documentation:
UV_RUN_DEFAULT: Runs the event loop until the reference count drops to zero. Always returns zero.
UV_RUN_ONCE: Poll for new events once. Note that this function blocks if there are no pending events. Returns zero when done (no active handles or requests left), or non-zero if more events are expected (meaning you should run the event loop again sometime in the future).
UV_RUN_NOWAIT: Poll for new events once but don't block if there are no pending events.
Consider the case where a program has a single watcher listening to a socket. In this scenario, an event would be created when the socket has received data.
UV_RUN_DEFAULT will block the caller even if the socket does not have data. The caller will return from uv_run() when either:
The loop has been explicitly stopped, via uv_stop()
No more watchers are running in the loop. For example, the only watcher has been stopped.
UV_RUN_ONCE will block the caller even if the socket does not have data. The caller will return from uv_run() when any of the following occur:
The loop has been explicitly stopped, via uv_stop()
No more watchers are running in the loop. For example, the only watcher has been stopped.
It has handled a max of one event. For example, the socket received data, and the user callback has been invoked. Additional events may be ready to be handled, but will not be handled in the current uv_run() call.
UV_RUN_NOWAIT will return if the socket does not have data.
Oftentimes, running an event loop in a non-blocking manner is done to integrate with other event loops. Consider an application that has two event loops: libuv for backend work and Qt UI (which is driven by its own event loop). Being able to run the event loop in a non-blocking manner allows a single thread to dispatch events on both event loops. Here is a simplistic overview showing two libuv loops being handled by a single thread:
uv_loop_t *loop1 = uv_loop_new();
uv_loop_t *loop2 = uv_loop_new();
// create, initialize, and start a watcher for each loop.
...
// Handle two event loops with a single thread.
while (uv_run(loop1, UV_RUN_NOWAIT) || uv_run(loop2, UV_RUN_NOWAIT));
Without using UV_RUN_NOWAIT, loop2 would only run once loop1 or loop1's watchers have been stopped.
For more information, consider reading the Advanced Event Loops and Processes sections of An Introduction to libuv.
With epoll on a TCP socket using EPOLLOUT | EPOLLET, I get the event once and then epoll_wait times out, even if I send more data after getting the first event.
With UDP, epoll keeps delivering EPOLLOUT events after I send new data.
Can you explain this?
EPOLLET is edge-triggered mode, which means it will only notify you of state transitions. In this case it will notify you when the file descriptor goes from not being writable to being writable. And the only way to make it not writable is to fill the outgoing buffer. So you need to just keep sending until you get EAGAIN, then you'll wait for a notification.
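The TCP behaviour described above can be reproduced without a remote peer by using a socketpair as a stand-in for the connection. This is a minimal sketch (Linux-specific, most error handling omitted): the edge-triggered EPOLLOUT fires once when the socket becomes writable, and sending a little more data produces no new event because the socket never left the writable state.

```c
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

/* Returns how many EPOLLOUT notifications two successive epoll_wait calls
 * deliver for an edge-triggered socket whose send buffer never fills up. */
int count_et_writable_events(void)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0)
        return -1;

    int ep = epoll_create1(0);
    struct epoll_event ev;
    ev.events = EPOLLOUT | EPOLLET;
    ev.data.fd = sv[0];
    epoll_ctl(ep, EPOLL_CTL_ADD, sv[0], &ev);

    int count = 0;
    struct epoll_event out;

    /* First wait: registering an already-writable socket counts as an edge,
     * so edge-triggered mode reports EPOLLOUT once. */
    if (epoll_wait(ep, &out, 1, 100) == 1 && (out.events & EPOLLOUT))
        count++;

    /* Send a little data; the buffer never fills, so the socket never
     * stops being writable -- no new edge, so the second wait times out. */
    send(sv[0], "x", 1, 0);
    if (epoll_wait(ep, &out, 1, 100) == 1 && (out.events & EPOLLOUT))
        count++;

    close(sv[0]); close(sv[1]); close(ep);
    return count;
}
```

To get another EPOLLOUT edge you would have to fill the send buffer (until send fails with EAGAIN) and then let the peer drain it.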
I'm using a message window and WSAAsyncSelect. How can I keep track of multiple sockets (the clients) with one message window?
Windows supports several modes of socket operation, and you do need to be clear about which one you are using:
Blocking sockets. send and recv block.
Non-blocking sockets: send and recv fail with WSAEWOULDBLOCK, and select() is used to determine which sockets are ready
Asynchronous sockets: WSAAsyncSelect - sockets post event notifications to an HWND.
EventSockets: WSAEventSelect - sockets signal events.
Overlapped sockets: WSASend and WSARecv are used with sockets by passing in OVERLAPPED structures. Overlapped sockets can be combined with I/O Completion Ports and provide the best scalability.
In terms of convenience, asynchronous sockets are simple, and are supported by the MFC CAsyncSocket class.
Event sockets are tricky to use, as the maximum number of objects passable to WaitForMultipleObjects (and WSAWaitForMultipleEvents) is 64.
Overlapped sockets with I/O Completion Ports are the most scalable way to handle sockets, and allow Windows-based servers to scale to tens of thousands of sockets.
In my experience, when using Async Sockets, the following things come to mind:
Handling FD events via window messages can handle "lots" of sockets, but performance will begin to suffer as all the event handling is done in one thread, serialized through a message queue that might be busy handling UI events too if used in a single threaded GUI app.
If you are hosting GUI windows or timers on the same thread as lots of sockets: WM_TIMER and WM_PAINT messages are low priority, and will only be generated if the message queue is empty. Very busy sockets can thus cause GUI painting, or SetTimer based timing to fail.
Creating a dedicated worker thread to handle your sockets if hosting a GUI solves these problems. Given that the worker thread will have a message loop, you can use the message queue for inter-thread comms - just post WM_APP messages to the thread.
The easiest way to map FD callbacks to your socket objects is to create an array of socket objects for each HWND that will be receiving messages, and then use WM_USER + index as the message ID each time you call WSAAsyncSelect. Then, when you receive messages in the range WM_USER to WM_USER + (array size), you can quickly extract the corresponding state object. WM_USER is 0x400 and WM_APP is 0x8000, so you can index up to 31744 sockets per message window using this method.
Don't use a static scope array. You need to associate the array with the window as you might want to create sockets on multiple threads. Each thread will need its own message loop, and message window.
HWND_MESSAGE is your friend
The wParam parameter of the window message that you tell WSAAsyncSelect() to send will specify the socket that triggered the message. This is clearly stated in the WSAAsyncSelect() documentation:
When one of the nominated network events occurs on the specified socket s, the application window hWnd receives message wMsg. The wParam parameter identifies the socket on which a network event has occurred. The low word of lParam specifies the network event that has occurred. The high word of lParam contains any error code. The error code can be any error as defined in Winsock2.h.
I have a dll written in C.
I would like to send data to a socket and receive the answer in the same function.
e.g.:
BOOL SendToSocketAndRecv(...)
{
// ...
send(...);
retval = recv(...);
// ...
}
In other words, my DLL should not follow the client-server pattern.
Is this possible?
Any help?
Thank you - Khayralla
Yes
You may work in either blocking (synchronous) or non-blocking (asynchronous) mode. Depending on this you may or may not send more data before you receive something from the peer.
"Stream" sockets (like TCP) are "tunnels". If the peer sends several packets you may receive them in a single call to recv, and vice versa: a single "message" from the peer may take several calls to recv. Hence you should read the message in a loop.
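A minimal sketch of such a receive loop (POSIX sockets; the fixed-length framing is an assumption for illustration, as real protocols typically use a length prefix or delimiter to know how much to read):

```c
#include <stddef.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Read exactly len bytes, looping because one logical message may arrive
 * split across several recv() calls (TCP is a byte stream, not packets). */
ssize_t recv_all(int fd, char *buf, size_t len)
{
    size_t got = 0;
    while (got < len) {
        ssize_t n = recv(fd, buf + got, len - got, 0);
        if (n <= 0)          /* 0 = peer closed the connection, -1 = error */
            return n;
        got += (size_t)n;
    }
    return (ssize_t)got;
}
```

A send-and-receive function like the one in the question would then call send() followed by recv_all() with the expected reply length.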
You have a lot to learn about network programming.
I am sending commands to a robot and then waiting to get an answer
Yes, what you have will work.
But things start to get interesting when you factor in the chance that the robot will not respond for whatever reason. Then you need to provide for a timeout on the response. Soon other things start to creep in. For example, you may not want to be stuck in the read for the duration of the wait, because you may need to service other events (user input or other sources) as they come in.
A common architecture to handle this is to use select() and make it the hub of all your incoming events. Then you drive a state machine (or machines) off these events. You end up with an event driven architecture. It would look something like this:
while (true)
{
    select(fds for event sources, timeout);
    if (timeout)
    {
        call robot state machine(timeout);
        continue;
    }
    iterate through fds
    {
        if (fd has data)
        {
            read data into buf
            if (fd is for robot)
            {
                call robot state machine(buf)
            }
            else if (fd is for source1)
            {
                call source1 state machine(buf)
            }
            ...
        }
    }
}
In this model, sends can be done from anywhere in the code, but afterwards you wind up back in the select(), waiting for events. You will also have to work out the details of timeouts and of using select() correctly in general, but there is plenty of material on that out there.
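The timeout and data branches of the loop above can be sketched as a small, runnable helper (POSIX select(); poll_robot is a hypothetical name, and a socketpair can stand in for the robot connection when trying it out):

```c
#include <stdio.h>
#include <string.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

/* Wait up to timeout_ms for data on fd. Returns 1 if data arrived (copied
 * into buf, NUL-terminated), 0 on timeout, -1 on error or peer close.
 * The caller feeds either outcome into its state machine. */
int poll_robot(int fd, char *buf, size_t buflen, int timeout_ms)
{
    fd_set rfds;
    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);

    struct timeval tv;
    tv.tv_sec  = timeout_ms / 1000;
    tv.tv_usec = (timeout_ms % 1000) * 1000;

    int ready = select(fd + 1, &rfds, NULL, NULL, &tv);
    if (ready < 0)
        return -1;
    if (ready == 0)
        return 0;                 /* timeout: drive the state machine */

    ssize_t n = recv(fd, buf, buflen - 1, 0);
    if (n <= 0)
        return -1;
    buf[n] = '\0';
    return 1;                     /* data: feed it to the state machine */
}
```

A real hub would put several fds in the fd_set and dispatch to the matching state machine, as in the pseudocode above.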
Yes this is both possible and legal. The API itself isn't concerned about being used from the same function.
Not only is this possible, it is a classic coding idiom for a client in a client-server system. Usually the function is called something like ExecuteRequest.