Not sure I understand why my server receives "channelInterestChanged" events in the frame decoder - nio

I implemented my own frame decoder to parse the bytes received through a UDP socket (using NioDatagramChannelFactory and ConnectionlessBootstrap) according to our protocol.
Just to follow what is happening in the server while receiving messages, I added trace logs in each callback method of the decoder.
It appears that for almost every message the server receives, the "channelInterestChanged" event is received twice in the channelInterestChanged() method: first with the value 0 (OP_NONE), then with 1 (OP_READ).
I read the documentation about this, but I am still not sure I understand why I receive such events. I first thought it was because the receive buffer (or the selector queue) was full, but the server receives this event exactly as many times as it receives the "messageReceived" event (before the decode() method is called), and all the messages/frames are properly decoded as expected. When messages are missing, I do not see any event at all; in that case it is probably because the receive buffer of the datagram socket is full. But even if I increase this receive buffer, I continue to see these events and to miss messages.
So, I am wondering why, for each message received, the server also receives two "channelInterestChanged" events, one with the OP_NONE value and one with the OP_READ value. Please also note that in the channel pipeline, after my frame decoder, there is an ExecutionHandler and another business-specific handler (which sends a JMS message to an ActiveMQ instance).
Any idea or explanation for me?
Thank you.

When a DownstreamChannelStateEvent is fired from a handler (e.g. by calling channel.setReadable() or channel.setWritable()), the event changes the interest ops of the channel's NIO selector key in the NioDatagramWorker; later, an UpstreamChannelStateEvent is fired with the changed ops (i.e. OP_READ or OP_NONE).
Your frame decoder handler receives UpstreamChannelStateEvents because some other handler in the pipeline is changing the channel's read interest ops (the purpose of calling channel.setReadable()/setWritable() is to throttle reads/writes and avoid congestion or an OutOfMemoryError in the application).
If you have a MemoryAwareThreadPoolExecutor in your pipeline (which monitors the amount of channel memory in use), it may suspend or resume reading by calling channel.setReadable() at any time if the channel receives messages too fast. You may have to configure the MATPE instance with a suitable maxChannelMemorySize and maxTotalMemorySize, or disable the limits by setting them to 0.

Related

GetMessageW is blocking the calling thread, receiving no messages

I've been struggling with this for a day now and I can't figure out what is wrong with my code. I'm coding in Rust, but this is more of a Windows API-related problem.
// first, I'm installing a keyboard hook for the current thread
let hook = SetWindowsHookExW(
    WH_KEYBOARD_LL,
    Some(low_level_keyboard_proc), // just forwards the call with CallNextHookEx
    ptr::null_mut(),
    0,
);
assert!(!hook.is_null(), "Failed to install the hook");

let mut message: MSG = mem::zeroed();
GetMessageW(&mut message, ptr::null_mut(), 0, 0);
// The GetMessageW function is known to block the calling thread until a new message arrives.
// The thing is: my `low_level_keyboard_proc` handler *does get* called, so I know events are being received.
// I don't understand why the GetMessageW function never returns even though events are being processed.
// Note that my handler does not get called when I remove the GetMessageW call.
println!("Unreachable code...");
UnhookWindowsHookEx(hook);
I tried to use the PeekMessageW function instead, but the problem is the same: the function always returns FALSE (no message available) even though the handler is getting called properly.
If I remove the SetWindowsHookExW part, GetMessageW still blocks the thread, BUT if I remove the GetMessageW part and put an infinite loop in its place, the handler does not get called anymore.
... so here is the question: why does the GetMessageW function never return? And if this behaviour is normal, how am I supposed to use the message that I provide to GetMessageW?
I'm assuming I don't fully understand the relationship between GetMessageW and SetWindowsHookExW.
EDIT: I understand that I can't catch the messages sent to the keyboard hook I created. Now, what would the "right" way to retrieve keyboard messages look like? It would be really handy to get those messages directly from the message loop instead of having to send them back from the callback function to my main code using static structures.
I'm trying to create an event loop that can be used regardless of window context or focus. The idea is to retrieve those messages directly from a message loop and dispatch them using a user-defined custom handler that can be used through safe Rust code.
There are no window messages or thread messages being posted to the message queue of the thread that is installing the keyboard hook, so there are no messages for GetMessageW() to return TO YOU.
However, SetWindowsHookEx() uses its own messages internally when a low-level keyboard hook crosses thread/process boundaries. That is why you don't need to implement your hook in a DLL when hooking other applications. When a keyboard action occurs, a private message is sent TO THE SYSTEM targeting the thread that installed the hook.
That is why the installing thread needs a message loop. The simple act of performing message retrieval in your code is enough to get those internal messages dispatched properly, which is why your callback function is being called. You just won't see those private messages, which is why GetMessageW() blocks your code.
The same thing happens when you SendMessage() to a window across thread boundaries. The receiving thread needs a message loop in order for the message to be dispatched to the target window, even though the message doesn't go through the receiving thread's message queue. This is described in the SendMessage() documentation:
If the specified window was created by the calling thread, the window procedure is called immediately as a subroutine. If the specified window was created by a different thread, the system switches to that thread and calls the appropriate window procedure. Messages sent between threads are processed only when the receiving thread executes message retrieval code.
So, what happens with SetWindowsHookEx() is that it creates a hidden window for itself to receive its private messages, sent via SendMessage(), when keyboard activity is detected in a different thread/process and needs to be marshaled back to your installing thread. This is described in the LowLevelKeyboardProc documentation:
This hook is called in the context of the thread that installed it. The call is made by sending a message to the thread that installed the hook. Therefore, the thread that installed the hook must have a message loop.
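To address the EDIT: there is no way to pull the keystrokes out of GetMessageW() itself; the hook callback is where they arrive, and the message loop only keeps the installing thread able to receive them. A minimal sketch of that pattern in plain Win32 C (not the asker's Rust; the printf and the exit-via-WM_QUIT behaviour are illustrative assumptions):

#include <windows.h>
#include <stdio.h>

/* The low-level hook delivers every keystroke here; hand the data to your own
 * code (queue, callback, ...) from this function, not from the message loop. */
static LRESULT CALLBACK low_level_keyboard_proc(int code, WPARAM wparam, LPARAM lparam)
{
    if (code == HC_ACTION) {
        const KBDLLHOOKSTRUCT *kb = (const KBDLLHOOKSTRUCT *)lparam;
        printf("vk=%lu wparam=%llu\n", kb->vkCode, (unsigned long long)wparam);
    }
    return CallNextHookEx(NULL, code, wparam, lparam);
}

int main(void)
{
    HHOOK hook = SetWindowsHookExW(WH_KEYBOARD_LL, low_level_keyboard_proc, NULL, 0);
    if (!hook) return 1;

    MSG msg;
    /* GetMessageW blocks; it only returns when a message is actually posted to
     * this thread (e.g. WM_QUIT via PostQuitMessage/PostThreadMessage). The
     * keyboard events themselves never show up in `msg`; the loop merely keeps
     * pumping so the system can invoke the hook callback above. */
    while (GetMessageW(&msg, NULL, 0, 0) > 0) {
        TranslateMessage(&msg);
        DispatchMessageW(&msg);
    }

    UnhookWindowsHookEx(hook);
    return 0;
}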

Advice: trying to recognize when a device is not connected

I am having a hard time trying to find a method to restart my state machine. In other words, here is some of what I've got:
I have a module that, when it is powered up, waits for a debounce time of 0.5 s and then enters a state machine: first it sends string#anotherstring#, then it starts a timer of some period, and when the timer elapses it converts an analog signal / reads data (SPI, I2C) and sends that data followed by another #. The state machine then goes back, restarts the timer, and sends the data again...
On another chip I receive info from that module. Here there is a state machine that assembles the first string, then the second string, and then accumulates values in a buffer, again and again.
At some point an external device asks for data, at which moment the chip makes some computation and sends it.
So far so good. Every single part of this is working except the part where the module is disconnected. OK, you may say, no problem, no data is sent. Yes, this is true, but what happens when the module is connected back? Until now, to test my work, I have reset the chip after disconnecting and reconnecting the module. By doing this the chip is in its first state and the module starts from its first state, and everything is OK.
My question is how to determine when the device is disconnected from the chip, so that the chip's state machine can be restarted to wait for the string#anotherstring# combination (first state).
Another question is how to determine whether it is the communication that is broken rather than the power being down. When the communication is restored, the data should be sent again, preferably with both modules starting from their init states.
What I have in mind is to send some ACK from the chip to the module, but I do not know exactly how. Basically I want this: when the module is disconnected its state machine obviously starts over, and I want the chip's state machine to go back to its initial state as well.
If the module's communication link is unplugged, somehow both state machines should start over.
I do not know if I am being clear with this, but please ask if there are questions. I will add edits if I find something.
OTHER INFO: The module and the chip are microcontrollers; the communication is UART.
Let me sketch a basic scheme you can use on the receiving side:
On your receiving side, you'll want to time-out if no valid/complete message is received within a reasonable time frame. This way, you'll detect when the module goes offline for whatever reason at any point in time.
The state machine that receives and processes the messages will also be reset in this case. This means you'll have a timer which, for example, is started when data is received and stopped when a message was correctly and fully received. If the timer times out, any message currently being received is declared invalid and discarded, and the receiver goes back to the start, looking for the next message.
Then, you'd have to implement in the receiver the code to detect when a message starts and/or ends. So if the module always starts by sending string#anotherstring#, then the receiver will wait until it sees string#, for example; anything else received is ignored by the receiver. Only after the expected prefix has been detected is the rest of the message received.
During the whole process, the receiver's message timeout timer is active, and if any part of the message is not received in time, the receiver assumes transmission problems and goes back to waiting for the start of the next message.
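As a rough illustration of that scheme in C (none of this is the asker's actual protocol: the "string#" prefix, the 1 s timeout, the buffer size, and the millis()/handle_field() helpers are all assumptions), a byte-wise receiver could look like this:

#include <stdbool.h>
#include <stdint.h>

#define RX_TIMEOUT_MS  1000u            /* assumed: give up after 1 s of silence */
#define BUF_SIZE       64u

static const char PREFIX[] = "string#"; /* assumed start-of-message marker */

enum rx_state { WAIT_PREFIX, IN_MESSAGE };

static enum rx_state state = WAIT_PREFIX;
static uint16_t      match;             /* how much of PREFIX has been seen so far */
static char          buf[BUF_SIZE];     /* current field between '#' delimiters */
static uint16_t      len;
static uint32_t      last_byte_ms;
static bool          receiving;

extern uint32_t millis(void);                    /* assumed 1 ms tick counter */
extern void     handle_field(const char *field); /* assumed application callback */

static void rx_reset(void)
{
    state = WAIT_PREFIX;
    match = 0;
    len = 0;
    receiving = false;
}

/* Call for every byte received on the UART (interrupt or polling). */
void rx_on_byte(char c)
{
    last_byte_ms = millis();
    receiving = true;

    if (state == WAIT_PREFIX) {
        /* Match the prefix one byte at a time; anything else is ignored. */
        match = (c == PREFIX[match]) ? match + 1u : (c == PREFIX[0] ? 1u : 0u);
        if (PREFIX[match] == '\0') {
            state = IN_MESSAGE;
            match = 0;
            len = 0;
        }
    } else if (c == '#') {
        buf[len] = '\0';
        handle_field(buf);              /* one complete field received */
        len = 0;
    } else if (len < BUF_SIZE - 1u) {
        buf[len++] = c;
    } else {
        rx_reset();                     /* overflow: discard and resynchronize */
    }
}

/* Call periodically from the main loop: detects a silent/unplugged module. */
void rx_poll(void)
{
    if (receiving && (uint32_t)(millis() - last_byte_ms) > RX_TIMEOUT_MS) {
        rx_reset();                     /* module silent too long: start over */
    }
}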

Is FilterSendNetBufferLists handler a must for an NDIS filter to use NdisFSendNetBufferLists?

Everyone, I am porting WinPcap from an NDIS 6 protocol driver to an NDIS 6 filter driver. It is nearly finished, but I still have a few questions:
The comment in ndislwf says "A filter that doesn't provide a FilterSendNetBufferLists handler cannot originate a send on its own." Does it mean that if I use the NdisFSendNetBufferLists function, I have to provide a FilterSendNetBufferLists handler? My driver will send self-constructed packets via NdisFSendNetBufferLists, but I don't want to filter other programs' sent packets.
The same goes for FilterReturnNetBufferLists: it says "A filter that doesn't provide a FilterReturnNetBufferLists handler cannot originate a receive indication on its own." What does "originate a receive indication" mean: NdisFIndicateReceiveNetBufferLists, NdisFReturnNetBufferLists, or both? Also, for my driver, I only want to capture received packets, not the returned packets. So if possible, I don't want to provide the FilterReturnNetBufferLists function, for performance reasons.
A similar case is FilterOidRequestComplete and NdisFOidRequest: in fact, my filter driver only wants to send OID requests itself via NdisFOidRequest, not filter OID requests sent by others. Can I leave FilterOidRequest, FilterCancelOidRequest and FilterOidRequestComplete as NULL? Or which of them is a must in order to use NdisFOidRequest?
Thx.
Send and Receive
A LWF can either be:
completely excluded from the send path, unable to see other protocols' send traffic, and unable to send any of its own traffic; or
integrated into the send path, able to see and filter other protocols' send and send-complete traffic, and able to inject its own traffic
It's an all-or-nothing model. Since you want to send your own self-constructed packets, you must install a FilterSendNetBufferLists handler and a FilterSendNetBufferListsComplete handler. If you're not interested in other protocols' traffic, then your send handler can be as simple as the sample's send handler — just dump everything into NdisFSendNetBufferLists without looking at it.
The FilterSendNetBufferListsComplete handler needs to be a little more careful. Iterate over all the completed NBLs and pick out the ones that you sent. You can identify the packets you sent by looking at NET_BUFFER_LIST::SourceHandle. Remove those from the stream (possibly reusing them, or just NdisFreeNetBufferList them). All the other packets then go up the stack via NdisFSendNetBufferListsComplete.
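A hedged sketch of that send-complete logic (the MY_FILTER context type and its FilterHandle field are illustrative names, not from the sample; a real driver would also free whatever buffers/MDLs it attached to its own NBLs):

#include <ndis.h>

typedef struct _MY_FILTER {      /* hypothetical per-filter-module context */
    NDIS_HANDLE FilterHandle;    /* NdisFilterHandle passed to FilterAttach */
} MY_FILTER, *PMY_FILTER;

VOID
FilterSendNetBufferListsComplete(
    NDIS_HANDLE        FilterModuleContext,
    PNET_BUFFER_LIST   NetBufferLists,
    ULONG              SendCompleteFlags)
{
    PMY_FILTER        filter = (PMY_FILTER)FilterModuleContext;
    PNET_BUFFER_LIST  passUp = NULL;    /* NBLs originated by protocols above us */
    PNET_BUFFER_LIST *tail   = &passUp;
    PNET_BUFFER_LIST  nbl;

    while ((nbl = NetBufferLists) != NULL) {
        NetBufferLists = NET_BUFFER_LIST_NEXT_NBL(nbl);
        NET_BUFFER_LIST_NEXT_NBL(nbl) = NULL;

        if (nbl->SourceHandle == filter->FilterHandle) {
            /* One of our self-originated sends: reclaim it, don't pass it up. */
            NdisFreeNetBufferList(nbl);
        } else {
            *tail = nbl;
            tail  = &NET_BUFFER_LIST_NEXT_NBL(nbl);
        }
    }

    if (passUp != NULL) {
        NdisFSendNetBufferListsComplete(filter->FilterHandle, passUp, SendCompleteFlags);
    }
}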
The above discussion also applies to the receive path. The only difference between send and receive is that on the receive path, you must pay close attention to the NDIS_RECEIVE_FLAGS_RESOURCES flag.
OID requests
Like the datapath, if you want to participate in OID requests at all (either filtering or issuing your own), you must be integrated into the entire OID stack. That means that you provide FilterOidRequest, FilterOidRequestComplete, and FilterCancelOidRequest handlers. You don't need to do anything special in these handlers beyond what the sample does, except again detecting OID requests that your filter originated in the oid-complete handler, and removing those from the stream (call NdisFreeCloneOidRequest on them).
Performance
Do not worry about performance here. The first step is to get it working. Even though the sample filter inserts itself into the send, receive, and OID paths, it's almost impossible to come up with any sort of benchmark that can detect the presence of the sample filter. It's extremely cheap to have do-nothing handlers in a filter.
If you feel very strongly about this, you can selectively remove your filter from the datapath with calls to NdisFRestartFilter and NdisSetOptionalHandlers(NDIS_FILTER_PARTIAL_CHARACTERISTICS). But I absolutely don't think you need the complexity. If you're coming from an NDIS 5 protocol that was capturing in promiscuous mode, you've already gotten a big perf improvement by switching to the native networking data structures (NDIS_PACKET->NBL) and eliminating the loopback path. You can leave additional fine-tuning to the next version.

What shall we do to get C# Silverlight Tcp Packet in one piece?

When we send a large amount of data to the client, its ReceiveAsync event is called more than once, and each time we get only a piece of the packet.
What shall we do to get the C# Silverlight TCP packet in one piece and through one event?
Thank you in advance.
You can't. The very nature of TCP is that data gets broken up into packets. Keep receiving data until you've got the whole message (whatever that will be). Some options for this:
First send the size of the message before the message itself.
Close the connection when the message has been sent (so the client can basically read until the connection is closed)
Add a delimiter to indicate the end of the message
I generally dislike the final option, as it means "understanding" the message as you're reading it, which can be tricky - and may mean you need to add escape sequences etc if your delimiter can naturally occur within the message.
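For the first option, the idea is simply "read until you have the announced number of bytes". A sketch with blocking POSIX sockets in C (the 4-byte big-endian prefix and the recv_all/recv_message names are assumptions; a Silverlight client would do the same accumulation inside its ReceiveAsync completion handler rather than in a blocking loop):

#include <stdint.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/socket.h>   /* recv */
#include <arpa/inet.h>    /* ntohl */

/* Keep calling recv() until exactly `len` bytes have arrived; 0 on success. */
static int recv_all(int fd, void *dst, size_t len)
{
    char *p = dst;
    while (len > 0) {
        ssize_t n = recv(fd, p, len, 0);
        if (n <= 0)                    /* peer closed the connection, or error */
            return -1;
        p   += n;
        len -= (size_t)n;
    }
    return 0;
}

/* Receive one length-prefixed message; caller frees the returned buffer.
 * A real implementation would also sanity-check `len` against a maximum. */
char *recv_message(int fd, uint32_t *out_len)
{
    uint32_t net_len;
    if (recv_all(fd, &net_len, sizeof net_len) != 0)
        return NULL;

    uint32_t len = ntohl(net_len);     /* sender wrote the length big-endian */
    char *msg = malloc(len);
    if (msg == NULL || recv_all(fd, msg, len) != 0) {
        free(msg);
        return NULL;
    }
    *out_len = len;
    return msg;
}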

Is Socket.SendAsync thread safe effectively?

I was fiddling with Silverlight's TCP communication and I was forced to use the System.Net.Sockets.Socket class, which, on the Silverlight runtime, has only asynchronous methods.
I was wondering what happens if two threads call SendAsync on a Socket instance within a very short time of each other?
My only worry is ending up with intermixed bytes going through the TCP channel.
Since it is an asynchronous method, I suppose the message gets placed in a queue from which a single thread dequeues, so no such thing (intermixing of the messages' content on the wire) will happen.
But I am not sure, and MSDN does not state anything about this in the method's description. Is anyone sure of this?
EDIT1: No, locking on an object before calling SendAsync, such as:
lock (this._syncObj)
{
    this._socket.SendAsync(arguments);
}
will not help, since this only serializes the requests to send data, not the data actually sent.
In order to call SendAsync, you first need to have called ConnectAsync with an instance of SocketAsyncEventArgs. It's the instance of SocketAsyncEventArgs that represents the connection between the client and the server. Calling SendAsync with the same instance of SocketAsyncEventArgs that is already being used for an outstanding call to SendAsync will result in an exception.
It is possible to make multiple outstanding calls to SendAsync on the same Socket object, but only by using different instances of SocketAsyncEventArgs. For example (in a parallel universe where this might be necessary), you could be making multiple HTTP POSTs to the same server at the same time, but on different connections. This is perfectly acceptable and normal; neither client nor server will get confused about which packet is which.

Resources