What's the purpose of the serial parameter in the Wayland API?

I've been working with the Wayland protocol lately and many functions include a uint32_t serial parameter. Here's an example from wayland-client-protocol.h:
struct wl_shell_surface_listener {
    /**
     * ping client
     *
     * Ping a client to check if it is receiving events and sending
     * requests. A client is expected to reply with a pong request.
     */
    void (*ping)(void *data,
                 struct wl_shell_surface *wl_shell_surface,
                 uint32_t serial);
    // ...
};
The intent of this parameter is that the client responds to the display server with a pong, passing it the value of serial; the server then compares the serial it received via the pong with the serial it sent with the ping.
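For context, the client's side of that round trip is tiny; a handler that just echoes the serial back looks something like this (wl_shell_surface_pong is the generated request for the reply, and the handler would be registered via wl_shell_surface_add_listener()):

static void handle_ping(void *data,
                        struct wl_shell_surface *shell_surface,
                        uint32_t serial)
{
    /* Echo the serial back; the server matches this pong to its ping. */
    wl_shell_surface_pong(shell_surface, serial);
}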
There are numerous other functions that include such a serial parameter. Furthermore, implementations of other functions within the API often increment the global wl_display->serial property to obtain a new serial value before doing some work. My question is, what is the rationale for this serial parameter, in a general sense? Does it have a name? For example, is this an IPC thing, or a common practice in event-driven / asynchronous programming? Is it kind of like the XCB "cookie" concept for asynchronous method calls? Is this technique found in other programs (cite examples please)?
Another example is in GLUT: see glutTimerFunc, discussed here as a "common idiom for asynchronous invocation." I'd love to know if this idiom has a name, and where (good citations please) it's discussed as a best practice or technique in asynchronous / event-driven programming, such as continuations or "signals and slots." Or, for example, how shared resource counts are just integers, but we consider them to be "semaphores."

You may find this helpful:
Some actions that a Wayland client may perform require a trivial form
of authentication in the form of input event serials. For example, a
client which opens a popup (a context menu summoned with a right click
is one kind of popup) may want to "grab" all input events server-side
from the affected seat until the popup is dismissed. To prevent abuse
of this feature, the server can assign serials to each input event it
sends, and require the client to include one of these serials in the
request.
When the server receives such a request, it looks up the input event
associated with the given serial and makes a judgement call. If the
event was too long ago, or for the wrong surface, or wasn't the right
kind of event — for example, it could reject grabs when you wiggle the
mouse, but allow them when you click — it can reject the request.
From the server's perspective, they can simply send an incrementing
integer with each input event, and record the serials which are
considered valid for a particular use-case for later validation. The
client receives these serials from their input event handlers, and can
simply pass them back right away to perform the desired action.
https://wayland-book.com/seat.html#event-serials
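To make that concrete, here is a rough client-side sketch: stash the serial from a pointer button event, then hand it back when opening a popup. This uses the xdg-shell protocol's xdg_popup.grab request (from the wayland-scanner-generated header); the struct app and its fields are hypothetical glue code:

#include <wayland-client.h>
#include "xdg-shell-client-protocol.h"  /* generated by wayland-scanner */

/* Hypothetical application state, for illustration only. */
struct app {
    struct wl_seat *seat;
    struct xdg_popup *popup;
    uint32_t last_button_serial;
};

/* wl_pointer button handler: remember the serial of the click. */
static void pointer_button(void *data, struct wl_pointer *pointer,
                           uint32_t serial, uint32_t time,
                           uint32_t button, uint32_t state)
{
    struct app *app = data;
    if (state == WL_POINTER_BUTTON_STATE_PRESSED)
        app->last_button_serial = serial;
}

/* Later, when opening a context menu, pass the serial back so the
 * server can check the grab is a response to real, recent input. */
static void open_context_menu(struct app *app)
{
    xdg_popup_grab(app->popup, app->seat, app->last_button_serial);
}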

As Hans Passant and Tom Zych state in the comments, the argument distinguishes one asynchronous invocation from another.
I'm still curious about the deeper question, which is if this technique is one commonly used in asynchronous / event-driven software, and if it has a well-known name.

Related

Using Broadcast State To Force Window Closure Using Fake Messages

Description:
Currently I am working on using Flink with an IoT setup. Essentially, devices are sending data such as (device_id, device_type, event_timestamp, etc.) and I don't have any control over when the messages get sent. I then key the stream by device_id and device_type to perform aggregations. I would like to use event time, given that it ensures the timers which are set trigger deterministically after a failure. However, given that this isn't always a high-throughput stream, a window could be opened for a 10-minute aggregation period but not see its next point until approximately 40 minutes later. Although the aggregation would eventually be completed, it would output my desired result extremely late.
So my workaround for this is to create an additional external source that does nothing other than pump fake messages. By pumping out these fake messages in alignment with my 10-minute aggregation period, even if a device hadn't sent any data, the event-time windows would have something to force them closed. The critical part here is to make sure all parallel instances / operators have access to this fake message, because I need to close all the windows with this single fake message. I was thinking that broadcast state might be the most appropriate way to accomplish this goal, given: "Broadcast state is replicated across all parallel instances of a function, and might typically be used where you have two streams, a regular data stream alongside a control stream that serves rules, patterns, or other configuration messages." Quote Source
Questions:
Is broadcast state the best method for ensuring all parallel instances (e.g. windows) receive my fake messages?
Once the operators have access to this fake message via the broadcast state, can this fake message then be used to advance the event time watermark?
You can make this work with broadcast state, along the lines you propose, but I'm not convinced it's the best solution.
In an ideal world I'd suggest you arrange for the devices to send occasional keepalive messages, but assuming that's not possible, I think a custom Trigger would work well here. You can extend the EventTimeTrigger so that in addition to the event time timer it creates via
ctx.registerEventTimeTimer(window.maxTimestamp());
you also create a processing time timer, as a fallback, and you FIRE the window if the window still exists when that processing time timer fires.
I'm recommending this approach because it's simpler and more directly addresses the specific need. With the broadcast state approach you'll have to introduce a source for these messages, add a broadcast state descriptor and stream, add special fake watermarks for the non-broadcast stream (set to Watermark.MAX_WATERMARK), connect the broadcast and non-broadcast streams and implement a BroadcastProcessFunction (that probably doesn't really do anything), etc. It's a lot of moving parts spread across several different operators.

What is difference between MQTTAsync_onSuccess and MQTTAsync_deliveryComplete callbacks?

I'm learning about MQTT (specifically the paho C library) by reading and experimenting with variations on the async pub/sub examples.
What's the difference between the MQTTAsync_deliveryComplete callback that you set with MQTTAsync_setCallbacks() vs. the MQTTAsync_onSuccess or MQTTAsync_onSuccess5 callbacks that you set in the MQTTAsync_responseOptions struct that you pass to MQTTAsync_sendMessage() ?
All seem to deal with "successful delivery" of published messages, but from reading the example code and doxygen, I can't tell how they relate to or conflict with or supplement each other. Grateful for any guidance.
Basically, MQTTAsync_deliveryComplete and MQTTAsync_onSuccess do the same thing: they notify you via a callback about the delivery of a message. Both callbacks are executed asynchronously, on a separate thread from the thread on which the client application is running.
(Both callbacks even use the same thread in the current version of the Paho client, but this is an undocumented implementation detail. The thread used by MQTTAsync_deliveryComplete and MQTTAsync_onSuccess is of course not the application thread, otherwise it would not be an asynchronous callback.)
The difference is that the MQTTAsync_deliveryComplete callback is set once via MQTTAsync_setCallbacks, and then you are informed about every delivery of a message.
In contrast, MQTTAsync_onSuccess informs you exactly once, for the specific message that you sent out via MQTTAsync_sendMessage().
You can even define both callbacks, which will both be called when a message is delivered.
This gives you the flexibility to choose the approach that best suits your needs.
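A minimal sketch showing both side by side (assuming an already-created and connected MQTTAsync client; the topic and payload are made up):

#include <stdio.h>
#include "MQTTAsync.h"

/* Generic callback, set once, fired for every delivered message. */
static void delivery_complete(void *context, MQTTAsync_token token)
{
    printf("some message (token %d) was delivered\n", token);
}

/* Per-send callback, fired once for this particular send. */
static void on_send_success(void *context, MQTTAsync_successData *response)
{
    printf("this send (token %d) succeeded\n", response->token);
}

static void setup(MQTTAsync client)
{
    /* deliveryComplete is registered once for the whole client. */
    MQTTAsync_setCallbacks(client, NULL, NULL, NULL, delivery_complete);
}

static void publish_example(MQTTAsync client)
{
    MQTTAsync_message msg = MQTTAsync_message_initializer;
    msg.payload = "23.5";
    msg.payloadlen = 4;
    msg.qos = 1;

    /* onSuccess rides along with this one specific send. */
    MQTTAsync_responseOptions opts = MQTTAsync_responseOptions_initializer;
    opts.onSuccess = on_send_success;
    opts.context = client;

    MQTTAsync_sendMessage(client, "sensors/temperature", &msg, &opts);
}

For this message, both callbacks fire: delivery_complete because it fires for every delivery, and on_send_success because it was attached to this particular send.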
Artificial example
Suppose you have three different functions, each sending a specific type of message (e.g. sendTemperature(), sendHumidity(), sendAirPressure()), each of which calls MQTTAsync_sendMessage, and after each delivery you want a matching callback function to be invoked. Then you would choose MQTTAsync_onSuccess, and you would not need to keep track of each MQTTAsync_token and associate it with your callbacks.
If, on the other hand, you want to implement something like a logging function, MQTTAsync_deliveryComplete would be more useful because it is called for every delivery.
And of course one can imagine wanting both: the specific callback with some actions and the generic one for logging. In this case both variants can be used at the same time.
Documentation
You should note that the documentation of MQTTAsync_deliveryComplete explicitly states that it takes the quality of service (QoS) that was set into account. The MQTTAsync_onSuccess documentation does not say this, which of course does not mean that the implementation does not do it. But if this is important to you, you should explicitly check the source code.

SIP protocol / call waiting

First I would like to apologize for my bad English; I hope you will understand my problem.
Here's my question: for my internship, I need to create a feature that allows a caller to put a call on hold with a button, and to take the call back with that same button. I think there is an option in the SIP protocol that allows this, but I just can't find it. I've searched the internet and some documentation; the only thing I might know, and I'm not even sure of it, is that it could be an option in a re-INVITE request, which can be sent by either the called or the calling party. Could someone help me?
Thanks
The feature you are looking for is achieved by implementing the Call Hold Scenario on a SIP Call.
There are three ways to put the call on hold at the press of the button:
Generate a re-INVITE whose SDP carries a=sendonly (illustrated below); the answer should then contain a=recvonly, and in this case you can go ahead and inject hold music into the RTP stream.
Send a=inactive in the re-INVITE SDP, which puts the media into an inactive state for the session. This is for when no RTP exchange is desired.
Send the 0.0.0.0 connection-address notation in the re-INVITE SDP. This is the old, deprecated form of call hold from when IPv4 was still the norm [still is!!]; it makes sure the RTP has no IP address to be sent to.
All of these mechanisms rely on the basic SIP methods, and hence it shouldn't be very difficult to achieve using any client software.
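For illustration, the offer/answer for the first method might look like this (a sketch; the addresses and ports are placeholders):

Offer SDP carried in the re-INVITE (holding party):
m=audio 49170 RTP/AVP 0
c=IN IP4 192.0.2.10
a=sendonly

Answer SDP from the held party:
m=audio 49172 RTP/AVP 0
c=IN IP4 192.0.2.20
a=recvonly

Pressing the button again sends another re-INVITE with a=sendrecv (or simply no direction attribute, which defaults to sendrecv) to resume the call.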

Is FilterSendNetBufferLists handler a must for an NDIS filter to use NdisFSendNetBufferLists?

Everyone, I am porting WinPcap from an NDIS 6 protocol driver to an NDIS 6 filter driver. It is nearly finished, but I still have a few questions:
The comment in ndislwf says "A filter that doesn't provide a FilterSendNetBufferLists handler can not originate a send on its own." Does this mean that if I use the NdisFSendNetBufferLists function, I have to provide a FilterSendNetBufferLists handler? My driver will send self-constructed packets via NdisFSendNetBufferLists, but I don't want to filter other programs' sent packets.
The same goes for FilterReturnNetBufferLists, where the comment says "A filter that doesn't provide a FilterReturnNetBufferLists handler cannot originate a receive indication on its own." What does "originate a receive indication" mean: NdisFIndicateReceiveNetBufferLists, NdisFReturnNetBufferLists, or both? Also, for my driver, I only want to capture received packets, not the returned packets. So if possible, I don't want to provide the FilterReturnNetBufferLists function, for performance reasons.
A similar case is FilterOidRequestComplete and NdisFOidRequest: my filter driver only wants to send OID requests itself via NdisFOidRequest, not filter OID requests sent by others. Can I leave FilterOidRequest, FilterCancelOidRequest and FilterOidRequestComplete as NULL? Or which of them is a must in order to use NdisFOidRequest?
Thx.
Send and Receive
A LWF can either be:
completely excluded from the send path, unable to see other protocols' send traffic, and unable to send any of its own traffic; or
integrated into the send path, able to see and filter other protocols' send and send-complete traffic, and able to inject its own traffic
It's an all-or-nothing model. Since you want to send your own self-constructed packets, you must install a FilterSendNetBufferLists handler and a FilterSendNetBufferListsComplete handler. If you're not interested in other protocols' traffic, then your send handler can be as simple as the sample's send handler — just dump everything into NdisFSendNetBufferLists without looking at it.
The FilterSendNetBufferListsComplete handler needs to be a little more careful. Iterate over all the completed NBLs and pick out the ones that you sent. You can identify the packets you sent by looking at NET_BUFFER_LIST::SourceHandle. Remove those from the stream (possibly reusing them, or just NdisFreeNetBufferList them). All the other packets then go up the stack via NdisFSendNetBufferListsComplete.
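A sketch of that pattern, following the WDK ndislwf sample's conventions (MS_FILTER is the sample's per-filter context and FilterHandle is the handle NDIS gave you at attach time; it assumes your self-originated NBLs were allocated with NdisAllocateNetBufferAndNetBufferList and had SourceHandle set to FilterHandle before being sent):

#include <ndis.h>

VOID
FilterSendNetBufferLists(NDIS_HANDLE FilterModuleContext,
                         PNET_BUFFER_LIST NetBufferLists,
                         NDIS_PORT_NUMBER PortNumber,
                         ULONG SendFlags)
{
    PMS_FILTER pFilter = (PMS_FILTER)FilterModuleContext;

    /* Not interested in other protocols' traffic: pass everything down. */
    NdisFSendNetBufferLists(pFilter->FilterHandle, NetBufferLists,
                            PortNumber, SendFlags);
}

VOID
FilterSendNetBufferListsComplete(NDIS_HANDLE FilterModuleContext,
                                 PNET_BUFFER_LIST NetBufferLists,
                                 ULONG SendCompleteFlags)
{
    PMS_FILTER pFilter = (PMS_FILTER)FilterModuleContext;
    PNET_BUFFER_LIST Nbl = NetBufferLists;
    PNET_BUFFER_LIST PassUp = NULL;
    PNET_BUFFER_LIST PassUpTail = NULL;

    while (Nbl != NULL)
    {
        PNET_BUFFER_LIST Next = NET_BUFFER_LIST_NEXT_NBL(Nbl);
        NET_BUFFER_LIST_NEXT_NBL(Nbl) = NULL;

        if (Nbl->SourceHandle == pFilter->FilterHandle)
        {
            /* One of ours: reclaim it rather than completing it upward. */
            NdisFreeNetBufferList(Nbl);
        }
        else
        {
            /* Someone else's: re-link it so it continues up the stack. */
            if (PassUp == NULL)
                PassUp = Nbl;
            else
                NET_BUFFER_LIST_NEXT_NBL(PassUpTail) = Nbl;
            PassUpTail = Nbl;
        }
        Nbl = Next;
    }

    if (PassUp != NULL)
        NdisFSendNetBufferListsComplete(pFilter->FilterHandle, PassUp,
                                        SendCompleteFlags);
}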
The above discussion also applies to the receive path. The only difference between send and receive is that on the receive path, you must pay close attention to the NDIS_RECEIVE_FLAGS_RESOURCES flag.
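For the receive side, the resources flag is the main trap. A sketch of a pass-through receive handler that respects it (again using the sample's PMS_FILTER context):

VOID
FilterReceiveNetBufferLists(NDIS_HANDLE FilterModuleContext,
                            PNET_BUFFER_LIST NetBufferLists,
                            NDIS_PORT_NUMBER PortNumber,
                            ULONG NumberOfNetBufferLists,
                            ULONG ReceiveFlags)
{
    PMS_FILTER pFilter = (PMS_FILTER)FilterModuleContext;

    if (NDIS_TEST_RECEIVE_FLAG(ReceiveFlags, NDIS_RECEIVE_FLAGS_RESOURCES))
    {
        /* The NBLs are only on loan: the caller reclaims them as soon as
         * this function returns, so copy anything you need now and do not
         * queue these NBLs. No FilterReturnNetBufferLists will follow. */
    }

    /* Capture/inspect here if desired, then indicate everything up. */
    NdisFIndicateReceiveNetBufferLists(pFilter->FilterHandle, NetBufferLists,
                                       PortNumber, NumberOfNetBufferLists,
                                       ReceiveFlags);
}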
OID requests
Like the datapath, if you want to participate in OID requests at all (either filtering or issuing your own), you must be integrated into the entire OID stack. That means that you provide FilterOidRequest, FilterOidRequestComplete, and FilterCancelOidRequest handlers. You don't need to do anything special in these handlers beyond what the sample does, except again detecting OID requests that your filter originated in the oid-complete handler, and removing those from the stream (call NdisFreeCloneOidRequest on them).
Performance
Do not worry about performance here. The first step is to get it working. Even though the sample filter inserts itself into the send, receive, and OID paths, it's almost impossible to come up with any sort of benchmark that can detect the presence of the sample filter. It's extremely cheap to have do-nothing handlers in a filter.
If you feel very strongly about this, you can selectively remove your filter from the datapath with calls to NdisFRestartFilter and NdisSetOptionalHandlers(NDIS_FILTER_PARTIAL_CHARACTERISTICS). But I absolutely don't think you need the complexity. If you're coming from an NDIS 5 protocol that was capturing in promiscuous mode, you've already gotten a big perf improvement by switching to the native networking data structures (NDIS_PACKET->NBL) and eliminating the loopback path. You can leave additional fine-tuning to the next version.

How do I go about not freezing a GTK button that processes information after clicking it?

I'm guessing I'm going to need threading, but before I teach myself some bad practices I wanted to make sure I'm going about this the correct way.
Basically I have a "chat" application that can be told to listen or ping the recipient's IP address:port (in my current case just 127.0.0.1:1300). When I open up my application twice (the first one to listen, the second to send a ping), I pick one and tell it to listen (which is a while loop that just constantly listens until it gets a ping message) and the other one pings it. It works just peachy!
The problem is that when I click the "Listen for ping" button, it goes into a glued "down" mode and freezes up "visually"; however, it prints the UDP packet message to the console, so I know it's not actually frozen. So my question is: how do I make it so I can click the "Listen" button and have it listen, while at the same time having a working Cancel button so the user can cancel the process if it's taking too long?
This most likely happens because you use synchronous (blocking) socket IO. Your server application most likely blocks on recv()/read(), which blocks your thread's execution until some data arrives; it then processes the data and returns to the blocked state. Hence, your button is rendered by GTK as pushed.
There are, basically, two generic approaches to this problem. The first one is threading. But I would recommend against it in simpler applications; this approach is generally error-prone and pretty complicated to implement properly.
The second approach is asynchronous IO. First, you may use select()/poll() functions to wait for one of multiple FDs to be signalled (on such events as 'data received', 'data sent', 'connection accepted'). But in a GUI application where the main loop is not immediately available (I'm not sure about GTK, but this is the case in many GUI toolkits), this is usually impossible. In such cases, you may use generic asynchronous IO libraries (like boost asio). With GLIB, IIRC, you can create channels for socket interaction (g_io_channel_unix_new()) and then assign callbacks to them (g_io_add_watch()) which will be called when something interesting happens.
The idea behind asynchronous IO is pretty simple: you ask the OS to do something (send data, wait for events) and then you do other important things (GUI interaction, etc.) until something you requested is done (you have to be able to receive notifications of such events).
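A sketch of the GLib channel approach for your listening socket (assuming sockfd is a bound UDP socket created elsewhere):

#include <glib.h>
#include <sys/socket.h>

/* Called from the main loop whenever the socket becomes readable,
 * so the GUI stays responsive between pings. */
static gboolean on_socket_ready(GIOChannel *source, GIOCondition cond,
                                gpointer user_data)
{
    char buf[1024];
    int fd = g_io_channel_unix_get_fd(source);
    ssize_t n = recv(fd, buf, sizeof(buf) - 1, 0);
    if (n > 0) {
        buf[n] = '\0';
        g_print("got ping: %s\n", buf);
    }
    return TRUE;   /* keep the watch installed */
}

guint start_listening(int sockfd)
{
    GIOChannel *chan = g_io_channel_unix_new(sockfd);
    guint watch_id = g_io_add_watch(chan, G_IO_IN, on_socket_ready, NULL);
    g_io_channel_unref(chan);
    /* A Cancel button handler can simply g_source_remove(watch_id). */
    return watch_id;
}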
So, here's what you may want to study next:
select()/poll() (the latter is generally easier to use)
boost asio library
GLIB channels and asynchronous IO
