"Sliding Window" - Is it possible to add reliability to a protocol and avoid flow control Implementation? [closed] - c

As a part of a personal project, I am making an application level protocol (encapsulated in UDP) which is reliable.
For the implementation of reliability, I have to keep track of which packets I send and which packets are received at the other end. This is done with the help of a sliding window, which also maintains flow control.
Is there a way to implement reliability apart from the standard sliding-window/flow-control technique?
If no, will someone share their experience/design rationale/code and discuss it in this post?
If yes, have you implemented it, or do you know of any implementation of the concept?

It's sad that the TCP/IP stack doesn't include a reliable datagram protocol, but it just doesn't. You can search for lots of attempts and proposals.
If this is a personal project, your time is probably the most scarce resource, and unless your goal is to reinvent this particular wheel, just build your protocol on top of TCP and move on.

After reading all these answers, learning from other implementations, and reading some papers, I am writing up what is most pertinent. First let me talk about flow control; later I will talk about reliability.
There are two kinds of flow control:
Rate-based -- packet transmission is paced out over time.
Window-based -- the window can be static or dynamic (sliding window).
Rate-based flow control is difficult to implement, as it relies on an RTT (round-trip time) calculation, and that is not as simple as ping's RTT. If you decide to provide a proprietary congestion-control system, and you are providing it from the first release, then you can go for rate-based flow control. Explicit congestion control also exists, but it depends on router support, so it is out of the picture here.
In window-based flow control, a window is used to keep track of all the sent packets until the sender is sure the receiver has received them. A static window is simple to implement, but throughput will be miserable. A dynamic window (also known as a sliding window) is a better implementation, but a little more complex, and it depends on the kind of acknowledgement mechanism used.
Now Reliability...
Reliability is making sure the receiver has received your packet/information, which means the receiver has to tell the sender: yes, I got it. This notification mechanism is called acknowledgement.
Of course, one also needs throughput for the data transferred, so you should be able to send as many packets as you can -- up to MIN(sender's sending limit, receiver's receiving limit) -- provided you have available bandwidth at both ends and throughout the path.
Putting all this together: although reliability and flow control are different concepts, implementation-wise the underlying mechanism for both is best realized with a sliding window.
So, in short: anyone designing a new application protocol that needs to be reliable should use a sliding window to achieve it. If you are also planning to implement congestion control, you might as well use a hybrid (window + rate-based) approach, as UDT does, for example.
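To make the mechanism concrete, here is a minimal sketch of sender-side window state in C. The names and the fixed WINDOW_SIZE are my own illustration (not from any particular implementation), and retransmission timers are omitted:

#include <stdbool.h>
#include <stdint.h>

#define WINDOW_SIZE 32           /* max packets in flight */

struct swnd {
    uint32_t base;               /* oldest unacked sequence number */
    uint32_t next_seq;           /* next sequence number to send */
    bool acked[WINDOW_SIZE];     /* per-slot ack flag, indexed by seq % WINDOW_SIZE */
};

/* Room in the window? If not, the sender must wait -- this is the flow control. */
static bool swnd_can_send(const struct swnd *w)
{
    return w->next_seq - w->base < WINDOW_SIZE;
}

/* Mark a packet acked and slide the window past any contiguous acks. */
static void swnd_ack(struct swnd *w, uint32_t seq)
{
    if (seq - w->base >= WINDOW_SIZE)
        return;                  /* duplicate or out-of-window ack */
    w->acked[seq % WINDOW_SIZE] = true;
    while (w->base != w->next_seq && w->acked[w->base % WINDOW_SIZE]) {
        w->acked[w->base % WINDOW_SIZE] = false;
        w->base++;               /* the window slides forward */
    }
}

Note how the same base/next_seq bookkeeping gives both properties at once: unacked packets stay in the window for retransmission (reliability), and when the window is full the sender simply stops (flow control).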

I sort of agree with Nik on this one; it sounds like you're using UDP to do TCP's job: reliable transmission and flow control. However, sometimes there are reasons to do this yourself.
To answer your question: yes, there are UDP-based protocols that do reliable transmission and don't care much about ordering, meaning only dropped packets carry a performance penalty on the way to the destination (by requiring retransmission).
The best example of this that we use daily is a protocol called RADIUS, which we use for authentication and accounting on our EVDO network. Each packet between the source and destination gets an identifier field (RADIUS only uses 1 byte; you may want more), and each identifier needs to be acked. Now, RADIUS doesn't really use the concept of a sliding window, since it's really just control-plane traffic, but it's entirely viable to borrow the concept from TCP. Because each packet needs to be acked, you buffer copies of outgoing packets until they are acked by the remote endpoint, and everyone is happy. Flow control can use feedback from this acknowledgement mechanism to scale the packet rate up or down, which is most easily controlled by the size of the list of packets transmitted but awaiting acknowledgement.
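As a sketch of that buffering idea (hypothetical C, not actual RADIUS code): each unacknowledged packet is kept in a table keyed by its 1-byte identifier, and an ack frees its slot independently of any ordering. The number of occupied slots is the "list size" you could feed into flow control:

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* One outstanding (sent but not yet acked) packet, keyed by identifier. */
struct pending {
    size_t   len;
    uint8_t *copy;                      /* buffered copy kept for retransmission */
};

static struct pending table[256];       /* one slot per 1-byte identifier */

/* Buffer a copy of an outgoing packet until its ack arrives. */
static void remember(uint8_t id, const uint8_t *pkt, size_t len)
{
    free(table[id].copy);               /* drop any stale copy */
    table[id].copy = malloc(len);       /* error handling elided */
    memcpy(table[id].copy, pkt, len);
    table[id].len = len;
}

/* An ack frees its slot; ordering between identifiers doesn't matter. */
static void on_ack(uint8_t id)
{
    free(table[id].copy);
    table[id].copy = NULL;
    table[id].len = 0;
}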
Just keep in mind that there are literally decades of research into sliding windows and TCP/IP stack tuning around packet loss, so you may end up with something that works, but it may be difficult to replicate what you get from a highly tweaked stack on a modern OS.
The real benefit of this method is when you need reliable transport but don't need ordering, because you're sending disjoint pieces of information. This is where TCP breaks down: a single dropped packet stops all subsequent packets from making it up the stack until it is retransmitted.

You could have a simple Send->Ack protocol: for every packet, you require an Ack before proceeding with the next. (Effectively this is a window size of 1 packet, which I wouldn't call a sliding window :-)
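A sketch of that stop-and-wait loop over a connected UDP socket (the 4-byte ack format, the 1-second timeout, and the omitted error handling are all assumptions of this illustration):

#include <poll.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/socket.h>

/* Send one datagram and block until the matching ack arrives,
 * retransmitting on timeout -- effectively a window of size 1. */
static void send_and_wait(int fd, const void *pkt, size_t len, uint32_t seq)
{
    for (;;) {
        send(fd, pkt, len, 0);                 /* connected UDP socket */
        struct pollfd p = { .fd = fd, .events = POLLIN };
        if (poll(&p, 1, 1000) <= 0)
            continue;                          /* timeout: retransmit */
        uint32_t ack;
        if (recv(fd, &ack, sizeof ack, 0) == (ssize_t)sizeof ack && ack == seq)
            return;                            /* acked: move to the next packet */
    }
}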

You could have something like the following:
Part 1: Initialization
1) Sender sends a packet with the # of packets to be sent. This packet may require some kind of control bit to be set or be a different size so that the receiver can distinguish it from regular packets.
2) Receiver sends ACK to Sender.
3) Repeat steps 1-2 until ACK received by Sender.
Part 2: Send bulk of data
4) Sender then sends all packets with a sequence number attached to the front of the data portion.
5) Receiver receives all the packets and arranges them as specified by the sequence numbers. The receiver keeps a data structure to keep track of which sequence numbers have been received.
Part 3: Send missing data
6) After some timeout period has elapsed with no more packets received, the Receiver sends a message to the Sender requesting the missing packets.
7) Sender sends the missing packets to the Receiver.
8) Repeat steps 6-7 until Receiver has received all required packets.
9) Receiver sends a special "Done" packet to the Sender.
10) Sender sends ACK to Receiver.
11) Repeat steps 9-10 until the ACK is received by the Receiver.
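As a sketch, the data packets in Part 2 could carry a small header like the one below; the exact field widths and the flag encoding are my own assumptions, not part of the recipe above:

#include <stdint.h>

#define PKT_INIT 0x01   /* step 1: payload carries the total packet count */
#define PKT_DATA 0x00   /* step 4: payload carries one data chunk */

struct pkt_hdr {
    uint8_t  flags;     /* PKT_INIT or PKT_DATA -- the "control bit" from step 1 */
    uint32_t seq;       /* sequence number, network byte order on the wire */
} __attribute__((packed));  /* GCC/Clang; use the equivalent pragma elsewhere */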
Data Structure
There are a few ways the Receiver can keep track of the missing sequence numbers. The most straightforward way would be to keep a boolean for each sequence number, but this is extremely inefficient. You may instead want to keep a list of missing packet ranges. For example, if there are 100 total packets, you would start with a list of one element: [(1-100)]. Each time a packet is received, shrink or split the range that contains it. Say you receive packets 1-12 successfully, miss packets 13-14, receive 15-44, miss 45-46, and receive the rest; you end up with [(13-14), (45-46)]. It would be quite easy to put this data structure into a packet and send it off to the Sender. You could perhaps do even better with a tree instead.
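Here is a sketch of that range list in C (array-backed, with illustrative names). Receiving a packet shrinks the missing range that contains it, or splits it in two:

#include <stdint.h>

#define MAX_RANGES 64

struct range { uint32_t lo, hi; };      /* inclusive range of missing packets */

static struct range missing[MAX_RANGES];
static int nranges;

/* Mark sequence number seq as received: shrink or split its range. */
static void mark_received(uint32_t seq)
{
    for (int i = 0; i < nranges; i++) {
        if (seq < missing[i].lo || seq > missing[i].hi)
            continue;
        if (missing[i].lo == missing[i].hi) {
            missing[i] = missing[--nranges];    /* range fully received: drop it */
        } else if (seq == missing[i].lo) {
            missing[i].lo++;                    /* shrink from the front */
        } else if (seq == missing[i].hi) {
            missing[i].hi--;                    /* shrink from the back */
        } else if (nranges < MAX_RANGES) {      /* capacity check only for this sketch */
            missing[nranges++] = (struct range){ seq + 1, missing[i].hi };
            missing[i].hi = seq - 1;            /* split into (lo..seq-1), (seq+1..hi) */
        }
        return;
    }
}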

You might want to look at SCTP - it's reliable and message-oriented.

It might be a better approach to maintain local state in the UDP applications and check that the necessary data has been transferred, rather than trying to do complete packet-level reliability and flow control.
Trying to replicate TCP's reliability and flow-control mechanisms over a UDP path is not a good answer at all.

Related

Using Broadcast State To Force Window Closure Using Fake Messages

Description:
Currently I am working on using Flink with an IoT setup. Essentially, devices are sending data such as (device_id, device_type, event_timestamp, etc.), and I don't have any control over when the messages get sent. I then key the stream by device_id and device_type to perform aggregations. I would like to use event time, given that it ensures the timers that are set fire deterministically in the event of a failure. However, since this isn't always a high-throughput stream, a window could be opened for a 10-minute aggregation period but not see its next point until approximately 40 minutes later. Although the aggregation would eventually complete, it would output my desired result extremely late.
So my workaround is to create an additional external source that does nothing other than pump out fake messages. By having these fake messages pumped out in alignment with my 10-minute aggregation period, even if a device hasn't sent any data, the event-time windows will have something to force them closed. The critical part here is making sure that all parallel instances/operators have access to this fake message, because I need to close all the windows with this single fake message. I was thinking that broadcast state might be the most appropriate way to accomplish this, given: "Broadcast state is replicated across all parallel instances of a function, and might typically be used where you have two streams, a regular data stream alongside a control stream that serves rules, patterns, or other configuration messages." Quote Source
Questions:
Is broadcast state the best method for ensuring all parallel instances (e.g. windows) receive my fake messages?
Once the operators have access to this fake message via the broadcast state can this fake message then be used to advance the event time watermark?
You can make this work with broadcast state, along the lines you propose, but I'm not convinced it's the best solution.
In an ideal world I'd suggest you arrange for the devices to send occasional keepalive messages, but assuming that's not possible, I think a custom Trigger would work well here. You can extend the EventTimeTrigger so that in addition to the event time timer it creates via
ctx.registerEventTimeTimer(window.maxTimestamp());
you also create a processing time timer, as a fallback, and you FIRE the window if the window still exists when that processing time timer fires.
I'm recommending this approach because it's simpler and more directly addresses the specific need. With the broadcast state approach you'll have to introduce a source for these messages, add a broadcast state descriptor and stream, add special fake watermarks for the non-broadcast stream (set to Watermark.MAX_WATERMARK), connect the broadcast and non-broadcast streams and implement a BroadcastProcessFunction (that probably doesn't really do anything), etc. It's a lot of moving parts spread across several different operators.

What's the purpose of the serial parameter in the Wayland API?

I've been working with the Wayland protocol lately, and many functions include a uint32_t serial parameter. Here's an example from wayland-client-protocol.h:
struct wl_shell_surface_listener {
    /**
     * ping client
     *
     * Ping a client to check if it is receiving events and sending
     * requests. A client is expected to reply with a pong request.
     */
    void (*ping)(void *data,
                 struct wl_shell_surface *wl_shell_surface,
                 uint32_t serial);
    // ...
};
The intent of this parameter is such that a client would respond with a pong to the display server, passing it the value of serial. The server would compare the serial it received via the pong with the serial it sent with the ping.
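For instance, a client satisfies the ping by echoing the serial back via wl_shell_surface_pong, the counterpart request in the same protocol. A minimal sketch of a handler:

#include <wayland-client.h>

/* Reply to the server's ping by echoing the serial back. */
static void handle_ping(void *data,
                        struct wl_shell_surface *wl_shell_surface,
                        uint32_t serial)
{
    wl_shell_surface_pong(wl_shell_surface, serial);
}

static const struct wl_shell_surface_listener listener = {
    .ping = handle_ping,
    /* .configure and .popup_done handlers omitted for brevity */
};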
There are numerous other functions that include such a serial parameter. Furthermore, implementations of other functions within the API often increment the global wl_display->serial property to obtain a new serial value before doing some work. My question is, what is the rationale for this serial parameter, in a general sense? Does it have a name? For example, is this an IPC thing, or a common practice in event-driven / asynchronous programming? Is it kind of like the XCB "cookie" concept for asynchronous method calls? Is this technique found in other programs (cite examples please)?
Another example is in GLUT; see glutTimerFunc, discussed here as a "common idiom for asynchronous invocation." I'd love to know if this idiom has a name, and where (good citations please) it's discussed as a best practice or technique in asynchronous/event-driven programming, such as continuations or "signals and slots." Or, for example, how shared resource counts are just integers, but we consider them to be "semaphores."
You may find this helpful
Some actions that a Wayland client may perform require a trivial form
of authentication in the form of input event serials. For example, a
client which opens a popup (a context menu summoned with a right click
is one kind of popup) may want to "grab" all input events server-side
from the affected seat until the popup is dismissed. To prevent abuse
of this feature, the server can assign serials to each input event it
sends, and require the client to include one of these serials in the
request.
When the server receives such a request, it looks up the input event
associated with the given serial and makes a judgement call. If the
event was too long ago, or for the wrong surface, or wasn't the right
kind of event — for example, it could reject grabs when you wiggle the
mouse, but allow them when you click — it can reject the request.
From the server's perspective, it can simply send an incrementing
integer with each input event, and record the serials which are
considered valid for a particular use-case for later validation. The
client receives these serials from their input event handlers, and can
simply pass them back right away to perform the desired action.
https://wayland-book.com/seat.html#event-serials
As Hans Passant and Tom Zych state in the comments, the argument distinguishes one asynchronous invocation from another.
I'm still curious about the deeper question, which is if this technique is one commonly used in asynchronous / event-driven software, and if it has a well-known name.

SIP protocol / call waiting

First, I would like to apologize for my bad English; I hope you will understand my problem.
Here's my question: for my internship, I need to create a feature that allows a caller to put his call on hold with a button, and to take the call back with that button again. I think there's an option in the SIP protocol that allows this, but I just can't find it. I searched the internet and some documentation; the only thing I might know (and I'm not even sure) is that it could be an option in a re-INVITE request, which can be sent by either the callee or the caller. Could someone help me?
Thanks
The feature you are looking for is achieved by implementing the call-hold scenario on a SIP call.
There are three ways to put the call on hold at the press of a button:
Send a re-INVITE whose SDP contains a=sendonly. The answer should contain a=recvonly, and in this case you can go ahead and inject hold music into the RTP stream.
Send a=inactive in the re-INVITE SDP, which makes the media inactive for the session. Use this when no RTP exchange is desired.
Send the 0.0.0.0 connection-address notation in the re-INVITE SDP. This is the old, deprecated form of call hold from when IPv4 was still the norm (it still is!), but it makes sure the RTP has no IP address to be sent to.
All of these mechanisms rely on basic SIP methods, so it shouldn't be very difficult to achieve using any client software.
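For illustration, the SDP offer in a hold re-INVITE using the first mechanism might look like this (addresses, ports, and session IDs are made up):

v=0
o=alice 2890844526 2890844527 IN IP4 192.0.2.10
s=-
c=IN IP4 192.0.2.10
t=0 0
m=audio 49170 RTP/AVP 0
a=sendonly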

Is FilterSendNetBufferLists handler a must for an NDIS filter to use NdisFSendNetBufferLists?

Everyone, I am porting WinPcap from an NDIS 6 protocol driver to an NDIS 6 filter driver. It is nearly finished, but I still have a few questions:
The comment in ndislwf says, "A filter that doesn't provide a FilterSendNetBufferLists handler cannot originate a send on its own." Does this mean that if I use the NdisFSendNetBufferLists function, I have to provide a FilterSendNetBufferLists handler? My driver will send self-constructed packets via NdisFSendNetBufferLists, but I don't want to filter other programs' sent packets.
The same goes for FilterReturnNetBufferLists: "A filter that doesn't provide a FilterReturnNetBufferLists handler cannot originate a receive indication on its own." What does "originate a receive indication" mean -- NdisFIndicateReceiveNetBufferLists, NdisFReturnNetBufferLists, or both? Also, my driver only wants to capture received packets, not returned packets, so if possible I'd rather not provide the FilterReturnNetBufferLists function, for performance reasons.
A similar case is FilterOidRequestComplete and NdisFOidRequest: my filter driver only wants to send OID requests itself via NdisFOidRequest, not filter OID requests sent by others. Can I leave FilterOidRequest, FilterCancelOidRequest, and FilterOidRequestComplete as NULL? Or which ones are required in order to use NdisFOidRequest?
Thx.
Send and Receive
A LWF can either be:
completely excluded from the send path, unable to see other protocols' send traffic, and unable to send any of its own traffic; or
integrated into the send path, able to see and filter other protocols' send and send-complete traffic, and able to inject its own traffic
It's an all-or-nothing model. Since you want to send your own self-constructed packets, you must install a FilterSendNetBufferLists handler and a FilterSendNetBufferListsComplete handler. If you're not interested in other protocols' traffic, then your send handler can be as simple as the sample's send handler — just dump everything into NdisFSendNetBufferLists without looking at it.
The FilterSendNetBufferListsComplete handler needs to be a little more careful. Iterate over all the completed NBLs and pick out the ones that you sent. You can identify the packets you sent by looking at NET_BUFFER_LIST::SourceHandle. Remove those from the stream (possibly reusing them, or just NdisFreeNetBufferList them). All the other packets then go up the stack via NdisFSendNetBufferListsComplete.
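A sketch of that demultiplexing loop, following the ndislwf pattern (PMY_FILTER and its FilterHandle member are assumptions about your filter context, and freeing assumes your driver allocated the NBL from its own pool):

VOID
FilterSendNetBufferListsComplete(NDIS_HANDLE FilterModuleContext,
                                 PNET_BUFFER_LIST NetBufferLists,
                                 ULONG SendCompleteFlags)
{
    PMY_FILTER pFilter = (PMY_FILTER)FilterModuleContext;
    PNET_BUFFER_LIST passUp = NULL;
    PNET_BUFFER_LIST *passUpTail = &passUp;
    PNET_BUFFER_LIST nbl = NetBufferLists;

    while (nbl != NULL) {
        PNET_BUFFER_LIST next = NET_BUFFER_LIST_NEXT_NBL(nbl);
        NET_BUFFER_LIST_NEXT_NBL(nbl) = NULL;

        if (nbl->SourceHandle == pFilter->FilterHandle) {
            NdisFreeNetBufferList(nbl);     /* one of ours: reclaim it */
        } else {
            *passUpTail = nbl;              /* not ours: preserve order */
            passUpTail = &NET_BUFFER_LIST_NEXT_NBL(nbl);
        }
        nbl = next;
    }

    if (passUp != NULL)
        NdisFSendNetBufferListsComplete(pFilter->FilterHandle,
                                        passUp, SendCompleteFlags);
}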
The above discussion also applies to the receive path. The only difference between send and receive is that on the receive path, you must pay close attention to the NDIS_RECEIVE_FLAGS_RESOURCES flag.
OID requests
Like the datapath, if you want to participate in OID requests at all (either filtering or issuing your own), you must be integrated into the entire OID stack. That means that you provide FilterOidRequest, FilterOidRequestComplete, and FilterCancelOidRequest handlers. You don't need to do anything special in these handlers beyond what the sample does, except again detecting OID requests that your filter originated in the oid-complete handler, and removing those from the stream (call NdisFreeCloneOidRequest on them).
Performance
Do not worry about performance here. The first step is to get it working. Even though the sample filter inserts itself into the send, receive, and OID paths, it's almost impossible to come up with any sort of benchmark that can detect its presence. It's extremely cheap to have do-nothing handlers in a filter.
If you feel very strongly about this, you can selectively remove your filter from the datapath with calls to NdisFRestartFilter and NdisSetOptionalHandlers(NDIS_FILTER_PARTIAL_CHARACTERISTICS). But I absolutely don't think you need the complexity. If you're coming from an NDIS 5 protocol that was capturing in promiscuous mode, you've already gotten a big perf improvement by switching to the native networking data structures (NDIS_PACKET->NBL) and eliminating the loopback path. You can leave additional fine-tuning to the next version.

How to write a simple text based protocol, preferably in C

I want to write a client program that communicates with an application server via standard TCP/IP. The client speaks to the application server and is authenticated simply by speaking a specific text-based protocol. The traffic will be encrypted, but there won't be a username/password. If another application tries to communicate with the application server without using the correct text-based protocol, the server will silently discard the packets.
Waiting for suggestions.
You can use a simplified version of TLV (Tag Length Value).
The basic idea is to define a set of message types, each represented by a code of fixed size (the T, for Tag). Depending on the type of message, its contents (the V, for Value) can vary, so you specify the length (the L, for Length) before the contents. The Length field also has a fixed size.
Suppose you have one message used to send user data to the server. You can define a message like:
0x10 0x0018 0x11 0x0003 tom 0x12 0x000F tom@hotmail.com
Tag 0x10: user data. Length: 0x0018. Value: sub-TLVs
Tag 0x11: user name. Length: 0x0003. Value: tom
Tag 0x12: email. Length: 0x000F. Value: tom@hotmail.com
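A sketch of building that message in C, assuming 1-byte tags and 2-byte big-endian lengths as in the example above (buffer sizing and error checks elided):

#include <stdint.h>
#include <string.h>

/* Append one TLV field: 1-byte tag, 2-byte big-endian length, then the value. */
static size_t tlv_put(uint8_t *out, uint8_t tag, const void *val, uint16_t len)
{
    out[0] = tag;
    out[1] = (uint8_t)(len >> 8);
    out[2] = (uint8_t)(len & 0xff);
    memcpy(out + 3, val, len);
    return 3 + (size_t)len;
}

/* Build the "user data" message from the example: 6 + 18 = 24 (0x0018) inner bytes. */
static size_t build_user_msg(uint8_t *buf)
{
    uint8_t inner[64];
    size_t n = 0;
    n += tlv_put(inner + n, 0x11, "tom", 3);               /* user name */
    n += tlv_put(inner + n, 0x12, "tom@hotmail.com", 15);  /* email */
    return tlv_put(buf, 0x10, inner, (uint16_t)n);         /* outer user-data TLV */
}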
Edited:
I was about to forget: Merry Christmas :)
Take a look at BEEP.
You might also find some good examples at four.livejournal.com; he's gotten good results writing an HTTP parser using the Ragel state machine generator, and also by hand.
If you're not comfortable with the limited set of functionality (verbs) HTTP provides, just add more verbs. This is what the REST architecture is for.
If you want to continue down your path of folly (you're talking about reinventing HTTPS), then use protocol buffers to create the protocol -- it will save you hours of grief.
-- edit --
If your objective is to understand the programming involved in web servers, you might want to read Apache's code as dissected by the FMC group into a collection of models. I have read this PDF multiple times -- it is an absolute gold mine.
All the other comments are good, and things like BEEP or a custom TLV encoding can get you a long way, as can something like Google protocol buffers, but none of these are what I'd really call simple.
A very simple text-based protocol could just use a newline as the message delimiter. This is how IRC does it. It's not the most efficient, but if your messages are reasonably small it can work quite well. You could also prefix each message with a much shorter line telling the receiver how long the next message is.
If you want to use a light framework, look at libevent. It can assist with your I/O and do line-delimited reading for you.
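A sketch of newline-delimited framing over a plain socket, without libevent (POSIX assumed; the static buffer limits this to one connection and is purely for illustration):

#include <string.h>
#include <unistd.h>

/* Accumulate bytes from fd and invoke the callback once per complete line. */
static void read_lines(int fd, void (*on_msg)(const char *line, size_t len))
{
    static char buf[4096];
    static size_t used = 0;

    ssize_t n = read(fd, buf + used, sizeof buf - used);
    if (n <= 0)
        return;                        /* EOF or error: handle in real code */
    used += (size_t)n;

    char *nl;
    while ((nl = memchr(buf, '\n', used)) != NULL) {
        size_t len = (size_t)(nl - buf);
        *nl = '\0';
        on_msg(buf, len);              /* one complete, NUL-terminated message */
        memmove(buf, nl + 1, used - len - 1);
        used -= len + 1;
    }
}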
If the language (protocol) is not already determined for you, then that is what you should design first, or look at something that already exists - XML, JSON chunks, netstrings, etc.
You can look at some of the sample code from TCP/IP Sockets in C.
It has many examples of doing client/server communication in C. Without more details, it's difficult to know what you really want to handle...
For communicating between bespoke apps, you can just send your text format in TCP packets. You can use an extremely simple text format, but you should make sure that it starts with some text that clearly identifies to your server that it is a packet from your client, and not from an imposter. (Clearly this is not terribly good security, but that's not the point of your question).
A good place to start is to use XML for your text-based format. This is dead simple to write/read, and is flexible and extensible so you can easily add more information to your packets at a later date - the biggest thing you can get wrong is to use a communications format that can't be extended!
Once you have basic comms working, you can enhance the format to send more information, add encryption and other security measures, and consider moving to a binary (more secure, more compact and efficient) format. But you can work your way to this stage in small, easy steps.
So the right direction:
Get two programs talking via TCP. Just a simple packet with the text "bob" in it is enough at this stage, just to verify that the messaging is working. There are any number of simple tutorials on the web to get this going, and it's just a few lines of code once you work out what's needed.
Then build your packets. Start with the simplest approach that gives you a unique ID (to verify that the packet is from the right program) and a means to add new data to the packet easily in the future. XML is ideal for this. Don't worry about security; just concentrate on the actual "conversation" you wish to convey between the programs -- what data they wish to exchange and how to encode it.
Step by step, improve the communications protocol until it achieves what you want: smaller, faster, binary, more robust, fault-tolerant, secure, etc. Each of these steps will be an interesting little challenge, and by the time you've done them all you'll have learned a lot.
Look at the chapter on text protocols in 'The Art of UNIX Programming' by E S Raymond. It covers a lot of the relevant ideas at a high level, with good examples, and explanations of why they are good examples. It mentions BEEP.
I've recently read a book on this topic: "TCP/IP Sockets in C" by Michael J. Donahoo and Kenneth L. Calvert. If you can afford it, it's a nice tutorial/reference book to have.
If you'd like, you can try creating the client<->server pair in Java, as it is easier to grasp the idea there, and then rethink the solution at a lower level in C.
