Where can data be buffered on an RFCOMM connection? - c

I'm trying to pass data between two Bluetooth devices, each connected to a different computer.
With an HCI device on each computer, I'm using RFCOMM to pass information between the two.
I'm sending 10 MB of random data just to test what the system can handle.
At the beginning everything seems to work fine. After a few seconds, though, a delay appears between the sender and the receiver: sometimes data stops arriving, and then a "massive" amount of data suddenly reaches the receiver.
It is exactly as if some buffer were holding all the data. As long as I keep sending data,
the delay keeps growing.
I'm trying to figure out where in the chain such a buffer could be, or how to
eliminate this buffering.
Many Thanks :)

Related

Using Broadcast State To Force Window Closure Using Fake Messages

Description:
Currently I am working on using Flink with an IoT setup. Essentially, devices are sending data such as (device_id, device_type, event_timestamp, etc.), and I don't have any control over when the messages get sent. I then key the stream by device_id and device_type to perform aggregations. I would like to use event time, since it ensures that the timers which are set fire deterministically after a failure. However, given that this isn't always a high-throughput stream, a window could be opened for a 10-minute aggregation period but not see its next point until approximately 40 minutes later. Although the aggregation would eventually complete, it would output my desired result extremely late.
So my workaround is to create an additional external source that does nothing other than pump out fake messages. By having these fake messages pumped out in alignment with my 10-minute aggregation period, even if a device hadn't sent any data, the event-time windows would have something to force them closed. The critical part here is that all parallel instances/operators have access to this fake message, because I need to close all the windows with this single fake message. I was thinking that broadcast state might be the most appropriate way to accomplish this goal, given: "Broadcast state is replicated across all parallel instances of a function, and might typically be used where you have two streams, a regular data stream alongside a control stream that serves rules, patterns, or other configuration messages." Quote Source
Questions:
Is broadcast state the best method for ensuring all parallel instances (e.g. windows) receive my fake messages?
Once the operators have access to this fake message via the broadcast state, can it then be used to advance the event-time watermark?
You can make this work with broadcast state, along the lines you propose, but I'm not convinced it's the best solution.
In an ideal world I'd suggest you arrange for the devices to send occasional keepalive messages, but assuming that's not possible, I think a custom Trigger would work well here. You can extend the EventTimeTrigger so that in addition to the event time timer it creates via
ctx.registerEventTimeTimer(window.maxTimestamp());
you also create a processing time timer, as a fallback, and you FIRE the window if the window still exists when that processing time timer fires.
I'm recommending this approach because it's simpler and more directly addresses the specific need. With the broadcast state approach you'll have to introduce a source for these messages, add a broadcast state descriptor and stream, add special fake watermarks for the non-broadcast stream (set to Watermark.MAX_WATERMARK), connect the broadcast and non-broadcast streams and implement a BroadcastProcessFunction (that probably doesn't really do anything), etc. It's a lot of moving parts spread across several different operators.

ArduPilot, DroneKit-Python, MAVProxy and MAVLink - Hunt for the Bottleneck

I have ArduPilot on a plane, using a 3DR Radio back to a Raspberry Pi on the ground, which does some advanced geo- and attitude-based maths and provides audio feedback to the pilot (rather than making them look at a screen).
I am using DroneKit-Python, which in turn uses MAVProxy and MAVLink. What I am finding is that I am only getting new attitude data to the Pi at about 3 Hz, and I am not sure where the bottleneck is:
The 3DR radio link is running at 57.6 kbaud and all happy
I have turned off the automatic push of logs from ArduPilot down to the Pi (part of MAVProxy)
The Pi can ask for attitude data (roll, yaw, etc.) through the DroneKit-Python API as often as it likes, but only gets new data (i.e., a change in value) about every 1/3 second.
I am not deep enough inside the underlying architecture to understand what the bottleneck may be; can anyone help? Is it likely a round-trip message response time from base to plane and back (others seem to get around 8 Hz from MAVLink from what I have read)? Or latency across the combination of MAVProxy, MAVLink and DroneKit? Or is there some setting inside ArduPilot or the telemetry that could be driving this?
I am aware this isn't necessarily a DroneKit issue, but I'm not really sure where it belongs, as it spans quite a few components.
Requesting individual packets should work, but that mechanism was never meant to be used many times per second.
In order to get a certain packet many times per second, set up streams. A stream will trigger a certain number of times per second, and will then send whichever packet is associated with it, automatically. The ATTITUDE message is in the group called EXTRA1.
Let's suppose you want to receive 10 ATTITUDE messages per second. The relevant parameter is called SR0_EXTRA1. This defines the number of Attitude packets sent per second. The default is 4. Try increasing that parameter to 10.
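For illustration, here is a rough sketch of requesting that stream at runtime with the generated MAVLink C headers, using the REQUEST_DATA_STREAM message. The IDs below are typical ground-station-side values, not requirements; in DroneKit-Python you could instead simply set the SR0_EXTRA1 parameter through vehicle.parameters.

#include <stdint.h>
#include <common/mavlink.h>   /* generated MAVLink C headers */

/* Build a REQUEST_DATA_STREAM message asking the autopilot to send the
 * EXTRA1 stream (which carries ATTITUDE) at 10 Hz. */
static uint16_t build_extra1_request(uint8_t *buf)
{
    mavlink_message_t msg;

    mavlink_msg_request_data_stream_pack(
        255,                      /* our system id (common GCS convention) */
        190,                      /* our component id */
        &msg,
        1,                        /* target system: the autopilot */
        1,                        /* target component */
        MAV_DATA_STREAM_EXTRA1,   /* the stream group containing ATTITUDE */
        10,                       /* requested rate in Hz */
        1);                       /* 1 = start, 0 = stop */

    /* Serialize into buf (at least MAVLINK_MAX_PACKET_LEN bytes); the
     * result is what you write to the telemetry serial port. */
    return mavlink_msg_to_send_buffer(buf, &msg);
}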

Is a FilterSendNetBufferLists handler a must for an NDIS filter to use NdisFSendNetBufferLists?

Everyone, I am porting WinPcap from an NDIS 6 protocol driver to an NDIS 6 filter driver. It is nearly finished, but I still have a few questions:
The comment in ndislwf says "A filter that doesn't provide a FilterSendNetBufferList handler can not originate a send on its own." Does this mean that if I use the NdisFSendNetBufferLists function, I have to provide the FilterSendNetBufferLists handler? My driver will send self-constructed packets via NdisFSendNetBufferLists, but I don't want to filter other programs' sent packets.
The same goes for FilterReturnNetBufferLists: "A filter that doesn't provide a FilterReturnNetBufferLists handler cannot originate a receive indication on its own." What does "originate a receive indication" mean: NdisFIndicateReceiveNetBufferLists, NdisFReturnNetBufferLists, or both? Also, my driver only needs to capture received packets, not returned packets, so if possible I would rather not provide the FilterReturnNetBufferLists function, for performance reasons.
A similar case is FilterOidRequestComplete and NdisFOidRequest: my filter driver only wants to send OID requests itself via NdisFOidRequest, not filter OID requests sent by others. Can I leave FilterOidRequest, FilterCancelOidRequest and FilterOidRequestComplete set to NULL? Or which of them is required in order to use NdisFOidRequest?
Thx.
Send and Receive
An LWF can either be:
completely excluded from the send path, unable to see other protocols' send traffic, and unable to send any of its own traffic; or
integrated into the send path, able to see and filter other protocols' send and send-complete traffic, and able to inject its own traffic
It's an all-or-nothing model. Since you want to send your own self-constructed packets, you must install a FilterSendNetBufferLists handler and a FilterSendNetBufferListsComplete handler. If you're not interested in other protocols' traffic, then your send handler can be as simple as the sample's send handler — just dump everything into NdisFSendNetBufferLists without looking at it.
The FilterSendNetBufferListsComplete handler needs to be a little more careful. Iterate over all the completed NBLs and pick out the ones that you sent. You can identify the packets you sent by looking at NET_BUFFER_LIST::SourceHandle. Remove those from the stream (possibly reusing them, or just NdisFreeNetBufferList them). All the other packets then go up the stack via NdisFSendNetBufferListsComplete.
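As a rough sketch of that send-complete pattern (the MY_FILTER context struct is a hypothetical name; it assumes your self-originated NBLs had SourceHandle set to your filter handle at send time, and the cleanup is simplified):

#include <ndis.h>

/* Hypothetical per-filter context; only the filter handle matters here.
 * FilterHandle is the NdisFilterHandle received in FilterAttach. */
typedef struct _MY_FILTER {
    NDIS_HANDLE FilterHandle;
} MY_FILTER, *PMY_FILTER;

VOID
MySendNetBufferListsComplete(
    NDIS_HANDLE FilterModuleContext,
    PNET_BUFFER_LIST NetBufferLists,
    ULONG SendCompleteFlags)
{
    PMY_FILTER Filter = (PMY_FILTER)FilterModuleContext;
    PNET_BUFFER_LIST PassUp = NULL;
    PNET_BUFFER_LIST PassUpTail = NULL;
    PNET_BUFFER_LIST Nbl = NetBufferLists;

    while (Nbl != NULL) {
        PNET_BUFFER_LIST Next = NET_BUFFER_LIST_NEXT_NBL(Nbl);
        NET_BUFFER_LIST_NEXT_NBL(Nbl) = NULL;

        if (Nbl->SourceHandle == Filter->FilterHandle) {
            /* One of our self-originated sends: reclaim it. (Simplified;
             * real code also frees the MDLs/data it allocated for it.) */
            NdisFreeNetBufferList(Nbl);
        } else {
            /* Someone else's traffic: re-link it to complete upward. */
            if (PassUpTail == NULL)
                PassUp = Nbl;
            else
                NET_BUFFER_LIST_NEXT_NBL(PassUpTail) = Nbl;
            PassUpTail = Nbl;
        }
        Nbl = Next;
    }

    if (PassUp != NULL) {
        NdisFSendNetBufferListsComplete(Filter->FilterHandle,
                                        PassUp, SendCompleteFlags);
    }
}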
The above discussion also applies to the receive path. The only difference between send and receive is that on the receive path, you must pay close attention to the NDIS_RECEIVE_FLAGS_RESOURCES flag.
OID requests
As with the datapath, if you want to participate in OID requests at all (either filtering or issuing your own), you must be integrated into the entire OID stack. That means you provide FilterOidRequest, FilterOidRequestComplete, and FilterCancelOidRequest handlers. You don't need to do anything special in these handlers beyond what the sample does, except again detecting the OID requests that your filter originated in the OID-complete handler and removing those from the stream (call NdisFreeCloneOidRequest on them).
Performance
Do not worry about performance here. The first step is to get it working. Even though the sample filter inserts itself into the send, receive, and OID paths, it's almost impossible to come up with any sort of benchmark that can detect its presence. It's extremely cheap to have do-nothing handlers in a filter.
If you feel very strongly about this, you can selectively remove your filter from the datapath with calls to NdisFRestartFilter and NdisSetOptionalHandlers(NDIS_FILTER_PARTIAL_CHARACTERISTICS). But I absolutely don't think you need the complexity. If you're coming from an NDIS 5 protocol that was capturing in promiscuous mode, you've already gotten a big perf improvement by switching to the native networking data structures (NDIS_PACKET->NBL) and eliminating the loopback path. You can leave additional fine-tuning to the next version.

Server client message verification

I have a question about communication between a client and a server. I would like to send data over a TCP Unix socket (I know how to do this), but I don't know the best practice for checking that a sent message is ready to be read in its entirety (not block by block).
Thus, I'm thinking of this:
The client sends the data printf(3)-formatted; the message is written into a string and sent.
The server receives the message, but how can it be sure the message is complete? Do I need to loop until the whole message has arrived?
So my idea is to use a code (or maybe a checksum?) that is prepended and appended to the message, like this:
[verification code] my_long_data_formatted [verification code]
And then the server tries to read the data until the second verification code is read and successfully checked.
Is this a proper solution for client/server communication? If so, what do you advise for the verification boundaries?
TCP already has a built-in checksum/verification. So if the message was received, it was received correctly.
Typically then, the only thing you have to worry about is figuring out how long the message is. This is done either by sending the message length at the beginning or by putting a termination character or sequence at the end.
To make sure both the sender and receiver are "on the same page" so to speak, the receiver will typically send back a response after the message was received, even if that response only says "OK".
Examples of this technique include HTTP, SMTP, POP3, IMAP, and many others.
There are several ways to do this, so I don't think there is one "proper" solution. Your solution will probably work. The one note of caution is that you need to make sure the validation code you choose cannot appear as part of the data in the message. If it does, you will decide the message is complete even though it really isn't. If it is not possible to know what the data will look like, you may need to try a different technique.
From your description, it sounds like your messages are variable length. One alternative would be to make all the messages the same length; that way you know how much data to read each time to get a complete message.
Another way would be to send the length of the message first (such as a binary 32-bit number), indicating the number of bytes to be read until the end of the message. You read this first to get the amount of data, and then read that amount from the socket.
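A minimal sketch of that length-prefix framing in C (the helper names send_all, recv_all, send_msg, and recv_msg are mine, and error handling is reduced to pass/fail):

#include <arpa/inet.h>
#include <stdint.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Write exactly len bytes, looping over short writes. */
static int send_all(int fd, const void *buf, size_t len)
{
    const char *p = buf;
    while (len > 0) {
        ssize_t n = send(fd, p, len, 0);
        if (n <= 0)
            return -1;
        p += n;
        len -= (size_t)n;
    }
    return 0;
}

/* Read exactly len bytes, looping over short reads. */
static int recv_all(int fd, void *buf, size_t len)
{
    char *p = buf;
    while (len > 0) {
        ssize_t n = recv(fd, p, len, 0);
        if (n <= 0)
            return -1;              /* error or peer closed */
        p += n;
        len -= (size_t)n;
    }
    return 0;
}

/* Frame a message as a 4-byte big-endian length prefix plus payload. */
static int send_msg(int fd, const void *data, uint32_t len)
{
    uint32_t netlen = htonl(len);
    if (send_all(fd, &netlen, sizeof netlen) < 0)
        return -1;
    return send_all(fd, data, len);
}

/* Read one length-prefixed message into buf (at most cap bytes). */
static int recv_msg(int fd, void *buf, uint32_t cap, uint32_t *out_len)
{
    uint32_t netlen;
    if (recv_all(fd, &netlen, sizeof netlen) < 0)
        return -1;
    *out_len = ntohl(netlen);
    if (*out_len > cap)
        return -1;                  /* message larger than the buffer */
    return recv_all(fd, buf, *out_len);
}

The prefix is sent in network byte order (htonl/ntohl) so both ends agree on its layout regardless of their native endianness.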
If you had a set number of messages where the length was the same each time, you could assign a number to each message and send that number first, which you could then read. With that information you can determine how much data is to be read based on the assigned number to the message.
What you select as a solution will probably be based on factors such as whether messages are variable or fixed in length, and whether you need to send additional information with the data. You might even have a mixture, where you send a fixed-length header that describes the data which follows, giving its length or type.
You need to establish an application-level protocol that would tell you somehow where application messages start and end in the byte stream provided to you by TCP (and then maybe how connected parties proceed with the conversation).
Popular choices are:
Fixed-length messages. Works well with binary data and is very simple.
Variable-size messages with fixed format or size header that tells exact size (and maybe type) of the rest of the message. Works with both binary and text data.
Delimited messages - some character(s) like newline or \x1 are special and denote message boundaries. Best with text data (see the sketch after this list).
Self-describing messages like XML, or S-expressions, or ASN.1.
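To illustrate the delimited option, here is a deliberately naive reader in C that treats '\n' as the message boundary (a real implementation would buffer its reads instead of fetching one byte at a time):

#include <sys/socket.h>
#include <sys/types.h>

/* Read one '\n'-terminated message, byte by byte. Returns the message
 * length (without the delimiter), or -1 on error/close. Messages longer
 * than cap-1 bytes are not handled in this sketch. */
static ssize_t recv_line(int fd, char *buf, size_t cap)
{
    size_t i = 0;
    while (i + 1 < cap) {
        char c;
        ssize_t n = recv(fd, &c, 1, 0);
        if (n <= 0)
            return -1;          /* error or connection closed */
        if (c == '\n')
            break;
        buf[i++] = c;
    }
    buf[i] = '\0';
    return (ssize_t)i;
}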

"Sliding Window" - Is it possible to add reliability to a protocol and avoid flow control Implementation? [closed]

As part of a personal project, I am making an application-level protocol (encapsulated in UDP) that is reliable.
For the implementation of reliability, I have to keep track of which packets I send and which packets are received at the other end. This is done with the help of a sliding window, which also provides flow control.
Is there a way to implement reliability apart from the standard sliding-window/flow-control technique?
If not, will someone share their experience/design rationale/code and discuss it in this post?
If so, have you implemented it, or do you know of any implementation of the concept?
It's sad that the TCP/IP stack doesn't include a reliable datagram protocol, but it just doesn't. You can search for lots of attempts and proposals.
If this is a personal project, your time is probably the most scarce resource, and unless your goal is to reinvent this particular wheel, just build your protocol on top of TCP and move on.
After reading all these answers, learning from other implementations, and reading some papers, I am writing up what is most pertinent. First let me talk about flow control, and later I will talk about reliability.
There are two kinds of flow control--
Rate-based -- packet transmission is paced out over time.
Window-based -- the standard window approach, which can be static or dynamic (sliding window).
Rate-based flow control is difficult to implement, as it is based on RTT calculation (round-trip time; not as simple as ping's RTT). If you decide to provide a proprietary congestion-control system, and you are providing it from the present release, then you can go for rate-based flow control. Explicit congestion control also exists, but it is router-dependent, so it is out of the picture.
In window-based flow control, a window is used to keep track of all the sent packets until the sender is sure the receiver has received them. A static window is simple to implement, but throughput will be miserable. A dynamic window (also known as a sliding window) is a better implementation, but a little more complex, and it depends on the kind of acknowledgement mechanism used.
Now, reliability...
Reliability is making sure the receiver has received your packet/information, which means the receiver has to tell the sender, "yes, I got it." This notification mechanism is called acknowledgement.
Of course, one also needs throughput for the data transferred, so you should be able to send as many packets as you can, up to the smaller of the sender's sending limit and the receiver's receiving limit, provided you have available bandwidth at both ends and throughout the path.
Putting all of this together: although reliability and flow control are different concepts, implementation-wise the underlying mechanism for both is best realized with a sliding window.
So, finally, in short: for me or anyone else designing a new application protocol that needs to be reliable, a sliding window is the way to achieve it. If you are planning to implement congestion control, then you might as well use a hybrid (window- plus rate-based) approach, such as UDT.
I sort of agree with Nik on this one; it sounds like you're using UDP to do TCP's job: reliable transmission and flow control. However, sometimes there are reasons to do this yourself.
To answer your questions: there are UDP-based protocols that do reliable transmission and don't care much about ordering, meaning only dropped packets carry a performance penalty on the way to the destination (by requiring retransmission).
The best example of this that we use daily is a protocol called RADIUS, which we use for authentication and accounting on our EVDO network. Each packet between the source and destination gets an identifier field (RADIUS uses only 1 byte; you may want more), and each identifier needs to be acked. Now, RADIUS doesn't really use the concept of a sliding window, since it carries only control-plane traffic, but it's entirely viable to borrow the concept from TCP. Because each packet needs to be acked, you buffer copies of outgoing packets until they are acked by the remote endpoint, and everyone is happy. Flow control can use feedback from this acknowledgement mechanism to know when it can scale the packet rate up or down, which may be most easily controlled by the size of the list of packets transmitted but awaiting acknowledgement.
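A sketch of that bookkeeping in C, assuming a RADIUS-style one-byte identifier (all names and sizes here are illustrative, not taken from any real RADIUS implementation):

#include <stdint.h>
#include <string.h>
#include <time.h>

#define MAX_PAYLOAD 1500

/* Copy of an outgoing packet, held until the peer acks its identifier.
 * A 1-byte id allows at most 256 packets in flight. */
struct pending {
    int     in_use;
    time_t  sent_at;            /* for retransmission timeouts */
    size_t  len;
    uint8_t data[MAX_PAYLOAD];
};

static struct pending window[256];

/* Record a packet at send time so it can be retransmitted if needed.
 * Assumes len <= MAX_PAYLOAD. */
static void track_sent(uint8_t id, const void *buf, size_t len)
{
    window[id].in_use  = 1;
    window[id].sent_at = time(NULL);
    window[id].len     = len;
    memcpy(window[id].data, buf, len);
}

/* Drop the stored copy once the ack for this id arrives. */
static void handle_ack(uint8_t id)
{
    window[id].in_use = 0;
}

/* The number of unacked packets doubles as a crude flow-control signal:
 * stop sending when it reaches whatever window size you allow. */
static size_t pending_count(void)
{
    size_t n = 0;
    for (size_t i = 0; i < 256; i++)
        n += window[i].in_use ? 1u : 0u;
    return n;
}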
Just keep in mind that there are literally decades of research into sliding windows and into tuning TCP/IP stacks around packet loss, so you may end up with something that works, but it may be difficult to replicate what you get from a highly tweaked stack on a modern OS.
The real benefit of this method is for cases where you need reliable transport but don't need ordering, because you're sending disjoint pieces of information. This is where TCP/IP breaks down: a dropped packet stops all packets from making it up the stack until it is retransmitted.
You could have a simple Send->Ack protocol, where every packet requires an Ack before you proceed with the next. (Effectively this is a window size of 1 packet, which I wouldn't call a sliding window :-)
You could have something like the following:
Part 1: Initialization
1) Sender sends a packet with the # of packets to be sent. This packet may require some kind of control bit to be set or be a different size so that the receiver can distinguish it from regular packets.
2) Receiver sends ACK to Sender.
3) Repeat steps 1-2 until ACK received by Sender.
Part 2: Send bulk of data
4) Sender then sends all packets with a sequence number attached to the front of the data portion.
5) Receiver receives all the packets and arranges them as specified by the sequence numbers. The receiver keeps a data structure to keep track of which sequence numbers have been received.
Part 3: Send missing data
6) After some timeout period has elapsed with no more packets received, the Receiver sends a message to the Sender requesting the missing packets.
7) Sender sends the missing packets to the Receiver.
8) Repeat steps 6-7 until Receiver has received all required packets.
9) Receiver sends a special "Done" packet to the Sender.
10) Sender sends ACK to Receiver.
11) Repeat steps 9-10 until the ACK is received by the Receiver.
Data Structure
There are a few ways the Receiver can keep track of the missing sequence numbers. The most straightforward way would be to keep a boolean for each sequence number; however, this will be extremely inefficient. You may want to keep a list of missing packet ranges instead. For example, if there are 100 total packets, you would start with a list with one element, [(1-100)]. Each time a packet is received, simply increment the number at the front of that range. Say you receive packets 1-12 successfully, miss packets 13-14, receive 15-44, miss 45-46, and receive the rest; then you end up with something like [(13-14), (45-46)]. It would be quite easy to put this data structure into a packet and send it off to the Sender. You could maybe make it even better by using a tree instead, not sure.
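Here is a rough sketch of that range list in C (the names and the 1-based numbering are my assumptions). Receiving a packet either trims a gap from one end or splits it in two:

#include <stdint.h>
#include <stdlib.h>

/* One contiguous run of sequence numbers not yet received (inclusive). */
struct gap {
    uint32_t first, last;
    struct gap *next;
};

/* Start with a single gap covering every expected packet, 1..total. */
static struct gap *gaps_init(uint32_t total)
{
    struct gap *g = malloc(sizeof *g);
    if (g) { g->first = 1; g->last = total; g->next = NULL; }
    return g;
}

/* Remove seq from the gap list; splits a gap when seq falls inside it. */
static void gaps_mark_received(struct gap **head, uint32_t seq)
{
    for (struct gap **pp = head; *pp != NULL; pp = &(*pp)->next) {
        struct gap *g = *pp;
        if (seq < g->first || seq > g->last)
            continue;                       /* not in this gap */
        if (g->first == g->last) {          /* gap closes entirely */
            *pp = g->next;
            free(g);
        } else if (seq == g->first) {
            g->first++;                     /* trim from the front */
        } else if (seq == g->last) {
            g->last--;                      /* trim from the back */
        } else {                            /* split into two gaps */
            struct gap *hi = malloc(sizeof *hi);
            if (hi == NULL)
                return;                     /* allocation failure: give up */
            hi->first = seq + 1;
            hi->last  = g->last;
            hi->next  = g->next;
            g->last   = seq - 1;
            g->next   = hi;
        }
        return;
    }
}

An empty list means every packet has arrived, and the list itself (pairs of first/last) is exactly what the Receiver would serialize into the step-6 request for missing packets.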
You might want to look at SCTP - it's reliable and message-oriented.
It might be a better approach to maintain local state in the UDP applications and check that the necessary data has been transferred, to confirm reliability, rather than trying to do complete packet-level reliability and flow control.
Trying to replicate TCP's reliability and flow-control mechanisms over UDP is not a good answer at all.
