Flow control algorithm selection - C

I need to select an algorithm to implement the following scenario in C.
Scenario: the sender sends same-sized data packets to the receiver at random intervals. The receiver may process data more slowly than the incoming rate, even though the sender spaces the packets at random intervals.
1. How can I avoid sender overrun and process all the data on the receiver side without losing any data from the sender, even though the sender is fast?
2. Which algorithm would be useful for this case (leaky bucket / circular queue / something else)? Please suggest one.
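If the sender and receiver share memory (same process or a shared buffer), a bounded circular queue is usually the simplest answer: the receiver drains it at its own pace, and the sender either blocks or is told to back off when the queue is full. Below is a minimal sketch of that idea, assuming POSIX threads and a fixed packet size; the names ring_t, PKT_SIZE and RING_SLOTS are made up for the example.

#include <pthread.h>
#include <string.h>

#define PKT_SIZE   64      /* fixed packet size (assumption)      */
#define RING_SLOTS 128     /* queue depth; size it for worst burst */

typedef struct {
    unsigned char   buf[RING_SLOTS][PKT_SIZE];
    size_t          head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t  not_full, not_empty;
} ring_t;

/* Initialise with:
 * ring_t r = { .lock = PTHREAD_MUTEX_INITIALIZER,
 *              .not_full = PTHREAD_COND_INITIALIZER,
 *              .not_empty = PTHREAD_COND_INITIALIZER }; */

/* Sender side: blocks while the queue is full, so a fast sender
 * cannot overrun a slow receiver and no packet is dropped. */
void ring_put(ring_t *r, const unsigned char *pkt)
{
    pthread_mutex_lock(&r->lock);
    while (r->count == RING_SLOTS)
        pthread_cond_wait(&r->not_full, &r->lock);
    memcpy(r->buf[r->head], pkt, PKT_SIZE);
    r->head = (r->head + 1) % RING_SLOTS;
    r->count++;
    pthread_cond_signal(&r->not_empty);
    pthread_mutex_unlock(&r->lock);
}

/* Receiver side: blocks while the queue is empty and processes
 * packets at its own pace. */
void ring_get(ring_t *r, unsigned char *pkt)
{
    pthread_mutex_lock(&r->lock);
    while (r->count == 0)
        pthread_cond_wait(&r->not_empty, &r->lock);
    memcpy(pkt, r->buf[r->tail], PKT_SIZE);
    r->tail = (r->tail + 1) % RING_SLOTS;
    r->count--;
    pthread_cond_signal(&r->not_full);
    pthread_mutex_unlock(&r->lock);
}

If the sender and receiver are separate devices, the same idea becomes a sliding-window / credit scheme: the receiver advertises how many packets it can still buffer and the sender stops when the credit runs out. A leaky/token bucket only shapes the rate; if the receiver is persistently slower, data will still be dropped unless it is backed by a large enough queue.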


Akka HTTP streaming API with cycles never completes

I'm building an application where I take a request from a user, call a REST API to get back some data, then based on that response, make another HTTP call and so on. Basically, I'm processing a tree of data where each node in the tree requires me to recursively call this API, like this:
    A
   / \
  B   C
 / \   \
D   E   F
I'm using Akka HTTP with Akka Streams to build the application, so I'm using its streaming API, like this:
val httpFlow = Http().cachedConnection(host = "localhost")

val flow = GraphDSL.create() { implicit builder =>
  import GraphDSL.Implicits._

  val merge = builder.add(Merge[Data](2))
  val bcast = builder.add(Broadcast[ResponseData](2))

  takeUserData ~> merge ~> createRequest ~> httpFlow ~> processResponse ~> bcast
                  merge <~ extractSubtree <~ bcast

  FlowShape(takeUserData.in, bcast.out(1))
}
I understand that the best way to handle recursion in an Akka Streams application is to handle recursion outside of the stream, but since I'm recursively calling the HTTP flow to get each subtree of data, I wanted to make sure that the flow was properly backpressured in case the API becomes overloaded.
The problem is that this stream never completes. If I hook it up to a simple source like this:
val source = Source.single(data)
val sink = Sink.seq[ResponseData]
source.via(flow).runWith(sink)
It prints out that it's processing all the data in the tree and then stops printing anything, just idling forever.
I read the documentation about cycles and the suggestion was to put a MergePreferred in there, but that didn't seem to help. This question helped, but I don't understand why MergePreferred wouldn't stop the deadlock, since unlike their example, the elements are removed from the stream at each level of the tree.
Why doesn't MergePreferred avoid the deadlock, and is there another way of doing this?
MergePreferred (in the absence of eagerComplete being true) will complete when all of its inputs have completed, which is generally true of stages in Akka Streams (completion flows down from the start).
So that implies that the merge can't propagate completion until both the outside input and extractSubtree signal completion. extractSubtree won't signal completion (most likely, without knowing the stages in that flow) until bcast signals completion, which (again most likely) won't happen until processResponse signals completion, which won't happen until httpFlow signals completion, which won't happen until createRequest signals completion, which won't happen until merge signals completion. Because detecting this cycle in general is impossible (consider that there are stages for which completion is entirely dynamic), Akka Streams effectively takes the position that if you want to create a cycle like this, it's on you to determine how to break the cycle.
As you've noticed, eagerComplete = true changes this behavior, but since the merge then completes as soon as any input completes (which, thanks to the cycle, will always be the outside input), it cancels demand on extractSubtree, which by itself can cause the downstream to cancel (depending on whether the Broadcast has eagerCancel set). That will likely result in at least some elements emitted by extractSubtree never getting processed.
If you're absolutely sure that the input completing means that the cycle will eventually dry up, you can use eagerComplete = false if you have some means to complete extractSubtree once the cycle is dry and the input has completed. A broad outline (without knowing what, specifically, is in extractSubtree) for going about this:
map everything coming into extractSubtree from bcast into a Some of the input
prematerialize a Source.actorRef to which you can send a None, save the ActorRef (which will be the materialized value of this source)
merge the input with that prematerialized source
when extracting the subtree, use a statefulMapConcat stage to track whether a) a None has been seen and b) how many subtrees are pending (initial value 1, add the number of (first generation) children of this node minus 1, i.e. no children subtracts 1); if a None has been seen and no subtrees are pending emit a List(None), otherwise emit a List of each subtree wrapped in a Some
have a takeWhile(_.isDefined), which will complete once it sees a None
if you have more complex things (e.g. side effects) in extractSubtrees, you'll have to figure out where to put them
before merging the outside input, pass it through a watchTermination stage, and in the future callback (on success) send a None to the ActorRef you got when prematerializing the Source.actorRef for extractSubtrees. Thus, when the input completes, watchTermination will fire successfully and effectively send a message to extractSubtrees to watch for when it's completed the inflight tree.

How can I get current microphone input level with C WinAPI?

Using the Windows API, I want to implement something like the following: getting the current microphone input level (like the level meter shown in the Windows Settings).
I am not allowed to use external audio libraries, but I can use Windows libraries. So I tried using the waveIn functions, but I do not know how to process the audio input data in real time.
This is the method I am currently using:
Record for 100 milliseconds
Select highest value from the recorded data buffer
Repeat forever
But I think this is way too hacky, and not a recommended way. How can I do this properly?
Having built a tuning wizard for a very dated, but well known, A/V conferencing application, I can say that what you describe is nearly identical to what I did.
A few considerations:
Enqueue 5 to 10 of those 100 ms buffers into the audio device via waveInAddBuffer; IIRC, when the waveIn queue goes empty, weird things happen. Then, as the waveInProc callbacks occur, search the completed buffer for the sample with the highest absolute value, as you describe. Plot that onto your visualization, and requeue the completed buffers.
It might seem obvious to map the sample value linearly onto your visualization.
For example, to plot a 16-bit sample:
// convert sample magnitude from 0..32768 to 0..N
length = (sample * N) / 32768;
DrawLine(length);
But then, when you speak into the microphone, the visualization won't seem very "active" or "vibrant".
A better approach is to give more weight to the lower-energy samples. An easy way to do this is to re-plot along the μ-law curve (or use a table lookup):
length = (sample * N) / 32768;                    // linear value in 0..N
length = (N * log(1.0 + length)) / log(1.0 + N);  // compress along a log (μ-law-like) curve
length = min(length, N);                          // clamp to the plot range
DrawLine(length);
You can tweak the above approach to whatever looks good.
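For reference, here is a minimal sketch of the waveIn pattern described above in plain C. It polls for completed buffers with CALLBACK_NULL instead of doing the work inside a waveInProc callback (the documented restrictions on what you may call from that callback make polling simpler to show); the buffer count, format and report_peak() helper are assumptions for the example. Link with winmm.lib.

#include <windows.h>
#include <mmsystem.h>
#include <stdio.h>
#include <stdlib.h>

#define NUM_BUFS 8
#define SAMPLES  1600                 /* 100 ms at 16 kHz, mono, 16-bit */

static void report_peak(const short *samples, DWORD bytes)
{
    DWORD i, n = bytes / sizeof(short);
    int peak = 0;
    for (i = 0; i < n; i++) {
        int v = abs(samples[i]);
        if (v > peak) peak = v;
    }
    printf("peak: %d / 32768\n", peak);   /* feed this into your DrawLine() */
}

int main(void)
{
    WAVEFORMATEX fmt = {0};
    HWAVEIN hwi;
    static short   data[NUM_BUFS][SAMPLES];
    static WAVEHDR hdr[NUM_BUFS];
    int i;

    fmt.wFormatTag      = WAVE_FORMAT_PCM;
    fmt.nChannels       = 1;
    fmt.nSamplesPerSec  = 16000;
    fmt.wBitsPerSample  = 16;
    fmt.nBlockAlign     = fmt.nChannels * fmt.wBitsPerSample / 8;
    fmt.nAvgBytesPerSec = fmt.nSamplesPerSec * fmt.nBlockAlign;

    if (waveInOpen(&hwi, WAVE_MAPPER, &fmt, 0, 0, CALLBACK_NULL) != MMSYSERR_NOERROR)
        return 1;

    /* Keep several buffers queued so the device never runs dry. */
    for (i = 0; i < NUM_BUFS; i++) {
        hdr[i].lpData         = (LPSTR)data[i];
        hdr[i].dwBufferLength = sizeof(data[i]);
        waveInPrepareHeader(hwi, &hdr[i], sizeof(WAVEHDR));
        waveInAddBuffer(hwi, &hdr[i], sizeof(WAVEHDR));
    }
    waveInStart(hwi);

    for (;;) {
        for (i = 0; i < NUM_BUFS; i++) {
            if (hdr[i].dwFlags & WHDR_DONE) {
                report_peak(data[i], hdr[i].dwBytesRecorded);
                waveInAddBuffer(hwi, &hdr[i], sizeof(WAVEHDR));  /* requeue */
            }
        }
        Sleep(10);
    }
}

The callback-based variant works the same way; just keep the work done inside waveInProc to a minimum, per the waveInProc documentation.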
Instead of computing the values yourself, you can rely on values from Windows. These are actually the values displayed in your screenshot from the Windows Settings.
See the following sample for the IAudioMeterInformation interface:
https://learn.microsoft.com/en-us/windows/win32/coreaudio/peak-meters.
The sample is written for playback, but you can use it for capture as well.
One remark: if you open IAudioMeterInformation for a microphone but no application has opened a stream from this microphone, the level will be 0.
That means that while you want to display your microphone peak meter, you will need to keep a microphone stream open, as you already do.
Also read the documentation about IAudioMeterInformation carefully; it may not be what you need, as it reports the peak value. It depends on what you want to do with it.
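For completeness, a minimal C sketch of that approach (error handling trimmed; COBJMACROS keeps the COM calls readable, link with ole32.lib, and depending on your SDK you may need the GUID definitions from initguid.h as shown or from uuid.lib). As noted above, the meter only reports non-zero values while some capture stream on that microphone is open.

#define COBJMACROS
#include <windows.h>
#include <initguid.h>
#include <mmdeviceapi.h>
#include <endpointvolume.h>
#include <stdio.h>

int main(void)
{
    IMMDeviceEnumerator *enumr = NULL;
    IMMDevice *mic = NULL;
    IAudioMeterInformation *meter = NULL;
    float peak = 0.0f;

    CoInitialize(NULL);
    CoCreateInstance(&CLSID_MMDeviceEnumerator, NULL, CLSCTX_ALL,
                     &IID_IMMDeviceEnumerator, (void **)&enumr);

    /* Default capture endpoint, i.e. the microphone. */
    IMMDeviceEnumerator_GetDefaultAudioEndpoint(enumr, eCapture, eConsole, &mic);
    IMMDevice_Activate(mic, &IID_IAudioMeterInformation, CLSCTX_ALL, NULL,
                       (void **)&meter);

    /* Poll the peak level (0.0 .. 1.0); in a real app do this on a timer. */
    for (int i = 0; i < 50; i++) {
        IAudioMeterInformation_GetPeakValue(meter, &peak);
        printf("peak: %.3f\n", peak);
        Sleep(100);
    }

    IAudioMeterInformation_Release(meter);
    IMMDevice_Release(mic);
    IMMDeviceEnumerator_Release(enumr);
    CoUninitialize();
    return 0;
}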

What is an appropriate way to detect errors in CAN bus?

I am communicating with a battery that sends data over the CAN bus protocol (J1939). I use a PIC18F26K83. My goal is to display the remaining state of charge on a display. My current idea is to collect many readings and display the most frequent value (for example, if 60 of the array entries hold 99% and 40 of them hold 1%, I display 99%). However, this does not look like a reliable solution, because I do not know how much garbage data I receive. Please note that I cannot add my own error detection, such as a checksum, because I have no access to the microcontroller in the battery; I can only change the receiver side (the display).
Edit: I am aware of the CRC in CAN bus, but it seems like it sometimes does not work, since I still occasionally get garbage.
Yes, you can rely on the CRC, because the CRC is also calculated on the receiver side by the CAN communication controller. That is how a CRC error is detected in the first place. To elaborate:
The battery sends a complete message; the message picks up interference on the physical layer; the receiver (your PIC) recalculates the CRC over the received message; it does not match the CRC field included in the message;
the PIC's CAN controller increments its receive error counter (REC) by 1 and does not ACK the message to the battery.
You will detect every type of CAN message error on the receiver side, except a bit error, which is irrelevant here anyway since it results in an incomplete message.
Basically, you should not be able to receive corrupted (garbage) CAN message content and deduce a battery level from it; a corrupted message is simply discarded before it reaches the application layer of your PIC.
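If you still want an application-level sanity check on top of what the CAN controller already guarantees, a small sketch of the majority-vote idea from the question might look like this; read_soc_percent() is a hypothetical accessor for the SOC value your J1939 stack decodes from frames that have already passed the controller's CRC check.

#include <stdint.h>
#include <string.h>

#define SOC_WINDOW 100          /* number of recent readings to keep */

/* Hypothetical: returns the latest state of charge (0..100) decoded
 * from a CAN frame that the controller has already CRC-checked.    */
extern uint8_t read_soc_percent(void);

/* Majority (mode) filter over the last SOC_WINDOW readings.
 * Note: it returns 0 until the window has filled with real samples. */
uint8_t filtered_soc(void)
{
    static uint8_t  window[SOC_WINDOW];
    static uint16_t idx = 0;
    uint16_t count[101];
    uint16_t i;
    uint8_t  best = 0;

    window[idx] = read_soc_percent();
    idx = (idx + 1) % SOC_WINDOW;

    memset(count, 0, sizeof(count));
    for (i = 0; i < SOC_WINDOW; i++)
        count[window[i]]++;
    for (i = 1; i <= 100; i++)
        if (count[i] > count[best])
            best = (uint8_t)i;

    return best;                /* the value to show on the display */
}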

Timer in Selective Repeat ARQ

My professor has given me an assignment to implement the Selective Repeat ARQ algorithm in C for packet transaction between sender and receiver.
There is a timer associated with each packet at the sender, which is started when that packet is sent; based on these timers it is decided which packet needs to be retransmitted.
But I don't know how to set up the timer for each packet.
Please suggest some method for it.
Thanks in advance!
Keep a data structure (e.g. a priority queue or ordered map or some such) that contains each packet you're planning to (re)send, along with the time at which you're intending to (re)send it. Ideally this data structure will be such that it is efficient to determine the smallest timestamp currently in the data structure, but if the number of scheduled packets will be relatively small, a simpler unordered data structure like a linked list could work too.
On each iteration of your event loop, determine the smallest timestamp value in the data structure. Subtract the current time from that timestamp value to get a delay time (in milliseconds or microseconds or similar).
If you're using select() or similar, you can pass that delay time as your timeout argument. If you're doing something simpler without multiplexing, you might be able to get away with passing the delay time to usleep() or similar instead.
After select() (or usleep()) returns, check the current time again. If the current time is now greater than or equal to your target time, you can send the packet with the smallest timestamp, and then remove it from your data structure. (If you think you might want to resend it again later, you can re-insert it into the data structure with a new/updated timestamp value)
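A rough C sketch of that event-loop approach, assuming a small fixed window of pending packets and a hypothetical resend_packet() helper that actually puts a packet back on the wire:

#include <sys/select.h>
#include <sys/time.h>

#define WINDOW 8

struct pending {
    int            in_use;      /* slot holds an unacked packet */
    int            seq;         /* sequence number              */
    struct timeval deadline;    /* when to retransmit           */
};

static struct pending window[WINDOW];

/* Hypothetical helper that actually (re)sends the packet. */
extern void resend_packet(int seq);

/* Find the smallest deadline; returns 0 if nothing is pending. */
static int earliest(struct timeval *out)
{
    int found = 0;
    for (int i = 0; i < WINDOW; i++) {
        if (window[i].in_use &&
            (!found || timercmp(&window[i].deadline, out, <))) {
            *out = window[i].deadline;
            found = 1;
        }
    }
    return found;
}

void event_loop(int sockfd)
{
    for (;;) {
        struct timeval now, next, timeout, *tp = NULL;
        fd_set rfds;

        gettimeofday(&now, NULL);
        if (earliest(&next)) {
            if (timercmp(&next, &now, >))
                timersub(&next, &now, &timeout);
            else
                timerclear(&timeout);           /* already overdue */
            tp = &timeout;
        }

        FD_ZERO(&rfds);
        FD_SET(sockfd, &rfds);
        select(sockfd + 1, &rfds, NULL, NULL, tp);

        /* If FD_ISSET(sockfd, &rfds), read ACKs here and clear
         * window[i].in_use for the acknowledged sequence numbers. */

        gettimeofday(&now, NULL);
        for (int i = 0; i < WINDOW; i++) {
            if (window[i].in_use && !timercmp(&window[i].deadline, &now, >)) {
                resend_packet(window[i].seq);
                window[i].deadline = now;
                window[i].deadline.tv_sec += 1;   /* new 1 s timeout */
            }
        }
    }
}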
You can also use threads for that purpose, which is quite easy and needs fewer lines of code.
You just need to create and define this function:
unsigned long CALLBACK packetTimer(void *pn){
    // This is our packet timer; it runs on its own thread.
    // While ack[thisPacketNumber] is not true, we keep checking the
    // elapsed time; if it reaches the limit, we send the packet again
    // and reset the timer. Once ack[] becomes true, we break out of
    // the loop and the thread ends.
    int pno = (int)(intptr_t)pn;
    std::clock_t start;
    double duration;
    start = std::clock();
    while(1){
        if(!ack[pno]){
            duration = (std::clock() - start) / (double)CLOCKS_PER_SEC;
            if(duration > 0.5){
                // We haven't received our ACK for this packet yet, so send it again
                printf("SendBuffer for Packet %d: %s", pno, packets[pno]->data);
                // Resending the packet
                send_unreliably(s, packets[pno]->data, (result->ai_addr));
                // Resetting the timer
                start = std::clock();
            }
        }else{
            break;
        }
    }
    return 0;
}
And inside your while loop, where you send packets to and receive packets from the receiver, you simply add:
unsigned long tid;  // This should be outside the while loop,
                    // ideally at the beginning of the main function
CreateThread(NULL, 0, packetTimer, (void *)(intptr_t)packetNumber, 0, &tid);
This implementation is for Windows; for UNIX you would use pthread_create() instead (see the minimal sketch after the header list below).
That's it.
And don't forget to add the required header files, like:
#include <stdlib.h>
#include <stdint.h>   // for intptr_t
#include <cstdio>
#include <ctime>
#include <windows.h>  // for CreateThread
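For the UNIX side mentioned above, a minimal sketch using pthread_create() might look like this; ack[], packets[], s and result are assumed to be the same program state used in the Windows version, and the 500 ms timeout mirrors the 0.5 s used there.

#include <netdb.h>      /* struct addrinfo */
#include <pthread.h>
#include <stdint.h>
#include <unistd.h>

/* Assumed to exist in the surrounding program, mirroring the Windows version: */
struct pkt { char data[1024]; };
extern volatile int ack[];                 /* ack[i] set when packet i is acknowledged */
extern struct pkt *packets[];              /* the packets to (re)send                  */
extern int s;                              /* the UDP socket                           */
extern struct addrinfo *result;            /* peer address                             */
extern void send_unreliably(int sock, const char *data, struct sockaddr *to);

void *packetTimer(void *pn)
{
    int pno = (int)(intptr_t)pn;

    while (!ack[pno]) {
        usleep(500 * 1000);                /* 500 ms retransmission timeout */
        if (!ack[pno])
            send_unreliably(s, packets[pno]->data, result->ai_addr);
    }
    return NULL;
}

/* Inside the sender loop (the equivalent of the CreateThread call):
 *
 *   pthread_t tid;
 *   pthread_create(&tid, NULL, packetTimer, (void *)(intptr_t)packetNumber);
 *   pthread_detach(tid);
 */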

Only one write() call sends data over socket connection

First stackoverflow question! I've searched...I promise. I haven't found any answers to my predicament. I have...a severely aggravating problem to say the least. To make a very long story short, I am developing the infrastructure for a game where mobile applications (an Android app and an iOS app) communicate with a server using sockets to send data to a database. The back end server script (which I call BES, or Back End Server), is several thousand lines of code long. Essentially, it has a main method that accepts incoming connections to a socket and forks them off, and a method that reads the input from the socket and determines what to do with it. Most of the code lies in the methods that send and receive data from the database and sends it back to the mobile apps. All of them work fine, except for the newest method I have added. This method grabs a large amount of data from the database, encodes it as a JSON object, and sends it back to the mobile app, which also decodes it from the JSON object and does what it needs to do. My problem is that this data is very large, and most of the time does not make it across the socket in one data write. Thus, I added one additional data write into the socket that informs the app of the size of the JSON object it is about to receive. However, after this write happens, the next write sends empty data to the mobile app.
The odd thing is, when I remove this first write that sends the size of the JSON object, the actual sending of the JSON object works fine. It's just very unreliable and I have to hope that it sends it all in one read. To add more oddity to the situation, when I make the size of the data that the second write sends a huge number, the iOS app will read it properly, but it will have the data in the middle of an otherwise empty array.
What in the world is going on? Any insight is greatly appreciated! Below is just a basic snippet of my two write commands on the server side.
Keep in mind that EVERYWHERE else in this script the reads and writes work fine, but this is the only place where I do 2 write operations back to back.
The server script runs on an Ubuntu server, in native C using Berkeley sockets, and the iOS app is using a wrapper class called AsyncSocket.
int n;
//outputMessage contains a string that tells the mobile app how long the next message
//(returnData) will be
n = write(sock, outputMessage, sizeof(outputMessage));
if(n < 0)
//error handling is here
//returnData is a JSON encoded string (well, char[] to be exact, this is native-C)
n = write(sock, returnData, sizeof(returnData));
if(n < 0)
//error handling is here
The mobile app makes two read calls, and gets outputMessage just fine, but returnData is always just a bunch of empty data, unless I overwrite sizeof(returnData) to some hugely large number, in which case, the iOS will receive the data in the middle of an otherwise empty data object (NSData object, to be exact). It may also be important to note that the method I use on the iOS side in my AsyncSocket class reads data up to the length that it receives from the first write call. So if I tell it to read, say 10000 bytes, it will create an NSData object of that size and use it as the buffer when reading from the socket.
Any help is greatly, GREATLY appreciated. Thanks in advance everyone!
It's just very unreliable and I have to hope that it sends it all in one read.
The key to successful programming with TCP is that there is no concept of a TCP "packet" or "block" of data at the application level. The application only sees a stream of bytes, with no boundaries. When you call write() on the sending end with some data, the TCP layer may choose to slice and dice your data in any way it sees fit, including coalescing multiple blocks together.
You might write 10 bytes two times and read 5 then 15 bytes. Or maybe your receiver will see 20 bytes all at once. What you cannot do is just "hope" that some chunks of bytes you send will arrive at the other end in the same chunks.
What might be happening in your specific situation is that the two back-to-back writes are being coalesced into one, and your reading logic simply can't handle that.
Thanks for all of the feedback! I incorporated everyone's answers into the solution. I created a method that writes to the socket an iovec struct using writev instead of write. The wrapper class I'm using on the iOS side, AsyncSocket (which is fantastic, by the way...check it out here -->AsyncSocket Google Code Repo ) handles receiving an iovec just fine, and behind the scenes apparently, as it does not require any additional effort on my part for it to read all of the data correctly. The AsyncSocket class does not call my delegate method didReadData now until it receives all of the data specified in the iovec struct.
Again, thank you all! This helped greatly. Literally overnight I got responses for an issue I've been up against for a week now. I look forward to becoming more involved in the stackoverflow community!
Sample code for solution:
//returnData is the JSON encoded string I am returning
//sock is my predefined socket descriptor
#include <string.h>
#include <sys/types.h>
#include <sys/uio.h>     // for struct iovec and writev()

struct iovec iov[1];
int iovcnt;
ssize_t n;
size_t total = strlen(returnData);

iov[0].iov_base = returnData;
iov[0].iov_len  = total;
iovcnt = sizeof(iov) / sizeof(struct iovec);

n = writev(sock, iov, iovcnt);
if (n < 0)
    //error handling here
while ((size_t)n < total)
    //rebuild the iovec with the remaining data from returnData
    //(from position n to the end of the string) and call writev() again
You should really define a function write_complete that writes a buffer to a socket completely. Check the return value of write: it might be a positive number that is still smaller than the size of the buffer; in that case you need to write the remaining part of the buffer again.
Oh, and using sizeof is error-prone, too. In the above write_complete function you should therefore print the given size and compare it to what you expect.
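A minimal sketch of such a write_complete() helper over plain blocking Berkeley sockets (the name just mirrors the suggestion above):

#include <sys/types.h>
#include <unistd.h>

// Writes exactly len bytes of buf to sock, retrying on partial writes.
// Returns 0 on success, -1 on error.
int write_complete(int sock, const char *buf, size_t len)
{
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = write(sock, buf + sent, len - sent);
        if (n < 0)
            return -1;      // check errno; EINTR could be retried instead
        sent += (size_t)n;
    }
    return 0;
}

// Usage: pass the real string length, not sizeof() of a pointer/array:
//   write_complete(sock, outputMessage, strlen(outputMessage));
//   write_complete(sock, returnData, strlen(returnData));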
Ideally, on the server you want to write the header (the size) and the data atomically; I'd do that using the scatter/gather call writev(). Also, if there is any chance that multiple threads can write to the same socket concurrently, you may want to hold a mutex around the write call.
writev() will also write all the data before returning (if you are using blocking I/O).
On the client you may have to have a state machine that reads the length of the buffer and then sits in a loop reading until all the data has been received, as large buffers will be fragmented and arrive in various sized blocks.
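A sketch of that client-side read loop in plain C (AsyncSocket does the equivalent for you on iOS when you ask it to read a fixed number of bytes); the 4-byte network-order length prefix is an assumption about how the header is framed:

#include <arpa/inet.h>   // ntohl()
#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>

// Reads exactly len bytes from sock into buf; returns 0 on success, -1 on error/EOF.
static int read_exact(int sock, char *buf, size_t len)
{
    size_t got = 0;
    while (got < len) {
        ssize_t n = read(sock, buf + got, len - got);
        if (n <= 0)
            return -1;           // error or peer closed the connection
        got += (size_t)n;
    }
    return 0;
}

// State machine: first read the 4-byte length header, then the payload.
char *read_framed_message(int sock, size_t *out_len)
{
    uint32_t netlen;
    if (read_exact(sock, (char *)&netlen, sizeof(netlen)) < 0)
        return NULL;

    size_t len = ntohl(netlen);
    char *payload = malloc(len + 1);
    if (!payload || read_exact(sock, payload, len) < 0) {
        free(payload);
        return NULL;
    }
    payload[len] = '\0';         // JSON arrives as text; NUL-terminate it
    if (out_len) *out_len = len;
    return payload;
}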
