Why use a ring buffer with data from a sensor in C?

I have a sensor that sends data at 8 kHz to a microcontroller. The microcontroller parses the data and sends it to a high-level controller at 1 kHz. I have seen people use a ring buffer to collect the data from the sensor on the microcontroller and then take data out of it when it needs to be sent.
If I receive 8 samples but can only send 1, the other 7 samples in the ring buffer are useless...
I am curious why using a ring buffer is necessary/better compared to just waiting for a new sample and sending it to the higher level.
Thank you

If you are receiving data at an 8 kHz rate and forwarding it at a 1 kHz rate, you get a new sample every 125 microseconds and you forward data every 1 millisecond.
Obviously you need somewhere to store the sample that arrives every 125 microseconds and then send the accumulated data after every 1 ms. For that you need some kind of buffer mechanism to hold it. Hope this explanation helps.
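As a concrete illustration, here is a minimal single-producer/single-consumer ring buffer sketch in C. It is only a sketch under my own assumptions: the names (rb_push, rb_pop, RB_SIZE), the 16-entry power-of-two size, and the 16-bit sample type are not from the original question.

#include <stdint.h>
#include <stdbool.h>

// Hypothetical minimal ring buffer: the 8 kHz sensor ISR pushes a sample every
// 125 us, and the 1 kHz task drains whatever accumulated since its last run.
#define RB_SIZE 16u   // power of two, comfortably more than 8 samples per 1 ms period

typedef struct {
    volatile uint16_t buf[RB_SIZE];
    volatile uint32_t head;   // written only by the producer (ISR)
    volatile uint32_t tail;   // written only by the consumer (1 kHz task)
} ring_buffer_t;

static bool rb_push(ring_buffer_t *rb, uint16_t sample)
{
    if (rb->head - rb->tail >= RB_SIZE)   // full: drop or overwrite, per your policy
        return false;
    rb->buf[rb->head % RB_SIZE] = sample;
    rb->head++;
    return true;
}

static bool rb_pop(ring_buffer_t *rb, uint16_t *sample)
{
    if (rb->head == rb->tail)             // empty
        return false;
    *sample = rb->buf[rb->tail % RB_SIZE];
    rb->tail++;
    return true;
}

The point of the buffer is decoupling, not keeping every sample: the ISR never waits for the sender, and the 1 kHz task can average, decimate, or forward all 8 samples per period as it sees fit; without a buffer, any jitter in the 1 kHz task silently loses data.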

Related

How to calculate end-to-end delay in a multi-hop transmission in UnetStack

I have developed an energy-aware routing protocol; now, for performance evaluation, I want to calculate the end-to-end packet transmission delay when packets travel over a multi-hop link. I am unable to decide which timing information to use: the simulation time available in the log file (log-0.txt) or the modem's transmission time (txtime and rxtime). Please let me know how to calculate end-to-end delay in UnetStack.
The simulation time (first column in the log files below, in milliseconds) is synchronized across all simulated nodes, so you can use it to compute end-to-end delays if you log a START time at your source node and an END time at your destination node.
Example log file:
5673|INFO|org.arl.unet.sim.SimulationAgent/4#570:call|TxFrameNtf:INFORM[type:DATA txTime:2066947222]
6511|INFO|org.arl.unet.sim.SimulationAgent/3#567:call|TxFrameNtf:INFORM[type:DATA txTime:1157370743]
10919|INFO|org.arl.unet.sim.SimulationAgent/4#570:call|TxFrameNtf:INFORM[type:DATA txTime:2072193222]
In this example, node 4 (SimulationAgent/4) transmits at time 5673. Node 3 (SimulationAgent/3) then transmits at time 6511. And so on...
The txTime and rxTime are in microseconds, but are local to each node. So they can be used to get time differences for events in the same node, but cannot directly be compared across nodes.
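As a worked example of the subtraction (the values here are hypothetical START/END times in the format of the log above, not an actual source/destination pair from it):

#include <stdio.h>

int main(void)
{
    // Hypothetical values taken from the first column of the simulation log (ms):
    long start_ms = 5673;    // logged at the source node when the packet was created
    long end_ms   = 10919;   // logged at the destination node when the packet arrived

    printf("end-to-end delay = %ld ms\n", end_ms - start_ms);   // 5246 ms
    return 0;
}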

Getting rx/tx timestamps from TP-Link 722N

I am trying to measure the distance between two computers that are connected over a Wi-Fi ad-hoc link, using the time of arrival to determine the distance.
I am using a TP-Link 722N with an Atheros AR9271 chipset and the ath9k_htc driver. Is there any way to get rx/tx timestamps from the WLAN card so I can do the necessary calculations to get the distance between the computers?
Seems unrealistic, since the transmission time of even a minimum-size packet (size / rate) is far larger than the transit time.
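To put rough numbers on that (my own back-of-the-envelope figures, not from the original answer):

#include <stdio.h>

int main(void)
{
    // Back-of-the-envelope comparison: frame airtime vs. propagation time.
    // All figures are illustrative assumptions, not measured values.
    const double frame_bits = 1500.0 * 8.0;   // full-size frame
    const double rate_bps   = 54e6;           // 802.11g nominal rate
    const double distance_m = 100.0;
    const double c_mps      = 3.0e8;          // propagation speed

    double airtime_us     = frame_bits / rate_bps * 1e6;   // ~222 us
    double propagation_us = distance_m / c_mps * 1e6;      // ~0.33 us

    printf("airtime ~ %.1f us, propagation ~ %.3f us\n", airtime_us, propagation_us);
    return 0;
}

Since the signal covers only about 0.3 m per nanosecond, meaningful ranging needs timestamps with nanosecond-level resolution, far finer than the microsecond-scale timestamps commodity Wi-Fi drivers typically expose.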

Data Logger PIC16F877A

I have to design a Data Logger program using mikroC PRO to run on the EasyPIC5 board (with the PIC16F877A microcontroller). I also have to use a 2-line LCD for display.
Here is what I have been given:
The program will take measurements from Analogue Port AN0 at regular intervals, and save the raw data to
EEPROM. The user should be able to select any of 6 memory banks to store the results of a logging session,
and should be able to set the time interval between readings at 1 second, 2 seconds, 5 seconds or 10
seconds. The number of readings taken in each logging session should be set to 5, but should be alterable from a #define in the first few lines of the program. Another #define should be used to specify the total number of memory banks (set to 6).
Having quite a bit of trouble with this.
Any help would be appreciated.
EDIT
Up till now I am able to get the readings of AN0 and write them to EEPROM. BUT my question, which I stupidly missed to ask: how would I set the number of memory banks to 6, and how do I save a logging session to any of the banks?
for (i = 0; i < k; i++)           // loop over the readings
    EEPROM_Write(0x00 + i, i);    // write placeholder data to EEPROM address 0x00 + i
By changing the initial value of i and the limit k you can determine where your data is stored.
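To make the bank idea concrete, here is a rough mikroC-style sketch of how the addressing could work. The #define names, the two-byte storage of each 10-bit ADC reading, and the log_session() helper are my own assumptions, not part of the original assignment or answer.

#define NUM_BANKS             6     // total memory banks (from the assignment)
#define READINGS_PER_SESSION  5     // readings per logging session (from the assignment)
#define BYTES_PER_READING     2     // a 10-bit ADC result occupies two EEPROM bytes
#define BANK_SIZE             (READINGS_PER_SESSION * BYTES_PER_READING)

// Log one session into the chosen bank (0 .. NUM_BANKS-1).
// interval_s is the user-selected period between readings, in whole seconds.
void log_session(unsigned char bank, unsigned char interval_s) {
    unsigned int sample;
    unsigned int base;
    unsigned char i, t;

    base = bank * BANK_SIZE;                  // first EEPROM address of this bank
    for (i = 0; i < READINGS_PER_SESSION; i++) {
        sample = ADC_Read(0);                                          // read AN0
        EEPROM_Write(base + i*BYTES_PER_READING,     sample >> 8);     // high byte
        Delay_ms(20);                                                  // give the EEPROM time between writes
        EEPROM_Write(base + i*BYTES_PER_READING + 1, sample & 0xFF);   // low byte
        for (t = 0; t < interval_s; t++)                               // wait interval_s seconds
            Delay_ms(1000);
    }
}

With 6 banks of 10 bytes each this uses only 60 of the PIC16F877A's 256 EEPROM bytes, so the layout fits comfortably; the bank number chosen by the user simply scales the base address.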

Socket: measure data transfer rate (in bytes/second) between 2 applications

I have an application that keeps emitting data to a second application (consumer application) over a TCP socket. How can I calculate the total time needed from when the data is sent by the first application until the data is received by the second application? Both applications are written in C/C++.
My current approach is as follows (in pseudocode):
struct packet {
    long sent_time;
    char* data;
};
FIRST APP (EMITTER) :
packet p = new packet();
p.data = initialize data (either from file or hard coded)
p.sent_time = get current time (using gettimeofday function)
//send the packet struct (containing sent time and packet data)
send (sockfd, p, ...);
SECOND APP (CONSUMER)
packet p = new packet();
nbytes = recv (sockfd, p, .....); // get the packet struct (which contains the sent time and data)
receive_time = get current time
data transfer time = receive_time - p.sent_time (assume I have converted this to seconds)
data transfer rate = nbytes / data transfer time; // in bytes per second
However, the problem with this is that the local clocks of the 2 applications (emitter and consumer) are not the same, because they run on different computers, which makes the result completely useless.
Is there a better way to do this properly (programmatically) and get as accurate a data transfer rate as possible?
If your protocol allows it, you could send back an acknowledgement from the server for the received packet. This is also a must if you want to be sure that the server received/processed the data.
If you have that, you can simply calculate the rate on the client. Just subtract the RTT from the length of the send+ACK interval and you'll have a quite accurate measurement.
Alternatively you can use a time synchronization tool like NTP to synchronize the clocks on the two servers.
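A minimal client-side sketch of that idea, assuming a connected blocking TCP socket, a 1-byte application-level ACK, and a separately measured rtt_sec; the function name, the ACK format, and the lack of error handling are my own assumptions:

#include <stdio.h>
#include <sys/socket.h>
#include <sys/time.h>

// Send one buffer, wait for the peer's 1-byte ACK, and estimate bytes/second.
// rtt_sec is the round-trip time measured separately (e.g. with a tiny ping
// message exchanged before the transfer).
double measure_rate(int sockfd, const char *buf, size_t len, double rtt_sec)
{
    struct timeval start, end;
    char ack;
    double elapsed, transfer;

    gettimeofday(&start, NULL);
    send(sockfd, buf, len, 0);              // push the payload
    recv(sockfd, &ack, 1, MSG_WAITALL);     // block until the ACK arrives
    gettimeofday(&end, NULL);

    elapsed  = (end.tv_sec - start.tv_sec) + (end.tv_usec - start.tv_usec) / 1e6;
    transfer = elapsed - rtt_sec;           // remove the round-trip overhead

    return transfer > 0 ? (double)len / transfer : 0.0;
}

In practice you would repeat the measurement and average, since a single sample is dominated by scheduling and queueing noise.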
First of all: even if your clocks were in sync, you would be calculating latency, not throughput. On every network connection, chances are that there is more than one packet en route at a given point in time, rendering your single-packet approach useless for throughput measurement.
E.g. compare the ping time from your mobile to an HTTP server with the maximum download speed: the ping time will be tens of ms and the packet size about 1.5 KByte, which would imply a much lower maximum throughput than what you actually observe when downloading.
If you want to measure real throughput, use a blocking socket on the sender side and send e.g. 1 million packets as fast as the system will allow you; on the receiving side, measure the time between the arrival of the first packet and the arrival of the last packet.
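A receiver-side sketch of that throughput measurement, assuming a connected TCP socket and a total byte count agreed on in advance; TOTAL_BYTES, the buffer size, and the use of CLOCK_MONOTONIC are illustrative assumptions:

#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <time.h>

#define TOTAL_BYTES (100u * 1024u * 1024u)   // how much the sender will push

// Receive TOTAL_BYTES and report throughput from the first byte to the last.
void measure_throughput(int sockfd)
{
    char buf[64 * 1024];
    size_t received = 0;
    struct timespec first, last;
    int have_first = 0;
    double secs;

    while (received < TOTAL_BYTES) {
        ssize_t n = recv(sockfd, buf, sizeof buf, 0);
        if (n <= 0)
            break;                                   // error or peer closed
        if (!have_first) {
            clock_gettime(CLOCK_MONOTONIC, &first);  // first data arrived
            have_first = 1;
        }
        received += (size_t)n;
        clock_gettime(CLOCK_MONOTONIC, &last);       // remember the latest arrival
    }

    if (have_first) {
        secs = (last.tv_sec - first.tv_sec) + (last.tv_nsec - first.tv_nsec) / 1e9;
        if (secs > 0)
            printf("throughput: %.1f MB/s\n", received / secs / 1e6);
    }
}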
If, on the other hand, you want to accurately measure latency, use
struct packet {
    long sent_time;
    long reflect_time;
    char* data;
};
and have the server reflect the packet. On the client side, check all three timestamps, then reverse roles to get a grip on asymmetric latencies.
Edit: What I meant is: the reflect time will be from the "other" clock, so when running the test back and forth you will be able to filter out the offset.

Is there an optimal byte size for sending data over a network?

I assume 100 bytes is too small and can slow down larger file transfers with all of the writes, but something like 1MB seems like it may be too much. Does anyone have any suggestions for an optimal chunk of bytes per write for sending data over a network?
To elaborate a bit more, I'm implementing something that sends data over a network connection and shows the progress of that data being sent. I've noticed that if I send large files at about 100 bytes per write, it is extremely slow but the progress bar works out very nicely. However, if I send at, say, 1M per write, it is much faster, but the progress bar doesn't work as nicely due to the larger chunks being sent.
No, there is no universal optimal byte size.
TCP packets are subject to fragmentation, and while it would be nice to assume that everything from here to your destination is true Ethernet with huge packet sizes, the reality is that even if you could determine the packet sizes of all the individual networks one of your packets traverses, each packet you send out may take a different path through the internet.
It's not a problem you can "solve" and there's no universal ideal size.
Feed the data to the OS and TCP/IP stack as quickly as you can, and it'll dynamically adapt the packet size to the network connection (you should see the code they use for this optimization - it's really, really interesting, at least on the better stacks).
If you control all the networks and stacks in between your clients/servers, though, then you can do some hand tuning. But generally, even then, you'd have to have a really good grasp of the network and the data you're sending before I'd suggest you attempt it.
-Adam
If you can, just let the IP stack handle it; most OSes have a lot of optimization already built in. Vista, for example, will dynamically alter various parameters to maximize throughput; second-guessing the algorithm is unlikely to be beneficial.
This is especially true in higher-level languages, far from the actual wire, like C#; there are enough layers between you and the actual TCP/IP packets that I would expect your code to have relatively little impact on throughput.
At worst, test various message sizes in various situations for yourself; few solutions are one-size-fits-all.
If you are using TCP/IP over Ethernet, the maximum packet size is about 1500 bytes. If you try to send more than that at once, the data will be split up into multiple packets before being sent out on the wire. If the data in your application is already packetized, then you might want to choose a packet size of just under 1500 so that when you send a full packet, the underlying stack doesn't have to break it up. For example, if every send you do is 1600 bytes, the TCP stack will have to send out two packets for each send, with the second packet being mostly empty. This is rather inefficient.
Having said that, I don't know how much of a visible impact this will have on performance.
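As a rough illustration of the arithmetic in that answer (assuming a standard 1500-byte Ethernet MTU and minimal IPv4/TCP headers with no options):

#include <stdio.h>

int main(void)
{
    // Typical Ethernet MTU and minimal header sizes; real links and options may differ.
    const int mtu        = 1500;   // Ethernet payload limit
    const int ip_header  = 20;     // IPv4 header without options
    const int tcp_header = 20;     // TCP header without options
    const int mss        = mtu - ip_header - tcp_header;   // 1460 bytes of payload

    printf("payload per full-size segment: %d bytes\n", mss);
    printf("a 1600-byte send needs %d segments\n", (1600 + mss - 1) / mss);   // 2
    return 0;
}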
Make a function named CalcChunkSize
Add some private variables to your class:
Private PreferredTransferDuration As Integer = 1800 ' milliseconds, the timespan the class will attempt to achieve for each chunk, to give responsive feedback on the progress bar.
Private ChunkSizeSampleInterval As Integer = 15 ' interval to update the chunk size, used in conjunction with AutoSetChunkSize.
Private ChunkSize As Integer = 16 * 1024 ' 16k by default
Private StartTime As DateTime
Private MaxRequestLength As Long = 4096 ' default, this is updated so that the transfer class knows how much the server will accept
Before every download of a chunk, check whether it's time to calculate a new chunk size, using ChunkSizeSampleInterval:
Dim currentIntervalMod As Integer = numIterations Mod Me.ChunkSizeSampleInterval
If currentIntervalMod = 0 Then
Me.StartTime = DateTime.Now
ElseIf currentIntervalMod = 1 Then
Me.CalcChunkSize()
End If
numIterations is set to 0 outside the download loop and incremented (numIterations += 1) after every downloaded chunk.
Have CalcChunkSize do this:
Protected Sub CalcChunkSize()
' chunk size calculation is defined as follows
' * in the examples below, the preferred transfer time is 1500ms, taking one sample.
' *
' * Example 1 Example 2
' * Initial size = 16384 bytes (16k) 16384
' * Transfer time for 1 chunk = 800ms 2000 ms
' * Average throughput / ms = 16384b / 800ms = 20.48 b/ms 16384 / 2000 = 8.192 b/ms
' * How many bytes in 1500ms? = 20.48 * 1500 = 30720 bytes 8.192 * 1500 = 12288 bytes
' * New chunksize = 30720 bytes (speed up) 12288 bytes (slow down from original chunk size)
'
Dim transferTime As Double = DateTime.Now.Subtract(Me.StartTime).TotalMilliseconds
Dim averageBytesPerMilliSec As Double = Me.ChunkSize / transferTime
Dim preferredChunkSize As Double = averageBytesPerMilliSec * Me.PreferredTransferDuration
Me.ChunkSize = CInt(Math.Min(Me.MaxRequestLength, Math.Max(4 * 1024, preferredChunkSize)))
' set the chunk size so that it takes 1500ms per chunk (estimate), not less than 4Kb and not greater than 4mb // (note 4096Kb sometimes causes problems, probably due to the IIS max request size limit, choosing a slightly smaller max size of 4 million bytes seems to work nicely)
End Sub
Then just use ChunkSize when requesting the next chunk.
I found this in "Sending files in chunks with MTOM web services and .Net 2.0" by Tim_mackey and have found it very useful for dynamically calculating the most effective chunk size.
The full source code is here: http://www.codeproject.com/KB/XML/MTOMWebServices.aspx
And the author is here: http://www.codeproject.com/script/Membership/Profiles.aspx?mid=321767
I believe your problem is that you use blocking sockets and not non-blocking ones.
When you use blocking sockets and you send 1M of data, the network stack may wait for all of that data to be placed in its buffers; if the buffers are full, you'll be blocked, and your progress bar will wait for the whole 1M to be accepted into the buffers. This may take a while and will make the progress bar jumpy.
If, however, you use non-blocking sockets, no buffer size will block you, and you will need to do the waiting yourself with select/poll/epoll/whatever works on your platform (select is the most portable, though). This way your progress bar updates quickly and reflects the most accurate information.
Do note that at the sender the progress bar is partially broken anyway, since the kernel will buffer some data and you will reach 100% before the other side has really received the data. The only way around this is if your protocol includes a reply indicating the amount of data received by the receiver.
As others said, second-guessing the OS and the network is mostly futile. If you keep using blocking sockets, pick a size large enough to hold more data than a single packet, so that you don't send too little data per packet and reduce your throughput needlessly. I'd go with something like 4K to include at least two packets at a time.
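For what the non-blocking approach looks like in practice, here is a minimal sketch in C, assuming a connected TCP socket on a POSIX system; the progress_cb callback and the 4K chunk cap are illustrative assumptions:

#include <errno.h>
#include <fcntl.h>
#include <stddef.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/types.h>

// Send len bytes from buf on a non-blocking socket, reporting progress after
// every successful write so the UI can update smoothly.
int send_with_progress(int sockfd, const char *buf, size_t len,
                       void (*progress_cb)(size_t sent, size_t total))
{
    size_t sent = 0;

    fcntl(sockfd, F_SETFL, fcntl(sockfd, F_GETFL, 0) | O_NONBLOCK);

    while (sent < len) {
        fd_set wfds;
        size_t chunk;
        ssize_t n;

        FD_ZERO(&wfds);
        FD_SET(sockfd, &wfds);
        if (select(sockfd + 1, NULL, &wfds, NULL, NULL) <= 0)
            return -1;                        // select error

        chunk = len - sent;
        if (chunk > 4096)                     // cap writes at roughly two full packets
            chunk = 4096;

        n = send(sockfd, buf + sent, chunk, 0);
        if (n < 0) {
            if (errno == EAGAIN || errno == EWOULDBLOCK)
                continue;                     // buffers filled up again, retry
            return -1;                        // real error
        }
        sent += (size_t)n;
        progress_cb(sent, len);               // kernel accepted n more bytes
    }
    return 0;
}

Note that, as pointed out above, even this only tracks how fast the kernel accepts data; true end-to-end progress still needs an application-level acknowledgement from the receiver.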
The one thing I will add is that for a given Ethernet connection it takes about as long to send a small packet as a large one. As others have said: if you're just sending a stream of data, let the system handle it. But if you're worried about individual short messages back and forth, a typical Ethernet packet is about 1500 bytes; as long as you keep it under that you should be good.
You'd need to use Path MTU Discovery, or use a good default value (i.e. less than 1500 bytes).
One empirical test you can do, if you haven't already, is of course to use a sniffer (tcpdump, Wireshark etc.) and look at what packet sizes are achieved when using other software for up/downloading. That might give you a hint.
Here is the formula you need:
int optimalChunkSize = totalDataSize / progressBar1.Width;
Using this, each chunk you send will increment the progress bar by 1 pixel. A smaller chunk size than this is pointless, in terms of user feedback.
