I am trying to measure the distance between two computers connected over a Wi-Fi ad-hoc network, using time of arrival to determine the distance.
I am using a TP-Link 722N with an Atheros AR9271 chipset and the ath9k_htc driver. Is there any way to get RX/TX timestamps from the WLAN card so I can do the necessary calculations to get the distance between the computers?
Seems unrealistic, since the minimum packet duration (packet size / data rate) is much greater than the transit time.
I am working on the ESP32 microcontroller and I would like to implement the iBeacon advertising feature. I have been reading about iBeacon and have learnt about the specific format that the iBeacon packet uses:
https://os.mbed.com/blog/entry/BLE-Beacons-URIBeacon-AltBeacons-iBeacon/
From what I understand, the iBeacon prefix is fixed and not meant to be modified. I must set a custom UUID, major and minor numbers, such as:
uint8_t Beacon_UUID[16] = {0x00,0x11,0x22,0x33,0x44,0x55,0x66,0x77,0x88,0x99,0xAA,0xBB,0xCC,0xDD,0xEE,0xFF};
uint8_t Beacon_MAJOR[2] = {0x12,0x34};
uint8_t Beacon_MINOR[2] = {0x56,0x78};
The only thing that I am confused about is the TX Power byte. What should I set it to?
According to the website I referred to above:
A scanning application reads the UUID, major number and minor number and references them against a database to get information about the beacon; the beacon itself carries no descriptive information - it requires this external database to be useful. The TX power field is used with the measured signal strength to determine how far away the beacon is from the smart phone. Please note that TxPower must be calibrated on a beacon-by-beacon basis by the user to be accurate.
It mentions what TxPower is and how it should be determined, but I still cannot make any sense of it. Why would I need to measure how far away the beacon is from the smartphone? That should be done by the iBeacon scanner, not the advertiser (me).
When you are making a hardware device transmit iBeacon advertisements, it is your responsibility to measure the output power of the transmitter and put the corresponding value into the TxPower byte of the iBeacon transmission.
Why? Because receiving applications that detect your beacon need to know how strong your transmitter is in order to estimate distance. Otherwise there would be no way for the receiving application to tell whether a medium signal level like -75 dBm comes from a nearby weak transmitter or a faraway strong transmitter.
The basic procedure is to put a receiver exactly one meter away from your transmitter and measure the RSSI at that distance. The one meter RSSI is what you put into TxPower byte of the iBeacon advertisement.
The specifics of how to measure this properly can be a bit tricky, because every receiver has a different sensitivity, meaning it will read a higher or lower RSSI depending on its antenna gain. When Apple came out with iBeacon several years ago, they declared the reference receiver to be an iPhone 4S -- the newest phone available at that time. You would run a beacon detection app like AirLocate (not available in the App Store) or my Beacon Locate (available in the App Store). The basic procedure is to aim the back of the phone at the beacon when it is exactly one meter away and use the app to measure the RSSI. Many detector apps have a "calibrate" feature which averages RSSI measurements over 30 seconds or so. For best results when calibrating, keep both transmitter and receiver at least 3 feet above the ground and minimize nearby metal or dense walls. Ideally, you would do this outdoors using two plastic tripods (or do the same inside an antenna chamber).
It is hard to find a reference iPhone 4S these days, and other iPhone models can measure surprisingly different RSSI values. My tests show that an iPhone SE 2nd generation measures signals very similarly to an iPhone 4S. But even these models are no longer made. If you cannot get one of these, use the oldest iPhone you can get without a case and take the best measurement you can as described above. Obviously an ideal measurement requires more effort -- you have to decide how much effort you are willing to put into this. An ideal measurement only matters if you expect receiving applications to want the best distance estimates possible.
I have a sensor that sends data at 8 kHz to a microcontroller. The microcontroller parses the data and sends it to the high-level controller at 1 kHz. I have seen people use a ring buffer to collect the data from the sensor on the microcontroller and then take the data out to send when needed.
If I receive 8 samples but can only send 1, the other 7 samples in the ring buffer are useless...
I am curious why a ring buffer is necessary/better compared to just waiting for new data and sending it to the higher level?
Thank you
If you are receiving data at an 8 kHz rate and forwarding it at a 1 kHz rate, you get new data every 125 microseconds and you forward it every 1 millisecond.
Obviously you will need something to store the data arriving every 125 microseconds and send the accumulated data every 1 ms. For that you need some kind of buffer mechanism. Hope this explanation helped.
I have developed an energy-aware routing protocol. For performance evaluation, I want to calculate the end-to-end packet transmission delay when packets travel through a multi-hop link. I am unable to decide which timing information to use: the simulation time available in the log file (log-0.txt) or the modem's transmission time (txtime and rxtime). Please let me know the method to calculate end-to-end delay in UnetStack.
The simulation time (first column in the log files below, in milliseconds) is synchronized across all simulated nodes, so you can use it to compute end-to-end delays if you log a START time at your source node and an END time at your destination node.
Example log file:
5673|INFO|org.arl.unet.sim.SimulationAgent/4#570:call|TxFrameNtf:INFORM[type:DATA txTime:2066947222]
6511|INFO|org.arl.unet.sim.SimulationAgent/3#567:call|TxFrameNtf:INFORM[type:DATA txTime:1157370743]
10919|INFO|org.arl.unet.sim.SimulationAgent/4#570:call|TxFrameNtf:INFORM[type:DATA txTime:2072193222]
In this example, node 4 (SimulationAgent/4) transmits at time 5673. Node 3 (SimulationAgent/3) then transmits at time 6511. And so on...
The txTime and rxTime are in microseconds, but are local to each node. So they can be used to get time differences for events in the same node, but cannot directly be compared across nodes.
I am making a finger plethysmograph (FP) using an LED and a receiver. The sensor produces an analog pulse waveform that is filtered, amplified and fed into a microcontroller input with a range of 0-3.3 V. This signal is converted into its digital form.
Sampling rate is 8 MHz, processor frequency is 26 MHz, precision is 10 or 8 bits.
I am having problems coming up with a robust method for peak detection. I want to be able to detect heart pulses from the finger plethysmograph. I have managed to produce an accurate measurement of heart rate using a threshold method. However, the FP is extremely sensitive to movement, and the offset of the signal can change when the subject moves; the peaks of the signal still show up, but with a varying voltage offset.
Therefore, I am proposing a peak detection method that uses the slope to detect peaks. For example, if a peak is produced, the slope before and after the maximum point will be positive and negative, respectively.
How feasible do you think this method is? Is there an easier way to perform peak detection using a microcontroller?
You can still get false peak detections when the device is moved. This will be present whether you are timing average peak duration or applying an FFT (fast Fourier transform).
With an FFT you should be able to ignore peaks outside the range of frequencies you are considering (i.e. those < 30 bpm and > 300 bpm, say).
As Kenny suggests, 8 MHz might overwhelm a 26 MHz chip. Any particular reason for such a high sampling rate?
Like some of the comments, I would also recommend lowering your sample rate since you only care about pulse (i.e. heart rate) for now. So, assuming you're going to be looking at resting heart rate, you'll be in the sub-1Hz to 2Hz range (60 BPM = 1Hz), depending on subject health, age, etc.
In order to isolate the frequency range of interest, I would also recommend a simple, low-order digital filter. If you have access to Matlab, you can play around with Digital Filter Design using its Filter Design and Analysis Tool (Introduction to the FDATool). As you'll find out, Digital Filtering (wiki) is not computationally expensive since it is a matter of multiplication and addition.
To answer the detection part of your question, YES, it is certainly feasible to implement peak detection on the plethysmograph waveform within a microcontroller. Taking your example, a slope-based peak detection algorithm would operate on your waveform data, searching for changes in slope, essentially where the slope waveform crosses zero.
Here are a few other things to consider about your application:
Calculating slope can have a "spread" (i.e. do you find the slope between adjacent samples, or samples which are a few samples apart?)
What if your peak detection algorithm locates peaks that are too close together, or too far apart, in a physiological sense?
A Pulse Oximeter (wiki) often utilizes LEDs which emit Red and Infrared light. How does the frequency of the LED affect the plethysmograph? (HINT: It may not be significant, but I believe you'll find one wavelength to yield greater amplitudes in your frequency range of interest.)
Of course you'll find a variety of potential algorithms if you do a literature search but I think slope-based detection is great for its simplicity. Hope it helps.
If you can detect the period using zero crossings, even at 10x oversampling of 10 Hz, you can use a line fit of the quick-and-dirty edge to find the exact period, then subtract the new wave's samples in that period from the previous wave's to get the DC offset. The period measurement will have the precision of your sample rate. Doing operations on the time- and amplitude-normalized data will be much easier.
This idea is computationally light compared to FFT, which still needs additional data processing.
I have an application that keeps emitting data to a second (consumer) application over a TCP socket. How can I calculate the total time from when the data is sent by the first application until it is received by the second? Both applications are written in C/C++.
My current approach is as follow (in pseudocode):
struct packet{
long sent_time;
char* data;
}
FIRST APP (EMITTER) :
packet p = new packet();
p.data = initialize data (either from file or hard coded)
p.sent_time = get current time (using gettimeofday function)
//send the packet struct (containing sent time and packet data)
send (sockfd, p, ...);
SECOND APP (CONSUMER)
packet p = new packet();
nbytes = recv (sockfd, p, .....); // get the packet struct (which contains the sent time and data)
receive_time = get current time
data transfer time = receive_time - p.sent_time (assume I have converted this to seconds)
data transfer rate = nbytes / data transfer time; // in bytes per second
However, the problem with this is that the local clocks of the two applications (emitter and consumer) are not the same, because they run on different computers, which makes the result completely useless.
Is there any other better way to do this in a proper way (programmatically), and to get as accurate data transfer rate as possible?
If your protocol allows it, you could send back an acknowledgement from the server for the received packet. This is also a must if you want to be sure that the server received/processed the data.
If you have that, you can simply calculate the rate on the client. Just subtract the RTT from the length of the send+ACK interval and you'll have a quite accurate measurement.
Alternatively, you can use a time synchronization tool like NTP to synchronize the clocks on the two servers.
First of all: even if your clocks were in sync, you would be calculating latency, not throughput. On every network connection, chances are that there is more than one packet en route at a given point in time, rendering your single-packet approach useless for throughput measurement.
E.g. compare the ping time from your mobile to an HTTP server with the max download speed: ping time will be tens of ms, packet size will be ca. 1.5 KByte, which would suggest a much lower max throughput than observed when downloading.
If you want to measure real throughput, use a blocking socket on the sender side and send e.g. 1 million packets as fast as the system allows; on the receiving side, measure the time between the arrival of the first packet and the arrival of the last packet.
If OTOH you want to accurately measure latency, use
struct packet{
long sent_time;
long reflect_time;
char* data;
}
and have the server reflect the packet. On the client side check all three timestamps, then reverse roles to get a grip on asymmetric latencies.
Edit: I meant: The reflect time will be the "other" clock, so when running the test back and forth you will be able to filter out the offset.