I am working on the ESP32 microcontroller and I would like to implement the iBeacon advertising feature. I have been reading about iBeacon and have learnt about the specific format that the iBeacon packet uses:
https://os.mbed.com/blog/entry/BLE-Beacons-URIBeacon-AltBeacons-iBeacon/
From what I understand, the iBeacon prefix is preset and not meant to be modified; I only need to set a custom UUID, major and minor number, such as:
uint8_t Beacon_UUID[16] = {0x00,0x11,0x22,0x33,0x44,0x55,0x66,0x77,0x88,0x99,0xAA,0xBB,0xCC,0xDD,0xEE,0xFF};
uint8_t Beacon_MAJOR[2] = {0x12,0x34};
uint8_t Beacon_MINOR[2] = {0x56,0x78};
The only thing that I am confused about is the TX Power byte. What should I set it to?
According to the website that I have referred above:
A scanning application reads the UUID, major number and minor number and references them against a database to get information about the beacon; the beacon itself carries no descriptive information - it requires this external database to be useful. The TX power field is used with the measured signal strength to determine how far away the beacon is from the smart phone. Please note that TxPower must be calibrated on a beacon-by-beacon basis by the user to be accurate.
It mentions what TxPower is and how it should be determined, but I still cannot make any sense of it. Why would I need to measure how far away the beacon is from the smartphone? That should be done by the iBeacon scanner, not the advertiser (me).
When you are making a hardware device transmit iBeacon, it is your responsibility to measure the output power of the transmitter and put the corresponding value into the TxPower byte of the iBeacon transmission.
Why? Because receiving applications that detect your beacon need to know how strong your transmitter is to estimate distance. Otherwise there would be no way for the receiving application to tell whether a medium signal level like -75 dBm comes from a nearby weak transmitter or a faraway strong transmitter.
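For illustration, scanner apps commonly turn the received RSSI and the advertised TxPower into a rough distance with a log-distance path-loss model. This is a receiver-side convention, not something the iBeacon format itself defines, and the path-loss exponent n below is an assumption (roughly 2 in free space):

#include <math.h>

/* Rough receiver-side distance estimate from the log-distance path-loss model.
 * tx_power: calibrated RSSI at 1 m carried in the iBeacon packet (dBm)
 * rssi:     RSSI actually measured by the phone (dBm)
 * n:        path-loss exponent, about 2.0 in free space (environment-dependent)
 */
static double estimate_distance_m(int tx_power, int rssi, double n)
{
    return pow(10.0, (double)(tx_power - rssi) / (10.0 * n));
}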
The basic procedure is to put a receiver exactly one meter away from your transmitter and measure the RSSI at that distance. The one-meter RSSI is what you put into the TxPower byte of the iBeacon advertisement.
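To make concrete where that byte lives, here is a sketch of the full advertisement payload using the UUID, major and minor from the question, assuming the standard iBeacon layout inside a manufacturer-specific AD structure. The final byte is the calibrated one-meter RSSI as a signed two's complement value; -59 dBm (0xC5) is only a placeholder for your own measurement:

uint8_t adv_raw_data[30] = {
    0x02, 0x01, 0x06,             // Flags: LE General Discoverable, BR/EDR not supported
    0x1A, 0xFF,                   // Length 26, Manufacturer Specific Data
    0x4C, 0x00,                   // Company ID: Apple (little-endian)
    0x02, 0x15,                   // iBeacon type and remaining length (21 bytes)
    0x00, 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77,   // Beacon_UUID
    0x88, 0x99, 0xAA, 0xBB, 0xCC, 0xDD, 0xEE, 0xFF,
    0x12, 0x34,                   // Beacon_MAJOR (big-endian)
    0x56, 0x78,                   // Beacon_MINOR (big-endian)
    0xC5                          // TxPower / measured power: -59 dBm placeholder, calibrate this
};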
The specifics of how to measure this properly can be a bit tricky, because every receiver has a different sensitivity, meaning it will read a higher or lower RSSI depending on its antenna gain. When Apple came out with iBeacon several years ago, they declared the reference receiver to be an iPhone 4S -- the newest phone available at that time. You would run a beacon detection app like AirLocate (not available in the App Store) or my Beacon Locate (available in the App Store). The basic procedure is to aim the back of the phone at the beacon when it is exactly one meter away and use the app to measure the RSSI. Many detector apps have a "calibrate" feature which averages RSSI measurements over 30 seconds or so. For best results when calibrating, do this with both transmitter and receiver at least 3 feet above the ground and minimize metal or dense walls nearby. Ideally, you would do this outdoors using two plastic tripods (or do the same inside an antenna test chamber).
It is hard to find a reference iPhone 4S these days, and other iPhone models can measure surprisingly different RSSI values. My tests show that an iPhone SE 2nd generation measures signals very similarly to an iPhone 4S. But even these models are not made anymore. If you cannot get one of these, use the oldest iPhone you can get without a case and take the best measurement you can as described above. Obviously an ideal measurement requires more effort -- you have to decide how much effort you are willing to put into this. An ideal measurement is only important if you expect receiving applications to want the best distance estimates possible.
I want to measure the time taken to perform any kind of operation inside apps such as Creo Parametric 5.0, Adobe Premiere Pro, Maya, Adobe Creative, Lightroom CC or any other design app.
The idea is to measure the performance (time taken per operation) to catch performance issues.
When you create your library of actions, you can create a decorator that logs and times each action, so you can monitor what's going on.
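The decorator idea maps most directly to languages like Python; in C the same wrap-and-time pattern can be sketched with a function pointer. The action signature below is a made-up placeholder, not anything from a specific app's API:

#include <stdio.h>
#include <time.h>

typedef void (*action_fn)(void);   /* hypothetical "action" signature */

/* Run an action, then log its name and how long it took (wall-clock time). */
static void run_timed(const char *name, action_fn action)
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    action();
    clock_gettime(CLOCK_MONOTONIC, &t1);
    double ms = (t1.tv_sec - t0.tv_sec) * 1000.0 +
                (t1.tv_nsec - t0.tv_nsec) / 1e6;
    printf("%s took %.1f ms\n", name, ms);
}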
A method which is hacky and time consuming but generic (even for software that doesn't provide (good) scripting) could be to record your screen at high speed (such as 60 fps). Then you look at the frames and count how many elapse between giving the order (click, key press) and seeing the result (updated display).
The precision will be on the order of 1 / recording frequency (about 16 ms when recording at 60 fps). A drawback is that you are likely to measure more than just the operation you are interested in; for instance, if you want to benchmark the loading of a file, you will also measure the time it took to render it afterwards (which may/should be negligible).
I was able to apply this method using https://github.com/SerpentAI/D3DShot (increase the frame buffer size, which by default holds 1 second). Note that the frame numbers go backward in time when exporting to files.
It may be possible to make this method less hacky by using computer vision algorithms to avoid having to count frames manually.
I'm referring mostly to this paper here: http://clgiles.ist.psu.edu/papers/UMD-CS-TR-3617.what.size.neural.net.to.use.pdf
Current Setup:
I'm currently trying to port the neural-genetic AI solution that I have lying around into a multi-purpose multi-agent tool. So, for example, it should work as an AI in a game engine for moving entities around and letting them shoot and destroy the enemy (so e.g. 4 inputs like distance x, y and angle x, y, and 2 outputs like accelerate left, right).
The state so far is that I'm using the same number of genomes as there are agents to determine the fittest agents. The fittest 20% of agents are combined with each other (zz, zw genomes selected) and each pairing creates 2 babies for the new population. The rest of the new population per generation is selected randomly from the old population, including pairings of the fittest with an unfit genome.
That works pretty well to prime the AI; after generation 50-100 it is pretty much unbeatable by a human in a Breakout clone and a little tank game where you can shoot and move around.
As I had the idea to use one evolution population for each "type of agent", the question is now whether it is possible to determine the number of hidden layers and the number of neurons per hidden layer generically.
My setup for the tank game is 4 inputs, 3 outputs and 1 hidden layer with 12 neurons, which worked best (around 50 generations to become really strong).
My setup for the Breakout game is 6 inputs, 2 outputs and 2 hidden layers with 12 neurons, which seems to work best.
Done Research:
So, back to the paper: on page 32 you can see that more neurons per hidden layer of course need more time for priming, but the more neurons there are in between, the better the chances of fitting the target function without noise.
I currently prime my AI only by increasing fitness when an attempt is better than the last try.
So in the tank game that means it successfully shot the other tank (wounding it 4 times is better, since then the enemy is dead) and won the round.
In the Breakout game it's similar: I have a paddle that the AI can move around, and it can collect points. "Getting shot", or negative treatment here, is failing to catch the ball. So the potentially noisy mapping is 2 output values (move left, move right) that depend on 4 input values (ball x, y, degx, degy).
Questions:
So, what kind of calculation for the number of hidden layers and the number of neurons do you think would be a good tradeoff, so that noise doesn't kill the genome evolution?
What is the minimum number of agents before you can say "it evolves further"? My current training setup always has around 50 agents learning in parallel (so they basically simulate 50 games in parallel "behind the scenes").
In sum, for most problems, one could probably get decent performance (even without a second optimization step) by setting the hidden layer configuration using just two rules: (i) number of hidden layers equals one; and (ii) the number of neurons in that layer is the mean of the neurons in the input and output layers.
-doug
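Applied to the setups described earlier, that rule of thumb is simple arithmetic; a tiny sketch (purely illustrative, with the results compared against the hand-tuned 12-neuron layers mentioned above):

/* Rule-of-thumb starting point: one hidden layer whose size is the
 * (rounded) mean of the input and output layer sizes. */
static int hidden_neurons(int n_inputs, int n_outputs)
{
    return (n_inputs + n_outputs + 1) / 2;   /* rounded mean */
}

/* Tank game:     hidden_neurons(4, 3) -> 4   (vs. 12 found by hand) */
/* Breakout game: hidden_neurons(6, 2) -> 4   (vs. 12 found by hand) */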
In short: it's an ongoing area of research. Most ANNs that I know of which use numerous neurons and hidden layers don't set a static number of either; instead they use algorithms to continuously modify these values, usually constructing and destroying neurons as outputs converge/diverge.
Since it sounds like you're already using some evolutionary computing, consider looking into Andrew Turner's work on CGPANN, I remember it getting pretty decent improvements on benchmarks similar to your work.
I am making a finger plethysmograph (FP) using an LED and a receiver. The sensor produces an analog pulse waveform that is filtered, amplified and fed into a microcontroller input with a range of 0-3.3 V. This signal is converted into its digital form.
Sampling rate is 8 MHz, processor frequency is 26 MHz, precision is 10 or 8 bits.
I am having problems coming up with a robust method for peak detection. I want to be able to detect heart pulses from the finger plethysmograph. I have managed to produce an accurate measurement of heart rate using a threshold method. However, the FP is extremely sensitive to movement, and the offset of the signal can change with movement; the peaks of the signal still show up, but with a varying voltage offset.
Therefore, I am proposing a peak detection method that uses the slope to detect peaks. For example, if a peak is produced, the slope before and after the maximum point will be positive and negative respectively.
How feasible do you think this method is? Is there an easier way to perform peak detection using a microcontroller?
You can still get false peak detections when the device is moved. This will be present whether you are timing average peak duration or applying an FFT (fast Fourier transform).
With an FFT you should be able to ignore peaks outside the range of frequencies you are considering (i.e. those below 30 bpm or above 300 bpm, say).
As Kenny suggests, 8MHz might overwhelm a 26MHz chip. Any particular reason for such a high sampling rate?
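If you do go the FFT route, restricting attention to that band is just a matter of mapping bpm to FFT bin indices; a small sketch, with the sample rate and FFT length as placeholders for your own values:

/* Map a heart-rate band in bpm to FFT bin indices.
 * fs: sample rate in Hz, n: FFT length. Bin k covers frequency k * fs / n. */
static void bpm_band_to_bins(double fs, int n, double bpm_lo, double bpm_hi,
                             int *bin_lo, int *bin_hi)
{
    double f_lo = bpm_lo / 60.0;            /* 30 bpm  -> 0.5 Hz */
    double f_hi = bpm_hi / 60.0;            /* 300 bpm -> 5 Hz   */
    *bin_lo = (int)(f_lo * n / fs);
    *bin_hi = (int)(f_hi * n / fs + 0.5);
}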
Like some of the comments, I would also recommend lowering your sample rate since you only care about pulse (i.e. heart rate) for now. So, assuming you're going to be looking at resting heart rate, you'll be in the sub-1Hz to 2Hz range (60 BPM = 1Hz), depending on subject health, age, etc.
In order to isolate the frequency range of interest, I would also recommend a simple, low-order digital filter. If you have access to Matlab, you can play around with Digital Filter Design using its Filter Design and Analysis Tool (Introduction to the FDATool). As you'll find out, Digital Filtering (wiki) is not computationally expensive since it is a matter of multiplication and addition.
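As one minimal example of such a filter (not an FDATool design, just a common single-pole form, with the cutoff as a placeholder you would choose for your band of interest):

#include <math.h>

/* Single-pole (first-order IIR) low-pass: y[n] = y[n-1] + alpha * (x[n] - y[n-1]). */
typedef struct { float alpha, y; } lpf1_t;

static void lpf1_init(lpf1_t *f, float fc, float fs)
{
    f->alpha = 1.0f - expf(-2.0f * 3.14159265f * fc / fs);  /* set from cutoff fc and sample rate fs */
    f->y = 0.0f;
}

static float lpf1_step(lpf1_t *f, float x)
{
    f->y += f->alpha * (x - f->y);   /* one multiply and two adds per sample */
    return f->y;
}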
To answer the detection part of your question, YES, it is certainly feasible to implement peak detection on the plethysmograph waveform within a microcontroller. Taking your example, a slope-based peak detection algorithm would operate on your waveform data, searching for changes in slope, essentially where the slope waveform crosses zero.
Here are a few other things to consider about your application:
Calculating slope can have a "spread" (i.e. do you find the slope between adjacent samples, or samples which are a few samples apart?)
What if your peak detection algorithm locates peaks that are too close together, or too far apart, in a physiological sense?
A Pulse Oximeter (wiki) often utilizes LEDs which emit Red and Infrared light. How does the frequency of the LED affect the plethysmograph? (HINT: It may not be significant, but I believe you'll find one wavelength to yield greater amplitudes in your frequency range of interest.)
Of course you'll find a variety of potential algorithms if you do a literature search but I think slope-based detection is great for its simplicity. Hope it helps.
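A minimal sketch of the slope-based idea, folding in a slope "spread" and a minimum peak spacing as discussed above (the parameter values are placeholders you would tune for your signal and sample rate):

/* Slope-based peak detection on a sampled waveform.
 * spread:  how many samples apart the slope is taken.
 * min_gap: minimum samples between detected peaks (rejects physiologically
 *          impossible rates).
 * Writes peak indices into peaks[] and returns how many were found. */
static int find_peaks(const int *x, int n, int spread, int min_gap,
                      int *peaks, int max_peaks)
{
    int count = 0;
    int last_peak = -min_gap;
    for (int i = spread; i + spread < n; i++) {
        int slope_before = x[i] - x[i - spread];
        int slope_after  = x[i + spread] - x[i];
        /* Rising before the sample and falling after it: a local maximum. */
        if (slope_before > 0 && slope_after < 0 && (i - last_peak) >= min_gap) {
            if (count < max_peaks)
                peaks[count++] = i;
            last_peak = i;
        }
    }
    return count;
}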
If you can detect the period using zero crossings, even at 10x oversampling of a 10 Hz signal, you can use a line fit of the quick-and-dirty edge to find the exact period, and then subtract the new wave's samples in that period from the previous one's to get the DC offset. The period measurement will have the precision of your sample rate. Doing operations on the time- and amplitude-normalized data will be much easier.
This idea is computationally light compared to FFT, which still needs additional data processing.
What is a simple way to see if my low-pass filter is working? I'm in the process of designing a low-pass filter and would like to run tests on it in a relatively straightforward manner.
Presently I open up a WAV file and stick all the samples in an array of ints. I then run the array through the low-pass filter to create a new array. What would be an easy way to check if the low-pass filter worked?
All of this is done in C.
You can use a broadband signal such as white noise to measure the frequency response:
generate white noise input signal
pass white noise signal through filter
take FFT of output from filter
compute log magnitude of FFT
plot log magnitude
Rather than coding this all up you can just dump the output from the filter to a text file and then do the analysis in e.g. MATLAB or Octave (hint: use periodogram).
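A small C sketch of that procedure up to the dump-to-file step. The 8-point moving average here is only a stand-in so the program runs; you would replace lowpass_filter() with your own routine:

#include <stdio.h>
#include <stdlib.h>

/* Placeholder filter: 8-point moving average. Replace with your own low-pass. */
static void lowpass_filter(const float *in, float *out, int n)
{
    float acc = 0.0f;
    for (int i = 0; i < n; i++) {
        acc += in[i];
        if (i >= 8) acc -= in[i - 8];
        out[i] = acc / 8.0f;
    }
}

int main(void)
{
    enum { N = 65536 };
    static float noise[N], filtered[N];

    for (int i = 0; i < N; i++)                       /* 1. white noise in [-1, 1] */
        noise[i] = 2.0f * rand() / (float)RAND_MAX - 1.0f;

    lowpass_filter(noise, filtered, N);               /* 2. pass it through the filter */

    FILE *f = fopen("filtered.txt", "w");             /* 3. dump for Octave/MATLAB */
    if (!f) return 1;
    for (int i = 0; i < N; i++)
        fprintf(f, "%f\n", filtered[i]);
    fclose(f);
    return 0;
}

From there, load the file in MATLAB or Octave and follow the periodogram hint above for the log-magnitude plot.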
Depends on what you want to test. I'm not a DSP expert, but I know there are different things one could measure about your filter (if that's what you mean by testing).
If the filter is linear then all information of the filter can be found in the impulse response. Read about it here: http://en.wikipedia.org/wiki/Linear_filter
E.g. if you take the Fourier transform of the impulse response, you get the frequency response. The frequency response easily tells you if the low-pass filter is worth its name.
Maybe I underestimate your knowledge of DSP, but I recommend you read the book on this website: http://www.dspguide.com. It's a very accessible book without difficult math. It's available as a real book, but you can also read it online for free.
EDIT: After reading it I'm convinced that every programmer who ever touches an ADC should definitely read this book first. I discovered that I did a lot of things the hard way in past projects, things I could have done a thousand times better if I had had a little more knowledge of DSP. Most of the time an inexperienced programmer is doing DSP without knowing it.
Create two single-tone (pure sine) signals, one at a low frequency and one at a high frequency. Then run your filter on both. If it works, the low-frequency signal should come through unmodified whereas the high-frequency signal will be filtered out.
As Bart mentioned above, if it's an LTI system I would feed in an impulse, record the samples, perform an FFT using MATLAB and plot the magnitude.
You ask why?
In the time domain, you have to convolve the input x(t) with the impulse response d(t) to get the output, which is tedious:
y(t) = x(t) * d(t)
In the frequency domain, convolution becomes simple multiplication:
Y(s) = X(s) · D(s)
So the transfer function is Y(s)/X(s) = D(s).
That's the reason you take the FFT of the impulse response to see the behavior of the filter.
You should be able to programmatically generate tones (sine waves) of various frequencies, stuff them into the input array, and then compare the signal energy by summing the squared values of the arrays (and dividing by the length, though that's mathematically not necessary here because the signals should be the same length). The ratio of the output energy to the input energy gives you the filter gain. If your LPF is working correctly, the gain should be close to 1 for low frequencies, close to 0.5 at the bandwidth frequency, and close to zero for high frequencies.
A note: There are various (but essentially the same in spirit) definitions of "bandwidth" and "gain". The method I've suggested should be relatively insensitive to the transient response of the filter because it's essentially averaging the intensity of the signal, though you could improve it by ignoring the first T samples of the input, where T is related to the filter bandwidth. Either way, make sure that the signals are long compared to the inverse of the filter bandwidth.
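A compact sketch of that energy-ratio check; the filter is passed in as a function pointer so you can plug in your own routine, and the gain at a test frequency is simply output energy over input energy:

#include <math.h>

typedef void (*filter_fn)(const float *in, float *out, int n);

/* Measure filter gain (output energy / input energy) at one test frequency.
 * freq: test tone frequency in Hz, fs: sample rate, n: number of samples. */
static double gain_at(filter_fn filter, double freq, double fs,
                      float *in, float *out, int n)
{
    const double two_pi = 6.283185307179586;
    double e_in = 0.0, e_out = 0.0;
    for (int i = 0; i < n; i++)
        in[i] = (float)sin(two_pi * freq * i / fs);   /* test tone */
    filter(in, out, n);
    for (int i = 0; i < n; i++) {
        e_in  += (double)in[i]  * in[i];
        e_out += (double)out[i] * out[i];
    }
    /* ~1 in the passband, ~0.5 at the -3 dB point, ~0 in the stopband */
    return e_out / e_in;
}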
When I check a digital filter, I compute the magnitude response graph for the filter and plot it. I then generate a linear sweeping sine wave in code or using Audacity, and pass the sweeping sine wave through the filter (taking into account that things might get louder, so the sine wave should be quiet enough not to clip). A visual check is usually enough to assert that the filter is doing what I think it should. If you don't know how to compute the magnitude response, I suspect there are tools out there that will compute it for you.
Depending on how certain you want to be, you don't even have to do that. You can just process the linear sweep and see that it attenuated the higher frequencies.
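If you'd rather generate the sweep in code than in Audacity, a linear chirp is easy to produce with a phase accumulator; the start/end frequencies and amplitude below are placeholders:

#include <math.h>

/* Fill buf[] with a linear sine sweep from f0 to f1 Hz over n samples at rate fs.
 * Keep amp well below 1.0 so the filter output cannot clip. */
static void linear_sweep(float *buf, int n, double fs, double f0, double f1, double amp)
{
    double phase = 0.0;
    for (int i = 0; i < n; i++) {
        double f = f0 + (f1 - f0) * i / (double)(n - 1);  /* instantaneous frequency */
        buf[i] = (float)(amp * sin(phase));
        phase += 6.283185307179586 * f / fs;              /* advance phase by one sample */
    }
}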
I am trying to work on a system where the quality of a recorded sentence is rated by a computer. There are three modes under which this system operates:
When the person records a sentence using a mic and mixer arrangement.
When the user records over a landline.
When the user records over a mobile phone.
I notice that the scores I get from recordings using the above 3 sources are in the following order: Mic_score > Landline_score > Mobile_score
It is likely that the above order is because of the effects of the codecs and channel characteristics. My question is:
What can be done to compensate for channel/codec introduced artifacts to get consistent scores across channels? If some sort of inverse filtering, then please provide some links where I could get started.
How do I detect what channel the input speech has been recorded on? Use HMMs?
Edit 1: I am not at liberty to go into the details of the criteria. The current scores that I get from the mic, landline and mobile (for the same sentence, spoken similarly over the three media) are something like 80, 66, 41. This difference may be due to channel effects. If the content and manner of speaking the sentence are the same, then I am looking for an algorithm that normalizes the scores (they need not be identical, but they should be close).
It may very well be that the sound quality is different.
Have you tried listening to some examples?
You can also use any spectrum analyzer to look at the data in detail. I suggest http://www.baudline.com/. Things you should look out for: the distance between the noise floor and the speech.
Also look at the high-frequency noise bursts when the letters t, f and s are spoken. On low-quality lines the difference between these letters disappears.
Why do you want to skew the quality measures? Giving an objective measure of the quality seems to make more sense.
The landline codec will remove all frequencies around and above 4 kHz. The cell phone codec will throw away more information as part of a lossy compression process. Unless you have another side channel of information regarding the original audio content, there is no reliable way to recover the audio that was thrown away.
Your best bet to normalize is to low-pass filter the audio to match the 8 kHz telco codec, and then run the result through some cellular-standard compression algorithm (there may be one published for your particular mobile cellular protocol). This should reduce the quality of all 3 signals to about the same level.
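As a very rough sketch of the first half of that idea, band-limiting to something like the ~300-3400 Hz telephone band and then decimating toward 8 kHz; the single-pole filters and coefficients here are crude placeholders, and a real normalization would use a proper filter design plus the actual codec:

#include <math.h>

/* Crude telephone-band approximation: one-pole high-pass near 300 Hz,
 * one-pole low-pass near 3400 Hz, then decimation toward 8 kHz.
 * Returns the number of output samples written to out[]. */
static int to_telco_band(const float *in, int n, float fs, float *out)
{
    const float two_pi = 6.2831853f;
    float a_lp = 1.0f - expf(-two_pi * 3400.0f / fs);   /* low-pass coefficient */
    float a_hp = 1.0f - expf(-two_pi * 300.0f  / fs);   /* tracks content below ~300 Hz */
    int decim = (int)(fs / 8000.0f + 0.5f);             /* e.g. 44.1 kHz -> keep every 6th sample */
    if (decim < 1) decim = 1;
    float lp = 0.0f, low = 0.0f;
    int m = 0;
    for (int i = 0; i < n; i++) {
        lp  += a_lp * (in[i] - lp);       /* low-pass at ~3.4 kHz */
        low += a_hp * (lp - low);         /* very-low-frequency content... */
        float band = lp - low;            /* ...subtracted: crude high-pass at ~300 Hz */
        if (i % decim == 0)
            out[m++] = band;
    }
    return m;
}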