This is my first post on SO. I haven't developed much code for embedded systems yet, but I have a few problems and need help from more advanced programmers. I use the following devices:
- LandTiger board with LPC1768 (Cortex M3) MCU,
- Digilent pmodACL with ADXL345 accelerometer (3 axis),
- Digilent pmodGYRO with L3G4200D gyroscope (3 axis).
I would like to get some information about device orientation, i.e. rotation angles about the X, Y and Z axes. I've read that in order to achieve this I need to combine data from both the accelerometer and the gyroscope using a Kalman filter or its simpler form, i.e. a complementary filter. I would like to know if it's possible to count roll, pitch and yaw from full range (0-360 degrees) using measurement data only from the gyroscope and accelerometer (without a magnetometer). I've also found some mathematical formulas (http://www.ewerksinc.com/refdocs/Tilt%20Sensing%20with%20LA.pdf and http://www.freescale.com/files/sensors/doc/app_note/AN3461.pdf) but they contain square roots in the numerators/denominators, so the information about the proper quadrant of the coordinate system is lost.
The question you are asking is a fairly frequent one, and is fairly complex, with many different solutions.
Although the title mentions only an accelerometer, the body of your post mentions a gyroscope, so I will assume you have both. In addition, there are many steps to getting low-cost accelerometers and gyros to work, one of which is the voltage-to-measurement conversion. I will not cover that part.
First and foremost I will answer your question.
You asked if by 'counting' the gyro measurements you can estimate the attitude (orientation) of the device in Euler Angles.
Short answer: Yes, you could sum the scaled gyro measurements to get a very noisy estimate of the device rotation (actual radians turned, not attitude), but it would fail as soon as you rotate about more than one axis. This will not suffice for most applications.
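As a concrete illustration, the "counting" amounts to integrating the scaled rates over each sample period, roughly like this (a minimal sketch; the sample period and variable names are assumptions, not your driver's API):

/* Minimal sketch: sum scaled gyro rates over one sample period.
 * rates[] are angular rates in rad/s (already converted from raw counts),
 * angles[] accumulate the integrated rotation, dt is the sample period in s.
 * The estimate drifts over time and breaks down once more than one axis rotates. */
static void integrate_gyro(float angles[3], const float rates[3], float dt)
{
    for (int i = 0; i < 3; i++)
        angles[i] += rates[i] * dt;
}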
I will provide you with some general advice, specific knowledge and some example code that I have used before.
Firstly, you should not try to solve this problem by writing a program and testing with your IMU. You should start by writing a simulation using validated libraries, then validate your algorithm/program, and only then try to implement it with the IMU.
Secondly, you say you want to "count roll, pitch and yaw from full range (0-360 degrees)".
From this I assume you mean you want to be able to determine the Euler Angles that represent the attitude of the device with respect to an external stationary North-East-Down (NED) frame.
Your statement makes me think you are not familiar with representations of attitude, because as far as I know there are no Euler Angle representations with all 3 angles in the 0-360 range.
The application for which you want to use the attitude of the device will be important. If you are using Euler Angles you will not be able to accurately track the attitude of the device when large (greater than around 50 degrees) rotations are made on the roll or pitch axes, due to what is known as Gimbal Lock.
If you require the tracking of such motions then you will need to use a quaternion or Direction Cosine Matrix (DCM) representation of attitude.
Thirdly, as you have said, you can use a Complementary Filter or a Kalman Filter variant (Extended Kalman Filter, Error-State Kalman Filter, Indirect Kalman Filter) to accurately track the attitude of the device by fusing the data from the accelerometer, gyro and a magnetometer. I suggest the Complementary Filter described by Madgwick, which is implemented in C, C# and MATLAB here. A Kalman Filter variant would be necessary if you wanted to track the position of the device and had an additional sensor such as GPS.
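To give you an idea of the structure, the core of a basic first-order complementary filter for roll and pitch looks roughly like this (a sketch of the general technique, not Madgwick's filter; the gain alpha, sample period dt and function name are my own):

#include <math.h>

/* First-order complementary filter for roll and pitch (radians).
 * gx, gy are gyro rates in rad/s, ax/ay/az the accelerometer reading (any
 * consistent unit, only ratios are used), dt the sample period in seconds.
 * alpha close to 1 trusts the gyro, close to 0 trusts the accelerometer. */
static void complementary_update(float *roll, float *pitch,
                                 float gx, float gy,
                                 float ax, float ay, float az,
                                 float dt, float alpha)
{
    /* Accelerometer-only attitude (valid while the device is not accelerating). */
    float accel_roll  = atan2f(ay, az);
    float accel_pitch = atan2f(-ax, sqrtf(ay * ay + az * az));

    /* Blend the integrated gyro estimate with the accelerometer estimate. */
    *roll  = alpha * (*roll  + gx * dt) + (1.0f - alpha) * accel_roll;
    *pitch = alpha * (*pitch + gy * dt) + (1.0f - alpha) * accel_pitch;
}

Note that yaw cannot be corrected this way, since gravity carries no heading information; that is what the magnetometer is for.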
For some example code of mine using accelerometer only to get Euler Angle pitch and roll see my answer to this other question.
Selecting a random color on a computer is a touch harder than I thought it would be.
The naive way of uniform random sampling of 0..255 for R,G,B will tend to draw lots of similar greens. It would make sense to sample from a perceptually uniform space like CIELUV.
A simple way to do this is to sample L,u,v on a regular mesh and ensure the color solid is contained in the bounds (I've seen different bounds for this). If the sample falls outside the embedded RGB solid (tested by mapping it to XYZ and then to RGB), reject it and sample again. You can settle for a kludgy-but-guaranteed-to-terminate "bailout" selection (like the naive procedure) if you reject more than some arbitrary threshold number of times.
Testing whether the sample lies within RGB needs to handle the special case of black (some implementations are silent about the divide by zero), I believe. If L=0 and either u!=0 or v!=0, then the sample needs to be rejected, or else you would end up oversampling the L=0 plane in Luv space.
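In code, the rejection test I have in mind looks roughly like this (a C sketch using a D65 white point and the conversion formulas from the easyrgb page linked below; the outer Luv bounds and the names are placeholders, and gamma is skipped since it does not change the in-gamut test):

#include <math.h>
#include <stdlib.h>

/* Rough sketch: one rejection-sampling draw of an sRGB color from CIELUV.
 * L in [0,100], u in [-134,220], v in [-140,122] are assumed outer bounds;
 * returns 1 and fills rgb[] (linear, 0..1) if the sample is inside sRGB. */
static int sample_luv_once(double rgb[3])
{
    const double Xn = 95.047, Yn = 100.0, Zn = 108.883;          /* D65 white */
    const double d  = Xn + 15.0 * Yn + 3.0 * Zn;
    const double un = 4.0 * Xn / d, vn = 9.0 * Yn / d;

    double L = 100.0 * rand() / RAND_MAX;
    double u = -134.0 + 354.0 * rand() / RAND_MAX;
    double v = -140.0 + 262.0 * rand() / RAND_MAX;

    /* Special case: L = 0 is black only when u = v = 0; reject otherwise. */
    if (L <= 0.0) {
        rgb[0] = rgb[1] = rgb[2] = 0.0;
        return (u == 0.0 && v == 0.0);
    }

    /* CIELUV -> XYZ. */
    double up = u / (13.0 * L) + un;
    double vp = v / (13.0 * L) + vn;
    double Y  = (L > 8.0) ? Yn * pow((L + 16.0) / 116.0, 3.0) : Yn * L / 903.3;
    double X  = Y * 9.0 * up / (4.0 * vp);
    double Z  = Y * (12.0 - 3.0 * up - 20.0 * vp) / (4.0 * vp);

    /* XYZ -> linear sRGB; reject anything outside [0,1]. */
    X /= 100.0; Y /= 100.0; Z /= 100.0;
    rgb[0] =  3.2406 * X - 1.5372 * Y - 0.4986 * Z;
    rgb[1] = -0.9689 * X + 1.8758 * Y + 0.0415 * Z;
    rgb[2] =  0.0557 * X - 0.2040 * Y + 1.0570 * Z;
    for (int i = 0; i < 3; i++)
        if (rgb[i] < 0.0 || rgb[i] > 1.0) return 0;
    return 1;   /* the caller loops until this returns 1 or hits a bailout count */
}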
Does this procedure have an obvious flaw? It seems to work but I did notice that I was rolling black more often than I thought made sense until I thought about what was happening in that case. Can anyone point me to the right bounds on the CIELUV grid to ensure that I am enclosing the RGB solid?
A useful reference for those who don't know it:
https://www.easyrgb.com/en/math.php
The key problem with this is that you need bounds to reject samples that fall outside of RGB. I was able to find it worked out here (nice demo on page, API provides convenient functions):
https://www.hsluv.org/
A few things I noticed with uniform sampling of CIELUV in RGB:
- most colors are green and purple (this is true independent of RGB bounds)
- you have a hard time sampling what we think of as yellow (very small volume of high lightness, high chroma space)
I implemented various strategies that focus on sampling hues (which is really what we want when we think of "sampling colors") by weighting according to the maximum chromas at that lightness. This makes colors like chromatic light yellows easier to catch and avoids oversampling greens and purples. You can see these methods in action here (select "randomize colors"):
https://www.mysticsymbolic.art/
Source for color randomizers here:
https://github.com/mittimithai/mystic-symbolic/blob/chromacorners/lib/random-colors.ts
Okay, while you don't show the code you are using to generate the random numbers and then apply them to the CIELUV color space, I'm going to guess that you are creating a random number 0.0-100.0 from a random number generator, and then just assigning it to L*.
That will most likely give you a lot of black or very dark results.
Let Me Explain
L* of L*u*v* is not linear with respect to light. Y of CIEXYZ is linear with respect to light. L* is perceptual lightness, so a power curve is applied to Y to make it linear to perception but then non-linear with respect to light.
TRY THIS
To get L* with a random value 0-100:
Generate a random number between 0.0 and 1.0
Then apply an exponent of 0.42
Then multiply by 100 to get L*
Lstar = Math.pow(Math.random(), 0.42) * 100;
This takes your random number that represents light, and applies a power curve that emulates human lightness perception.
UV Color
As for the u and v values, you can probably just leave them as linear random numbers. Constrain u to about -84 and +176, and v to about -132.5 and +107.5
Urnd = (Math.random() - 0.3231) * 260;
Vrnd = (Math.random() - 0.5521) * 240;
Polar Color
It might be interesting converting uv to LChLUV or LshLUV
For hue, it's probably as simple as H = Math.random() * 360
For chroma constrained to 0-178: C = Math.random() * 178
The next question is, should you find chroma? Or saturation? CIELUV can provide either Hue or Sat — but for directly generating random colors, it seems that chroma is a bit better.
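If you go the polar route, getting back to Cartesian u and v for the gamut check is just the usual polar-to-Cartesian step (a small sketch, with h in degrees):

#include <math.h>

/* Sketch: convert polar LCh(uv) back to Cartesian CIELUV u*, v*.
 * C is chroma, h is hue in degrees; L passes through unchanged. */
static void lch_to_luv(double C, double h, double *u, double *v)
{
    const double PI = 3.14159265358979323846;
    *u = C * cos(h * PI / 180.0);
    *v = C * sin(h * PI / 180.0);
}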
And of course these simple examples are not preventing over-runs, so the color values need to be tested to see if they are legal sRGB or not. There are a few things that can be done to constrain the generated values to legal colors, but the object here was to get you to a better distribution without excess black/dark results.
Please let me know of any questions.
I'm developing an iOS app for frequency detection, and I'm using the YIN algorithm, which is very precise: with Audacity, I've generated rectangular waves of different frequencies, and my algorithm has a precision of about 0.1% - for example, generating a tone of 82.4 Hz (E string), I really get 82.4 Hz and nothing else.
Anyhow, when I strum a guitar string, I often get overtones which sometimes can be stronger (have a higher amplitude) than the fundamental tone (F0). Consequently, my display starts "dancing" and toggling - sometimes it even happens that (when the tone dies out) my algorithm stops at the overtone's frequency (for example A instead of E), so the user has to strum the string again in order to see if his desired tone (frequency) is present.
I know that this phenomenon has nothing to do with my algorithm; it's merely a "hardware" problem (I mean the guitar, which simply produces overtones).
I've tried in vain to smooth the results (of the frequency detection) or to "snap" to a fixed frequency as soon as a crucial frequency (for example 82.4 Hz for the E string, +/- tolerance) has been detected. Anyhow, it often occurs that my algorithm snaps to an erroneous frequency as well.
I'm asking myself how cheap guitar tuners ($10 in guitar stores) work, as their frequency detection is reliable and stable as well.
I don't want to change the algorithm, but two possible solutions come to mind:
Preprocessing of the signal (maybe Hanning window, lowpass or bandpass filtering) and/or
Postprocessing of the signal (some kind of frequency smoothing; a rough sketch of what I mean follows below).
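What I have in mind for the postprocessing is roughly a running median over the last few F0 estimates plus an octave check, something like this (just a sketch; the thresholds and names are made up and not part of YIN):

#include <math.h>
#include <string.h>

#define HISTORY 5

/* Sketch: smooth successive F0 estimates with a small running median and
 * fold obvious overtone jumps (2x, 3x) back towards the previous stable value. */
static float smooth_f0(float f0)
{
    static float hist[HISTORY];
    static int   count = 0;
    static float stable = 0.0f;

    if (stable > 0.0f) {                       /* overtone fold, 3% tolerance */
        if      (fabsf(f0 / 2.0f - stable) < 0.03f * stable) f0 *= 0.5f;
        else if (fabsf(f0 / 3.0f - stable) < 0.03f * stable) f0 /= 3.0f;
    }

    hist[count % HISTORY] = f0;                /* push into the history ring  */
    count++;
    int n = count < HISTORY ? count : HISTORY;

    float tmp[HISTORY];                        /* median of the last n values */
    memcpy(tmp, hist, sizeof(tmp));
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (tmp[j] < tmp[i]) { float t = tmp[i]; tmp[i] = tmp[j]; tmp[j] = t; }
    stable = tmp[n / 2];
    return stable;
}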
Does someone have an idea how to overcome the "choppy" results?
I used autocorrelation for my free chromatic app iTransposer and incorporated a Hanning window, so this may help you. I wasn't looking for accuracy initially as I wanted to display the note on a stave, not a meter. However, a friend of mine tested it to 0.1 Hz with a signal generator at his work, and it had issues over 383 Hz with simple signals such as sine waves. I've tried it with various brass instruments, guitar and GarageBand instruments, and it seems to be OK for tuning.
Basically I implemented this http://www.ucl.ac.uk/~ucjt465/tutorials/praatpitch.html
using vDSP, and updated a sample project supplied by Kevin P Murphy: https://github.com/kevmdev/PitchDetectorExample
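Stripped of the vDSP calls, the core of that approach is just windowing the frame and picking the strongest autocorrelation peak, along these lines (a plain-C sketch, not the code from that sample project):

#include <float.h>
#include <math.h>

/* Sketch: apply a Hann window to one frame in place, then estimate the pitch
 * period from the highest autocorrelation peak between min_lag and max_lag. */
static int pitch_period(float *frame, int n, int min_lag, int max_lag)
{
    const float PI = 3.14159265f;
    for (int i = 0; i < n; i++)
        frame[i] *= 0.5f * (1.0f - cosf(2.0f * PI * i / (n - 1)));

    int   best_lag = min_lag;
    float best_val = -FLT_MAX;
    for (int lag = min_lag; lag <= max_lag; lag++) {
        float sum = 0.0f;
        for (int i = 0; i + lag < n; i++)
            sum += frame[i] * frame[i + lag];
        if (sum > best_val) { best_val = sum; best_lag = lag; }
    }
    return best_lag;   /* F0 is roughly sample_rate / best_lag */
}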
This question is about interpolating sine wave oscillators:
Assuming that the amplitude and frequency trajectories for a sine wave are defined by corresponding breakpoint functions or read from the user interface, the following few lines of C code show a common sine-wave oscillator paradigm for synthesizing inNumberFrames of mono audio samples in real time, using linear interpolation:
// ... (pre-computing initial amplitude and phase values)...
for (UInt32 frame = 0; frame < inNumberFrames; frame++)
{
    /* amplitude and phase increment are linearly interpolated across the buffer */
    buffer[frame] = sinf(phasef) * ampf[frame];
    phasef += osc->previousPartialPhaseIncrementf + df * frame;
    if (phasef > TWO_PI) phasef -= TWO_PI;   /* wrap phase to keep sinf accurate */
}
// ... (storing current amplitude and phase values)...
While musically satisfying in general (although it can be performance-optimized using pre-computed sine wavetables and pointer arithmetic), there are occasions when linear interpolation artifacts can be heard. I'd like to know whether there is a free example of cubic or bicubic interpolation of amplitude and phase oscillator instantaneous values. Given that the render thread has real-time priority (at least in CoreAudio), it is important to keep it lightweight, and also to avoid running into too many lower-priority threading issues if interpolating outside the render thread. I'd be thankful to anyone pointing me to a working example of a (bi)cubic interpolation sine-wave oscillator algorithm in C, no matter how simple (or complex) it is.
Thanks in advance.
UPDATE:
Perhaps this illustration can clarify what was meant by the values to be interpolated. Purple dots represent a frequency envelope breakpoint curve (connected by linear interpolation). Cyan dots represent a possibility of superimposed polynomial interpolations. The first and last segments are off-scale.
Have a look at musicdsp.org, where there is a post on (almost) Ready-to-use oscillators. The end of the post contains the method that you might be interested in, with the following signature (by Ollie N.):
float Oscillator::UpdateWithCubicInterpolation( float frequency )
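In case that post moves, the usual building block is a 4-point Catmull-Rom (cubic Hermite) interpolation of the control values, something like this sketch (not Ollie N.'s code):

/* Sketch: Catmull-Rom interpolation of a control value (amplitude or phase
 * increment) between breakpoints y1 and y2, with neighbours y0 and y3.
 * t runs from 0 to 1 across the current segment. */
static inline float cubic_interp(float y0, float y1, float y2, float y3, float t)
{
    float a = -0.5f * y0 + 1.5f * y1 - 1.5f * y2 + 0.5f * y3;
    float b =  y0 - 2.5f * y1 + 2.0f * y2 - 0.5f * y3;
    float c = -0.5f * y0 + 0.5f * y2;
    return ((a * t + b) * t + c) * t + y1;
}

In the render loop above you would evaluate it once per frame at t = frame / (float)inNumberFrames, once for the amplitude and once for the phase increment, feeding in the previous and next breakpoints as y0 and y3.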
I have implemented SIFT in OpenCV for comparing images, but I have not yet written the program for the comparison itself; I'm thinking of using FLANN for that. My problem is that, looking at the 128 elements of the descriptor, I cannot really understand the similarity between an image and its rotated version.
From reading Lowe's paper, I do understand that the descriptor coordinates are all rotated in terms of the keypoint orientation, but how exactly is the similarity obtained? Can we understand the similarity just by viewing the 128 values?
Please help me; this is for my project presentation.
You can first use Lowe's metric to compute some putative matches between the two images. The metric is: for any given descriptor de in image 1, find the distances to all descriptors de' in image 2. If the ratio of the closest distance to the second-closest distance is below a threshold, accept the match.
After this, you can use RANSAC or another form of robust estimation, or a Hough Transform, to check geometric consistency in terms of position, orientation, and scale of the keypoints that you accepted as putative matches.
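In code, the ratio test is just a nearest/second-nearest comparison over the 128-dimensional vectors, along these lines (a plain-array sketch; FLANN or OpenCV's matchers do the same thing faster, and Lowe's paper suggests a ratio of about 0.8):

#include <float.h>
#include <math.h>

/* Sketch: Lowe's ratio test for one descriptor from image 1 against all
 * descriptors from image 2 (128-dimensional SIFT vectors).
 * Returns the index of the best match, or -1 if the ratio test fails. */
static int ratio_test_match(const float d1[128],
                            const float (*d2)[128], int n2, float ratio)
{
    float best = FLT_MAX, second = FLT_MAX;
    int   best_idx = -1;
    for (int j = 0; j < n2; j++) {
        float dist = 0.0f;
        for (int k = 0; k < 128; k++) {
            float diff = d1[k] - d2[j][k];
            dist += diff * diff;
        }
        if (dist < best) { second = best; best = dist; best_idx = j; }
        else if (dist < second) { second = dist; }
    }
    /* Accept only if the closest match is clearly better than the runner-up. */
    if (best_idx >= 0 && sqrtf(best) < ratio * sqrtf(second))
        return best_idx;
    return -1;
}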
If I recall correctly, SIFT will give you a set of 128-value descriptors that describe each of the interest points. You also have the location of each point in each of the images, as well as its "direction" (I forget what the "direction" is called in the paper) and scale in each image.
Once you've found two points that have matching descriptors, you can calculate the transformation from the interest point in one image to the same point in the other image by comparing coordinates and directions.
If you have enough matches, you see if all (or a majority of) the interest points have the same transformation. If they do, the images are similar; if they don't, the images are different.
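Concretely, the per-match transformation can be read straight off the keypoint attributes, something like this (a sketch; the struct fields are assumptions about whatever your SIFT implementation exposes):

/* Sketch: keypoint attributes as most SIFT implementations report them. */
typedef struct { float x, y, scale, orientation; } Keypoint;

/* Relative similarity transform implied by one match (image 1 -> image 2). */
static void match_transform(Keypoint a, Keypoint b,
                            float *d_angle, float *d_scale, float *dx, float *dy)
{
    *d_angle = b.orientation - a.orientation;  /* rotation between the images */
    *d_scale = b.scale / a.scale;              /* scale change                */
    *dx = b.x - a.x;                           /* rough translation           */
    *dy = b.y - a.y;
}

You would then check whether a majority of the matches agree on roughly the same d_angle, d_scale and translation.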
Hope this helps...
What you are looking for is basically ASIFT.
You can find the code here, along with some overview.
The sensor module in my project consists of a rotating camera that collects noisy information about moving objects in the surrounding environment.
The information consists of the distance, angle and relative change of the moving objects.
The limited view range of the camera makes it essential to rotate the camera periodically to update the environment information.
I was looking for algorithms / ways to model this information, in order to be able to guess / predict / learn the motion properties of these objects.
My current proposed idea is to store the last n snapshots of each object in a queue and take a weighted average of the positions and velocities of each moving object, but I think it is a poor method.
Can you suggest some topics that suit this case?
Thanks
Kalman filters ({Extended, Unscented, ...} variants), and particle filters, but look at particle filters only after reading about Kalman filters.
Kalman filters learn and predict the correct data from noisy data under a Gaussian assumption, so they may be of use to you. If you need non-Gaussian methods, look at particle filters.
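To give a flavour of it, a one-dimensional constant-velocity Kalman filter (tracking a single noisy coordinate of one object) fits in a few lines; the process and measurement noise values q and r are placeholders you would tune:

/* Sketch: 1-D constant-velocity Kalman filter.
 * State x = [position, velocity], P its covariance,
 * q = process noise, r = measurement noise variance, dt = time step. */
typedef struct { float x[2]; float P[2][2]; } Kf1D;

static void kf_step(Kf1D *kf, float z, float dt, float q, float r)
{
    /* Predict: x = F x, P = F P F' + Q, with F = [[1, dt], [0, 1]]. */
    kf->x[0] += dt * kf->x[1];
    float P00 = kf->P[0][0] + dt * (kf->P[1][0] + kf->P[0][1]) + dt * dt * kf->P[1][1] + q;
    float P01 = kf->P[0][1] + dt * kf->P[1][1];
    float P10 = kf->P[1][0] + dt * kf->P[1][1];
    float P11 = kf->P[1][1] + q;

    /* Update with a position measurement z (H = [1, 0]). */
    float S  = P00 + r;
    float K0 = P00 / S, K1 = P10 / S;
    float y  = z - kf->x[0];
    kf->x[0] += K0 * y;
    kf->x[1] += K1 * y;
    kf->P[0][0] = (1.0f - K0) * P00;
    kf->P[0][1] = (1.0f - K0) * P01;
    kf->P[1][0] = P10 - K1 * P00;
    kf->P[1][1] = P11 - K1 * P01;
}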