Source code for Xiaolin Wu's line algorithm in C? - c

I'm looking for a nice and efficient implementation of Xiaolin Wu's anti-aliased line drawing algorithm in C. Does anyone have this code they could share with me?
Thanks

Wikipedia has pseudocode.
Google has many examples like this one or this one. And your question reminded me of this nice article on antialiasing.
EDIT: It's time to discover Hugo Helias's website if you don't know it already.

Wondering if the implementation here is correct, because in

ErrorAdj = ((unsigned long) DeltaX << 16) / (unsigned long) DeltaY;
/* Draw all pixels other than the first and last */
while (--DeltaY) {
    ErrorAccTemp = ErrorAcc;  /* remember current accumulated error */
    ErrorAcc += ErrorAdj;     /* calculate error for next pixel */
    if (ErrorAcc <= ErrorAccTemp) {
        /* The error accumulator turned over, so advance the X coord */
        X0 += XDir;
    }
    Y0++; /* Y-major, so always advance Y */
    /* The IntensityBits most significant bits of ErrorAcc give us the
       intensity weighting for this pixel, and the complement of the
       weighting for the paired pixel */
    Weighting = ErrorAcc >> IntensityShift;
    DrawPixel(pDC, X0, Y0, BaseColor + Weighting);
    DrawPixel(pDC, X0 + XDir, Y0,
              BaseColor + (Weighting ^ WeightingComplementMask));
}

the condition if (ErrorAcc <= ErrorAccTemp) is always false.
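Note: if ErrorAcc is a 16-bit unsigned variable (as it appears to be in the listing this snippet comes from), the addition ErrorAcc += ErrorAdj is allowed to wrap around, and the comparison is true exactly when that wrap happens. A minimal sketch of the wraparound, assuming 16-bit unsigned accumulators:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t ErrorAcc = 0xF000, ErrorAdj = 0x2000, ErrorAccTemp;
    ErrorAccTemp = ErrorAcc;
    ErrorAcc += ErrorAdj;   /* wraps around to 0x1000 */
    printf("%u <= %u : %d\n",
           (unsigned)ErrorAcc, (unsigned)ErrorAccTemp, ErrorAcc <= ErrorAccTemp);
    return 0;
}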

Related

CORDIC Arcsine implementation fails

I have recently implemented a library of CORDIC functions to reduce the required computational power (my project is based on a PowerPC and is extremely strict in its execution time specifications). The language is ANSI-C.
The other functions (sin/cos/atan) work within accuracy limits in both the 32-bit and 64-bit implementations.
Unfortunately, the asin() function fails systematically for certain inputs.
For testing purposes I have implemented an .h file to be used in a Simulink S-Function. (This is only for my convenience; you can compile the following as a standalone .exe with minimal changes.)
Note: I have forced 32 iterations because I am working in 32-bit precision and the maximum possible accuracy is required.
Cordic.h:
#include <stdio.h>
#include <stdlib.h>
#define FLOAT32 float
#define INT32 signed long int
#define BIT_XOR ^
#define CORDIC_1K_32 0x26DD3B6A
#define MUL_32 1073741824.0F /*needed to scale float -> int*/
#define INV_MUL_32 9.313225746E-10F /*needed to scale int -> float*/
INT32 CORDIC_CTAB_32 [] = {0x3243f6a8, 0x1dac6705, 0x0fadbafc, 0x07f56ea6, 0x03feab76, 0x01ffd55b, 0x00fffaaa, 0x007fff55,
0x003fffea, 0x001ffffd, 0x000fffff, 0x0007ffff, 0x0003ffff, 0x0001ffff, 0x0000ffff, 0x00007fff,
0x00003fff, 0x00001fff, 0x00000fff, 0x000007ff, 0x000003ff, 0x000001ff, 0x000000ff, 0x0000007f,
0x0000003f, 0x0000001f, 0x0000000f, 0x00000008, 0x00000004, 0x00000002, 0x00000001, 0x00000000};
/* CORDIC Arcsine Core: vectoring mode */
INT32 CORDIC_asin(INT32 arc_in)
{
    INT32 k;
    INT32 d;
    INT32 tx;
    INT32 ty;
    INT32 x;
    INT32 y;
    INT32 z;
    x = CORDIC_1K_32;
    y = 0;
    z = 0;
    for (k = 0; k < 32; ++k)
    {
        d = (arc_in - y) >> (31);
        tx = x - (((y >> k) BIT_XOR d) - d);
        ty = y + (((x >> k) BIT_XOR d) - d);
        z += ((CORDIC_CTAB_32[k] BIT_XOR d) - d);
        x = tx;
        y = ty;
    }
    return z;
}
/* Wrapper function for scaling in-out of cordic core */
FLOAT32 asin_wrap(FLOAT32 arc)
{
    return ((FLOAT32)(CORDIC_asin((INT32)(arc*MUL_32))*INV_MUL_32));
}
This can be called in a manner similar to:
#include "Cordic.h"
#include "math.h"
void main()
{
    FLOAT32 value_32 = 0.5F;    /* example input */
    FLOAT32 y1, y2;
    y1 = asin_wrap(value_32);   /* my implementation */
    y2 = asinf(value_32);       /* standard math.h for comparison */
}
The results are as shown:
Top left shows the [-1;1] input over 2000 steps (0.001 increments), bottom left the output of my function, bottom right the standard output and top right the difference of the two outputs.
It is immediately apparent that the error is not within 32-bit accuracy.
I have analysed the steps performed (and the intermediate results) by my code, and it seems to me that at a certain point the value of y is "close enough" to the initial value of arc_in and something that may be related to a bit-shift causes the solution to diverge.
My questions:
I am at a loss: is this error inherent in the CORDIC implementation, or have I made a mistake in the implementation? I was expecting the decrease of accuracy near the extremes, but those spikes in the middle are quite unexpected. (The most notable ones are just beyond +/-0.6, but even with these removed there are more at smaller values, albeit not as pronounced.)
If it is inherent to the CORDIC implementation, are there known workarounds?
EDIT:
Since some comments mention it: yes, I tested the definition of INT32; even writing
#define INT32 int32_T
does not change the results in the slightest.
The computation time on the target hardware has been measured by hundreds of repetitions of blocks of 10,000 iterations of the function with random input in the validity range. The observed mean results (for one call of the function) are as follows:
math.h asinf() 100.00 microseconds
CORDIC asin() 5.15 microseconds
(Apparently the previous test had been faulty; a new cross-test obtained no better than an average of 100 microseconds across the validity range.)
I have apparently found a better implementation. It can be downloaded in a MATLAB version here and in C here. I will analyse its inner workings further and report later.
To review a few things mentioned in the comments:
The given code outputs values identical to another CORDIC implementation. This includes the stated inaccuracies.
The largest error is as you approach arcsin(1).
The second largest error is that the values of arcsin(0.60726) to arcsin(0.68514) all return 0.754805.
There are some vague references to inaccuracies in the CORDIC method for some functions including arcsin. The given solution is to perform "double-iterations" although I have been unable to get this to work (all values give a large amount of error).
The alternate CORDIC implementation has a comment /* |a| < 0.98 */ in the arcsin() implementation, which would seem to reinforce that there are known inaccuracies close to 1.
As a rough comparison of a few different methods, consider the following results (all tests performed on a desktop Windows 7 computer using MSVC++ 2010; benchmarks timed using 10M iterations over the arcsin() range 0-1):
Question CORDIC Code: 1050 ms, 0.008 avg error, 0.173 max error
Alternate CORDIC Code (ref): 2600 ms, 0.008 avg error, 0.173 max error
atan() CORDIC Code: 2900 ms, 0.21 avg error, 0.28 max error
CORDIC Using Double-Iterations: 4700 ms, 0.26 avg error, 0.917 max error (???)
Math Built-in asin(): 200 ms, 0 avg error, 0 max error
Rational Approximation (ref): 250 ms, 0.21 avg error, 0.26 max error
Linear Table Lookup (see below): 100 ms, 0.000001 avg error, 0.00003 max error
Taylor Series (7th power, ref): 300 ms, 0.01 avg error, 0.16 max error
These results are on a desktop so how relevant they would be for an embedded system is a good question. If in doubt, profiling/benchmarking on the relevant system would be advised. Most solutions tested don't have very good accuracy over the range (0-1) and all but one are actually slower than the built-in asin() function.
The linear table lookup code is posted below and is my usual method for any expensive mathematical function when speed is desired over accuracy. It simply uses a 1024 element table with linear interpolation. It seems to be both the fastest and most accurate of all methods tested, although the built-in asin() is not much slower really (test it!). It can easily be adjusted for more or less accuracy by changing the size of the table.
// Please test this code before using in anything important!
#include <math.h>
#include <stddef.h>

#define ASIN_TABLE_SIZE 1024

double asin_table[ASIN_TABLE_SIZE];

int init_asin_table(void)
{
    for (size_t i = 0; i < ASIN_TABLE_SIZE; ++i)
    {
        float f = (float) i / ASIN_TABLE_SIZE;
        asin_table[i] = asin(f);
    }
    return 0;
}

/* Lookup function, renamed so it does not clash with the table itself */
double asin_lookup(double a)
{
    static int s_Init = 0;
    if (!s_Init)   // initialises automatically the first time, or call init_asin_table() manually
    {
        init_asin_table();
        s_Init = 1;
    }
    double sign = 1.0;
    if (a < 0)
    {
        a = -a;
        sign = -1.0;
    }
    if (a > 1) return 0;
    double fi = a * ASIN_TABLE_SIZE;
    double decimal = fi - (int)fi;
    size_t i = fi;
    if (i >= ASIN_TABLE_SIZE - 1) return sign * 3.14159265359/2;
    return sign * ((1.0 - decimal)*asin_table[i] + decimal*asin_table[i+1]);
}
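A minimal usage sketch (a hypothetical driver, just to show the call sequence with the renamed lookup function above):

#include <stdio.h>

int main(void)
{
    init_asin_table();                  /* optional: asin_lookup() also initialises lazily */
    printf("%f\n", asin_lookup(0.5));   /* ~0.523599, i.e. pi/6 */
    return 0;
}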
The "single rotate" arcsine goes badly wrong when the argument is just greater than the initial value of 'x', where that is the magical scaling factor -- 1/An ~= 0.607252935 ~= 0x26DD3B6A.
This is because, for all arguments > 0, the first step always has y = 0 < arg, so d = +1, which sets y = 1/An, and leaves x = 1/An. Looking at the second step:
if arg <= 1/An, then d = -1, and the steps which follow converge to a good answer
if arg > 1/An, then d = +1, and this step moves further away from the right answer, and for a range of values a little bigger than 1/An, the subsequent steps all have d = -1, but are unable to correct the result :-(
I found:
arg = 0.607 (ie 0x26D91687), relative error 7.139E-09 -- OK
arg = 0.608 (ie 0x26E978D5), relative error 1.550E-01 -- APPALLING !!
arg = 0.685 (ie 0x2BD70A3D), relative error 2.667E-04 -- BAD !!
arg = 0.686 (ie 0x2BE76C8B), relative error 1.232E-09 -- OK, again
The descriptions of the method warn about abs(arg) >= 0.98 (or so), and I found that somewhere after 0.986 the process fails to converge and the relative error jumps to ~5E-02 and hits 1E-01 (!!) at arg=1 :-(
As you did, I also found that for 0.303 < arg < 0.313 the relative error jumps to ~3E-02, and reduces slowly until things return to normal. (In this case step 2 overshoots so far that the remaining steps cannot correct it.)
So... the single rotate CORDIC for arcsine looks rubbish to me :-(
Added later... when I looked even closer at the single rotate CORDIC, I found many more small regions where the relative error is BAD...
...so I would not touch this as a method at all... it's not just rubbish, it's useless.
BTW: I thoroughly recommend "Software Manual for the Elementary Functions", William Cody and William Waite, Prentice-Hall, 1980. The methods for calculating the functions are not so interesting any more (but there is a thorough, practical discussion of the relevant range-reductions required). However, for each function they give a good test procedure.
The additional source I linked at the end of the question apparently contains the solution.
The proposed code can be reduced to the following:
#define M_PI_2_32 1.57079632F
#define SQRT2_2 7.071067811865476e-001F /* sin(45°) = cos(45°) = sqrt(2)/2 */
FLOAT32 angles[] = {
7.8539816339744830962E-01F, 4.6364760900080611621E-01F, 2.4497866312686415417E-01F, 1.2435499454676143503E-01F,
6.2418809995957348474E-02F, 3.1239833430268276254E-02F, 1.5623728620476830803E-02F, 7.8123410601011112965E-03F,
3.9062301319669718276E-03F, 1.9531225164788186851E-03F, 9.7656218955931943040E-04F, 4.8828121119489827547E-04F,
2.4414062014936176402E-04F, 1.2207031189367020424E-04F, 6.1035156174208775022E-05F, 3.0517578115526096862E-05F,
1.5258789061315762107E-05F, 7.6293945311019702634E-06F, 3.8146972656064962829E-06F, 1.9073486328101870354E-06F,
9.5367431640596087942E-07F, 4.7683715820308885993E-07F, 2.3841857910155798249E-07F, 1.1920928955078068531E-07F,
5.9604644775390554414E-08F, 2.9802322387695303677E-08F, 1.4901161193847655147E-08F, 7.4505805969238279871E-09F,
3.7252902984619140453E-09F, 1.8626451492309570291E-09F, 9.3132257461547851536E-10F, 4.6566128730773925778E-10F};
FLOAT32 arcsin_cordic(FLOAT32 t)
{
    INT32   i;
    INT32   j;
    INT32   flip;
    FLOAT32 poweroftwo;
    FLOAT32 sigma;
    FLOAT32 sign_or;
    FLOAT32 theta;
    FLOAT32 x1;
    FLOAT32 x2;
    FLOAT32 y1;
    FLOAT32 y2;

    flip = 0;
    theta = 0.0F;
    x1 = 1.0F;
    y1 = 0.0F;
    poweroftwo = 1.0F;

    /* If the angle is small, use the small angle approximation */
    if ((t >= -0.002F) && (t <= 0.002F))
    {
        return t;
    }
    if (t >= 0.0F)
    {
        sign_or = 1.0F;
    }
    else
    {
        sign_or = -1.0F;
    }
    /* The inv_sqrt() is the famous Fast Inverse Square Root from the Quake 3 engine,
       here used with 3 (!!) Newton iterations */
    if ((t >= SQRT2_2) || (t <= -SQRT2_2))
    {
        t = 1.0F/inv_sqrt(1-t*t);
        flip = 1;
    }
    if (t >= 0.0F)
    {
        sign_or = 1.0F;
    }
    else
    {
        sign_or = -1.0F;
    }
    for ( j = 0; j < 32; j++ )
    {
        if (y1 > t)
        {
            sigma = -1.0F;
        }
        else
        {
            sigma = 1.0F;
        }
        /* Here a double iteration is done */
        x2 = x1 - (sigma * poweroftwo * y1);
        y2 = (sigma * poweroftwo * x1) + y1;
        x1 = x2 - (sigma * poweroftwo * y2);
        y1 = (sigma * poweroftwo * x2) + y2;
        theta += 2.0F * sigma * angles[j];
        t *= (1.0F + poweroftwo * poweroftwo);
        poweroftwo *= 0.5F;
    }
    /* Remove bias */
    theta -= sign_or*4.85E-8F;
    if (flip)
    {
        theta = sign_or*(M_PI_2_32-theta);
    }
    return theta;
}
The following is to be noted:
It is a "Double-Iteration" CORDIC implementation.
The angles table thus differs in construction from the old table.
The computation is done in floating point, which will cause a major increase in computation time on the target hardware.
A small bias is present in the output; it is removed via the theta -= sign_or*4.85E-8F; line.
The following picture shows the absolute (left) and relative errors (right) of the old implementation (top) vs the implementation contained in this answer (bottom).
The relative error is obtained simply by dividing the CORDIC output by the output of the built-in math.h implementation; it is plotted around 1 rather than 0 for this reason.
The peak relative error (when not dividing by zero) is 1.0728836e-006.
The average relative error is 2.0253509e-007 (almost in accordance with 32-bit accuracy).
For convergence of an iterative process it is necessary that any "wrong" i-th iteration can be "corrected" in the subsequent (i+1)-th, (i+2)-th, (i+3)-th, etc. iterations. Or, in other words, at least half of the "wrong" i-th iteration must be correctable in the next (i+1)-th iteration.
For atan(1/2^i) this condition is satisfied, i.e.:
atan(1/2^(i+1)) > 1/2*atan(1/2^i)
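A quick numerical check of this inequality over the usual table range (a minimal sketch):

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Verify atan(1/2^(i+1)) > 0.5 * atan(1/2^i) for each table index */
    for (int i = 0; i < 31; ++i)
    {
        double a  = atan(ldexp(1.0, -i));        /* atan(1/2^i)     */
        double a1 = atan(ldexp(1.0, -(i + 1)));  /* atan(1/2^(i+1)) */
        printf("i=%2d  %.12f > %.12f : %s\n", i, a1, 0.5 * a,
               (a1 > 0.5 * a) ? "ok" : "FAIL");
    }
    return 0;
}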
Read more at
http://cordic-bibliography.blogspot.com/p/double-iterations-in-cordic.html
and:
http://baykov.de/CORDIC1972.htm
(note I'm the author of those pages)

Noise-cancellation with undirected microphones and different amplifiers

I have the following setup https://sketchfab.com/show/7e2912f5f8794a7b96ef3ac5930e090a (It's a 3d viewer, use your mouse to view all angles)
The box has two nondirectional electret microphones (black dots). On the ground there are some elements falling down, like water or similar (symbolized by the sphere), creating noise. On top, someone is speaking into the box. Distances are roughly accurate, so the mouth is pretty close.
Inside the box there are two different amplifiers (but the same electret microphones) with two different amplification circuits (the mouth-facing one is louder in general and has some normalization circuitry integrated). Long story short, I can record this into a raw audio file at 44100 Hz, 16-bit, stereo, where the left channel is the upper and the right channel is the lower microphone amplifier output.
The goal is to subtract the lower microphone (facing the ground) from the upper microphone (facing the speaker) to achieve noise cancellation, even though the electret microphones are not directional and the amplifiers are different.
I tried the following (with Datei being the raw filename). It includes a high- or low-pass filter and a routine to put the final result back into a raw mono file (%s.neu.raw).
The problem is - well - undefinable distortion. I can hear my voice but it's not bearable at all. If you need a sample I can upload one.
EDIT: New code.
static void substractf(char *Datei)   /* needs <stdio.h>, <stdint.h>, <stdlib.h> */
{
    char ergebnis[80];
    sprintf(ergebnis, "%s.neu.raw", Datei);
    FILE* ausgabe = fopen(ergebnis, "wb");
    FILE* f = fopen(Datei, "rb");
    if (f == NULL)
        return;

    double g = 0.1;
    double RC = 1.0/(1215*2*3.14);
    double dt = 1.0/44100;
    double alpha = dt/(RC+dt);
    double noise_gain = 18.0;
    double voice_gain = 1.0;
    double signal = 0.0;   /* filter state: initialised once and kept across samples */

    struct {
        uint8_t noise_lsb;
        int8_t  noise_msb;
        uint8_t voice_lsb;
        int8_t  voice_msb;
    } sample;

    while (fread(&sample, sizeof sample, 1, f) == 1)
    {
        int16_t noise_source = sample.noise_msb * 256 + sample.noise_lsb;
        int16_t voice_source = sample.voice_msb * 256 + sample.voice_lsb;
        double difference_voice_noise = voice_gain*voice_source - noise_gain*noise_source;
        signal = (1.0 - alpha)*signal + alpha*difference_voice_noise;
        putc((char) ( (signed)signal & 0xff), ausgabe);
        putc((char) (((signed)signal >> 8) & 0xff), ausgabe);
    }
    fclose(f);
    fclose(ausgabe);

    char output[300];
    sprintf(output, "rm -frv \"%s\"", Datei);
    system(output);
}
Your code doesn't take differences of path length into consideration.
The path difference d2 – d1 between the sound source and the two mics corresponds to a time delay of (d2 – d1) / v, where v is the speed of sound (330 m/s).
Suppose d2 - d1 is equal to 10 cm. In this case, any sound wave whose frequency is a multiple of 3300 Hz (i.e., any wave for which the delay of (0.10/330) seconds is a whole number of periods) will be at exactly the same phase at both microphones. This is how you want things to be at all frequencies.
However, a sound wave at an odd multiple of half that frequency (1650 Hz, 4950 Hz, 8250 Hz, etc.) will have changed in phase by 180° by the time it reaches the second mic. As a result, your subtraction operation will actually have the opposite effect — you'll be boosting these frequencies instead of making them quieter.
The end result will be similar to what you get if you push all the alternate sliders on a graphic equaliser in opposite directions. This is probably what you're experiencing now.
Try estimating the length of this path difference and delaying the samples in one channel by a corresponding amount. At a sampling rate of 44100 Hz, one sample corresponds to about 0.75 cm of path difference. If the sound source is moving around, then things get a bit complicated: you'll have to find a way of estimating the path difference dynamically from the audio signals themselves.
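A minimal sketch of that idea, assuming the sample layout from the question and a hypothetical fixed delay of 13 samples (roughly 10 cm of path difference at 0.75 cm per sample); the constant would have to be tuned for the real geometry:

#include <stdint.h>

#define DELAY_SAMPLES 13   /* hypothetical: ~10 cm / 0.75 cm per sample */

/* Subtract the delayed noise channel from the voice channel, one sample at a time. */
int16_t cancel_sample(int16_t voice, int16_t noise)
{
    static int16_t  delay_line[DELAY_SAMPLES];
    static unsigned pos = 0;

    int16_t delayed_noise = delay_line[pos];   /* noise from DELAY_SAMPLES ago */
    delay_line[pos] = noise;
    pos = (pos + 1) % DELAY_SAMPLES;

    int32_t out = (int32_t)voice - delayed_noise;
    if (out >  32767) out =  32767;            /* clamp to the 16-bit range */
    if (out < -32768) out = -32768;
    return (int16_t)out;
}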
Ideas too big for a comment.
1) Looks like OP is filtering the l signal jetzt = vorher + (alpha*(l - vorher)) and then subtracting the r with dif = r - g*jetzt. It seems to make more sense to subtract l and r first and apply that difference to the filter.
float signal = 0.0;   /* (outside loop) */
...
float dif;
// Differential (with gain adjustments)
dif = gain_l*l - gain_r*r;
// Low pass filter (I may have this backwards)
signal = (1.0 - alpha)*signal + alpha*dif;
// I am not certain if dif or signal should be written,
// but testing the limit would be useful.
if ((dif > 32767) || (dif < -32767)) report();
int16_t sig = dif;
// I see no reason for the following test
// if (dif != 0)
putc((char) ( (unsigned)dif & 0xff), ausgabe);
putc((char) (((unsigned)dif >> 8) & 0xff), ausgabe);
2) The byte splicing may be off. Suggested simplification
// This assumes incoming data is little endian.
// Maybe data is in big endian and _that_ is OP's problem?
struct {
    uint8_t l_lsb;
    int8_t  l_msb;
    uint8_t r_lsb;
    int8_t  r_msb;
} sample;
...
while (fread(&sample, sizeof sample, 1, f) == 1) {
    int16_t left  = sample.l_msb * 256 + sample.l_lsb;
    int16_t right = sample.r_msb * 256 + sample.r_lsb;
3) Use of float vs. double. Usually the more limiting float creates computational noise, but the magnitude of OP's complaint suggests that this issue is unlikely to be the problem. Still worth considering.
4) The endianness of the 16-bit samples may be backwards. Further, depending on the A/D encoding, the samples may be 16-bit unsigned rather than 16-bit signed.
5) The phase of the two signals could be 180 degrees out from each other due to wiring and mic pick-up. If so, try dif = gain_l*l + gain_r*r.

Noise cancellation setup - combining the microphone's signals intelligently [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 9 years ago.
I built a noise cancellation setup with two microphones and two different microphone preamplifiers that go to two different channels of a stereo recording.
Here is a sample
http://filestore.to/?d=U5FN2IH96K
I tried
char ergebnis[80];
sprintf(ergebnis, "%s.neu.raw", Datei);
FILE* ausgabe = fopen(ergebnis, "wb");
FILE* f = fopen(Datei, "rb");
if (f == NULL)
{
    return;
}
int i = -1;
int r1 = 0;
int r2 = 0;
int l1 = 0;
int l2 = 0;
int l = 0;
int r = 0;
int wo = 0;
int dif = 0;
while (wo != EOF)
{
    wo = getc(f);
    i++;
    if (i == 0)
    {
        r1 = (unsigned)wo;
    }
    if (i == 1)
    {
        r2 = (unsigned)wo;
        r = (r2 << 8) + r1; //r1 | r2 << 8;
    }
    if (i == 2)
    {
        l1 = (unsigned)wo;
    }
    if (i == 3)
    {
        l2 = (unsigned)wo;
        l = (l2 << 8) + l1; //l1 | l2 << 8;
        dif = r - (l * 2);
        putc((char)( (unsigned)dif & 0xff), ausgabe);
        putc((char)(((unsigned)dif >> 8) & 0xff), ausgabe);
        i = -1;
    }
}
where the magic is supposed to happen in
dif = r - (l * 2);
But this does not eliminate the surrounding noise; all it does is create crackling sounds.
How could I approach this task with my setup instead? I prefer practical solutions over "read this paper only the author of the paper understands".
While we are at it, how do I normalize the final mono output to make it as loud as possible without clipping?
I don't know why you would expect this
dif = r - (l * 2);
to cancel noise, but I can tell you why it "create[s] crackling sounds". The value in dif is often going to be out of range of 16-bit audio. When this happens, your simple conversion function:
putc((char)( (unsigned)dif & 0xff), ausgabe);
putc((char)(((unsigned)dif >> 8) & 0xff), ausgabe);
will fail. Instead of a smooth curve, your audio will jump from large positive to large negative values. If that confuses you, maybe this post will help.
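One way to deal with the out-of-range values is to saturate (clamp) the result before splitting it into bytes; a minimal sketch:

#include <stdint.h>

/* Clamp an int to the signed 16-bit range before writing the two bytes. */
static int16_t clamp16(int v)
{
    if (v >  32767) return  32767;
    if (v < -32768) return -32768;
    return (int16_t)v;
}

/* ...then, in the loop:
   int16_t out = clamp16(dif);
   putc(out & 0xff, ausgabe);
   putc((out >> 8) & 0xff, ausgabe);
*/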
Even if you solve that problem, a few things aren't clear, not the least of which is that for active noise canceling to work you usually assume that one mike provides a source of noise and the other provides signal + noise. Which is which in this case? Did you just place two mikes next to each other and hope to hear some sound source with less ambient noise after some simple arithmetic? That won't work, since they are both hearing different combinations of signal and noise (not just in amplitude, but in time as well). So you need to answer:
1. Which mike is the source of signal and which is the source of noise?
2. What kind of noise are you trying to cancel?
3. What distinguishes the mikes in their ability to hear signal and noise?
4. etc.
Update: I'm still not clear on your setup, but here's something that might help:
You might have a setup where your signal is strong in one mike and weak in the other, and a noise is applied to both mikes. In all likelihood, there will be signal leakage into both mikes. Nevertheless, we will assume
l = noise1
r = signal + noise2
Note that I have not assumed the same noise values for l and r, this reflects the reality that the two mikes will be picking up different noise values due to time delays and other factors. However, it is often the case (and may or may not be the case in your setup) that noise1 and noise2 are correlated at low frequencies. Thus, if we have a low pass filter, lp, we can achieve some noise reduction in the low frequencies as follows:
out = r - lp(l) = signal + noise2 - lp(noise1)
This, of course, assumes that the noise level at l and r is the same, which it may or may not be, depending on your setup. You may want to leave a parameter for this purpose, for manual tuning at the end:
out = r - g*lp(l)
where g is your tuning parameter and close to 1. I believe in some high-end noise reduction systems, g is constantly tuned automatically.
Selecting a cutoff frequency for your lp filter is all that remains. An approximation you could use is that the highest frequency you can cancel has a wavelength equal to 1/4 the distance between the mikes. Of course, I'm REALLY waving my arms with that, because it depends a lot on where the sound is coming from, how directional your mikes are and so on, but it's a starting point.
Sample calculation for mikes that are 3 inches apart:
Speed of sound = 13,397 inches/sec
desired wavelength = 4 * 3 inches = 12 inches
frequency = 13,397 / 12 = 1116 Hz
So your filter should have a cutoff frequency of 1116 Hz if the mikes are 3 inches apart.
Expect this setup to cancel a significant amount of your signal below the cutoff frequency as well, if there is bleed.
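For illustration, a minimal sketch of the out = r - g*lp(l) idea using a one-pole low-pass filter; the cutoff, gain and sample rate below are placeholders taken from the example above and would need tuning against the actual recording:

#include <stdint.h>
#include <math.h>

#define SAMPLE_RATE 44100.0
#define CUTOFF_HZ    1116.0   /* from the 3-inch example above */
#define NOISE_GAIN      1.0   /* the tuning parameter g */

/* Process one stereo sample pair: l = noise mike, r = signal + noise mike. */
int16_t cancel(int16_t l, int16_t r)
{
    static double lp_state = 0.0;   /* low-pass filter state */
    double alpha = 1.0 - exp(-2.0 * 3.141592653589793 * CUTOFF_HZ / SAMPLE_RATE);

    lp_state += alpha * ((double)l - lp_state);      /* lp(l) */
    double out = (double)r - NOISE_GAIN * lp_state;  /* r - g*lp(l) */

    if (out >  32767.0) out =  32767.0;              /* clamp to 16-bit */
    if (out < -32768.0) out = -32768.0;
    return (int16_t)out;
}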

Identifying a trend in C - Micro controller sampling

I'm working on an MC68HC11 microcontroller and have an analogue voltage signal going in that I have sampled. The scenario is a weighing machine: the large peaks are when the object hits the sensor, then it stabilises (these are the samples I want), and then it peaks again before the object rolls off.
The problem I'm having is figuring out a way for the program to detect this stable point and average it to produce an overall weight, but I can't figure out how :/. One approach I have thought of is comparing previous values to see whether there is a large difference between them, but I haven't had any success. Below is the C code that I am using:
#include <stdio.h>
#include <stdarg.h>
#include <iof1.h>

void main(void)
{
    /* PORTA, DDRA, DDRG etc... are LEDs and switch ports */
    unsigned char *paddr, *adctl, *adr1;
    unsigned short i = 0;
    unsigned short k = 0;
    unsigned char switched = 1; /* is char the smallest data type? */
    unsigned char data[2000];

    DDRA = 0x00; /* All in */
    DDRG = 0xff;
    adctl = (unsigned char*) 0x30;
    adr1 = (unsigned char*) 0x31;
    *adctl = 0x20; /* single continuous scan */

    while(1)
    {
        if(*adr1 > 40)
        {
            if(PORTA == 128) /* Debugging switch */
            {
                PORTG = 1;
            }
            else
            {
                PORTG = 0;
            }
            if(i < 2000)
            {
                while(((*adctl) & 0x80) == 0x00)
                    ;   /* wait for the conversion-complete flag */
                data[i] = *adr1;
                /* if(i > 10 && (data[(i-10)] - data[i]) < 20) */
                i++;
            }
            if(PORTA == switched)
            {
                PORTG = 31;
                /* Print a delimiter so teemtalk can send to excel */
                for(k = 0; k < 2000; k++)
                {
                    printf("%d,", data[k]);
                }
                if(switched == 1) /* bitwise manipulation more efficient? */
                {
                    switched = 0;
                }
                else
                {
                    switched = 1;
                }
                PORTG = 0;
            }
            if(i >= 2000)
            {
                i = 0;
            }
        }
    }
}
Look forward to hearing any suggestions :)
(The graph below shows how these values look; the red box is the area I would like to identify.)
As your sample sequence has glitches (short-lived transients), try to improve the hardware first, i.e. change the layout, add decoupling, add filtering, etc.
If that approach fails, then try a median filter [1] of, say, five places, which takes the last five samples, sorts them and outputs the middle one, so that two samples of the transient have no effect on its output (seven places ... three transient samples).
Then a computationally efficient exponential averaging lowpass filter [2]
y(n) = y(n-1) + alpha*[x(n) - y(n-1)]
choosing alpha (1/2^n, so the division can be done with right shifts) to yield a time constant [3] shorter than the underlying response (~50 samples) but still filter out the noise. Increasing the number of effective fractional bits will avoid quantizing issues.
With this improved sample sequence, thresholds and cycle counts can be applied to detect quiescent durations.
Additionally, if the end of the quiescent period is always followed by a large, abrupt change, then using a sample delay "array" enables detection of the abrupt change while still having the last of the quiescent samples available for logging.
[1] http://en.wikipedia.org/wiki/Median_filter
[2] http://www.dsprelated.com/showarticle/72.php
[3] http://en.wikipedia.org/wiki/Time_constant
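A minimal sketch of those two stages (a 5-point median followed by a right-shift exponential average, here on 8-bit ADC samples widened to 8.8 fixed point; the shift amount is an assumption to be tuned for the desired time constant):

#include <stdint.h>
#include <string.h>

#define ALPHA_SHIFT 3   /* alpha = 1/2^3; tune against the scale's response */

/* 5-point median filter: sort a copy of the last five samples, return the middle one. */
static uint8_t median5(const uint8_t w[5])
{
    uint8_t s[5];
    memcpy(s, w, sizeof s);
    for (int i = 0; i < 4; i++)
        for (int j = i + 1; j < 5; j++)
            if (s[j] < s[i]) { uint8_t t = s[i]; s[i] = s[j]; s[j] = t; }
    return s[2];
}

/* Exponential average in 8.8 fixed point: y += (x - y) >> ALPHA_SHIFT */
static uint16_t smooth(uint8_t x)
{
    static int32_t y = 0;                     /* 8.8 fixed-point state */
    y += (((int32_t)x << 8) - y) >> ALPHA_SHIFT;
    return (uint16_t)y;                       /* still 8.8; shift right by 8 for the integer part */
}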
Note
Adding code for the above filtering operations will lower the maximum possible sample rate, but printf can be substituted with something faster.
Continuously store the current value and the delta from the previous value.
Note when the delta is decreasing, as the start of weight application to the scale.
Note when the delta is increasing, as the end of weight application to the scale.
Take the X number of values with the small delta and average them (a sketch follows below).
BTW, I'm sure this has been done 1M times before; I'm thinking that a search for scale PID or weight PID would find a lot of information.
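A minimal sketch of that delta-threshold idea (the threshold and window length are hypothetical values to be tuned against real data):

#include <stdint.h>
#include <stdlib.h>

#define DELTA_THRESHOLD 3    /* "small" change between consecutive samples */
#define STABLE_COUNT   50    /* how many quiet samples make a stable region */

/* Feed samples one at a time; returns 1 and writes the average when a stable
   region of STABLE_COUNT low-delta samples has been seen. */
int stable_average(uint8_t sample, uint8_t *avg_out)
{
    static uint8_t  prev = 0;
    static uint32_t sum = 0;
    static uint16_t count = 0;

    int delta = abs((int)sample - (int)prev);
    prev = sample;

    if (delta <= DELTA_THRESHOLD) {
        sum += sample;
        if (++count >= STABLE_COUNT) {
            *avg_out = (uint8_t)(sum / count);
            sum = 0;
            count = 0;
            return 1;               /* stable region found */
        }
    } else {
        sum = 0;                    /* large delta: restart the window */
        count = 0;
    }
    return 0;
}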
Don't forget to use the ___delay_ms(XX) function somewhere between readings if you are comparing with the previous value; otherwise the difference at each step will obviously be small, since the code loops continuously.
Looking at your nice graphs, I would say you should look only for the falling edge; it is much more consistent than the leading edge.
In other words, let the samples accumulate and calculate the running average all the time with a predefined window size, remembering the deviation of the previous values just for reference. Check for a large negative bump in your values (say an absolute value ten times smaller than the current running average); your running average is then your value. You could go back a little (disregarding the last few values in your average and recalculating) to compensate for the small positive bump visible in your picture before each negative bump. No need for heavy math here; you could not model reality better than your picture has shown. Just make sure that your code detects the end of each and every sample, and that you are fast enough that no negative bump is missed (or you will have a large error in your data averaging).
And you don't need such large arrays; the running average is better based on a smaller window size, which also gives a smaller residual error in your case when you detect the negative bump.

Auriotouch, get musical note from frequency FFT

I'm developing a kind of guitar tuner.
I have a function that gives me the FFT, and the values of the FFT for each frequency.
How do I get the musical note from there? Do I have to choose the highest peak?
for(y = 0; y < maxY; y++){
    CGFloat yFract = (CGFloat)y / (CGFloat)(maxY - 1);
    CGFloat fftIdx = yFract * ((CGFloat)fftLength);
    double fftIdx_i, fftIdx_f;
    fftIdx_f = modf(fftIdx, &fftIdx_i);
    SInt8 fft_l, fft_r;
    CGFloat fft_l_fl, fft_r_fl;
    CGFloat interpVal;
    fft_l = (fftData[(int)fftIdx_i] & 0xFF000000) >> 24;
    fft_r = (fftData[(int)fftIdx_i + 1] & 0xFF000000) >> 24;
    fft_l_fl = (CGFloat)(fft_l + 80) / 64.;
    fft_r_fl = (CGFloat)(fft_r + 80) / 64.;
    interpVal = fft_l_fl * (1. - fftIdx_f) + fft_r_fl * fftIdx_f;
    interpVal = CLAMP(0., interpVal, 1.);
    drawBuffers[0][y] = (interpVal * 120);
    //NSLog(@"The magnitude for %f Hz is %f.", (yFract * hwSampleRate * .5), (interpVal * 120));
}
Thanks a lot if you can help.
Julien.
This is a non-trivial problem, for several reasons:
The peak may not correspond to the fundamental harmonic (it may even be missing).
The fundamental harmonic will probably not land precisely on the centre of an FFT bin, so its energy will be spread across multiple bins. You would need to do interpolation to estimate the actual frequency.
Unless you perform some kind of windowing, you will get "spectral leakage" effects, which will smear your spectrum all over the place, making it hard to discern details.
I appreciate that this doesn't really answer your question, but it should highlight the fact that this is actually a pretty tricky thing to do well.
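As a side note on the "which note is it" part: once you do have an estimated fundamental frequency in Hz (by whatever method), mapping it to the nearest equal-tempered note is straightforward. A minimal sketch, assuming A4 = 440 Hz tuning:

#include <math.h>
#include <stdio.h>

/* Map a frequency in Hz to the nearest MIDI note and its name (A4 = 440 Hz = MIDI 69). */
void note_from_frequency(double freq_hz)
{
    static const char *names[12] =
        { "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B" };

    double midi = 69.0 + 12.0 * log2(freq_hz / 440.0);
    int nearest = (int)lround(midi);
    double cents = (midi - nearest) * 100.0;   /* how far off pitch the input is */

    printf("%s%d (%+.1f cents)\n", names[nearest % 12], nearest / 12 - 1, cents);
}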
