I have to say I am surprised by the limitations of LabVIEW FPGA when it comes to arrays. I am not experienced enough in LabVIEW to criticize, but I found arrays very difficult to work with!
I am working on an Active Noise Cancellation project. I need to collect audio data from two microphones at a 40 kHz sample rate, with at least 100 samples per frame, and output the audio through a loudspeaker at the same 40 kHz rate. For this purpose I am using a myRIO-1900 with the FPGA High Throughput personality.
Right now I am trying to implement the LMS algorithm on the LabVIEW FPGA platform. I have attached below the MATLAB code I want to implement.
The array initializations are fine, but I am stuck on the temporary vector manipulations, such as:
x_temp(1:nr_c-1)=x_temp(2:nr_c);
x_temp(nr_c)=x(i);
I am going mad! How do I do this? LabVIEW does not let me index 1D arrays properly: I can extract one particular element of an array, but not a slice. I can extract x_temp(1:nr_c) with the Array Subset function, but how do I extract x_temp(2:nr_c)?
Please help me carry out the manipulation of the x_temp vector expressed by the two statements above.
PS:
1. LabVIEW FPGA supports only one-dimensional array operations.
2. Even though the Adaptive Filter Toolkit palettes are available in the myRIO FPGA High Throughput personality, I cannot install the Adaptive Filter Toolkit on the myRIO target.
3. In the MyLMS program, x is the input vector (array), y is the desired signal, nr_c is the number of filter coefficients, and step is the step size. I hope the rest of the program is self-explanatory. :)
MyLMS.m:
function [y_hat, e, w] = MyLms(step, nr_c, x, y)
% initializing all vectors
coeffs = zeros(1, nr_c);
x_temp = zeros(1, nr_c);
y_hat = zeros(length(x), 1);
e = zeros(length(x), 1);
for i = 1:length(x)
    % temporary vector formation: shift left, append the newest sample
    x_temp(1:nr_c-1) = x_temp(2:nr_c);
    x_temp(nr_c) = x(i);
    % LMS algorithm iterations
    y_hat(i) = x_temp * coeffs';
    e(i) = y(i) - y_hat(i);
    coeffs = coeffs + step * e(i) * x_temp;
end
w = coeffs; % return the final filter weights
I am by no means a LabVIEW FPGA expert, but I think you would encounter very similar limitations in VHDL.
At any rate, the typical technique for accessing subarrays is to use Rotate 1D Array and Replace Array Subset (see Joel_G's post). For more information about the FPGA constraints and capabilities, see the help document: Array Palette Details (FPGA Module).
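For reference, those two MATLAB statements amount to a one-position left shift followed by writing the newest sample into the last slot, which is exactly one rotate followed by one subset replacement on a fixed-size array. Below is a minimal C++ sketch of that buffer update, purely to illustrate the operation (the NR_C constant and function name are mine, not LabVIEW API):

#include <algorithm>
#include <array>

constexpr int NR_C = 32; // illustrative filter length; fixed size, as an FPGA target requires

// Equivalent of: x_temp(1:nr_c-1) = x_temp(2:nr_c); x_temp(nr_c) = x_i;
void push_sample(std::array<float, NR_C>& x_temp, float x_i)
{
    // "Rotate 1D Array": shift every element one position toward the front
    std::rotate(x_temp.begin(), x_temp.begin() + 1, x_temp.end());
    // "Replace Array Subset": overwrite the last element with the newest sample
    x_temp[NR_C - 1] = x_i;
}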
As an OpenCASCADE newbie, I am reading the OpenCASCADE tutorial:
https://www.opencascade.com/doc/occt-7.4.0/overview/html/occt__tutorial.html
There are the following two curious calls:
BRepLib::BuildCurves3d(threadingWire1);
BRepLib::BuildCurves3d(threadingWire2);
The tutorial explains the need for these two calls in this way:
Remember that these wires were built out of a surface and 2D curves. One important data item is missing as far as these wires are concerned: there is no information on the 3D curves. Fortunately, you do not need to compute this yourself, which can be a difficult task since the mathematics can be quite complex. When a shape contains all the necessary information except 3D curves, Open CASCADE Technology provides a tool to build them automatically. In the BRepLib tool package, you can use the BuildCurves3d method to compute 3D curves for all the edges of a shape.
which I did not find entirely clear.
Imagine that I have constructed some TopoDS_Shape object.
How can I, in general, figure out whether a BRepLib::BuildCurves3d call is necessary or not?
With this code you can get the 3D curve of an edge (taken from BRepExtrema_DistanceSS.cxx):
Standard_Real aFirst, aLast;
Handle(Geom_Curve) pCurv = BRep_Tool::Curve(E, aFirst, aLast);
If you have not created the 3D curves, pCurv will be a null handle. Using it will result in segmentation faults.
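Building on that, one general way to answer "is BuildCurves3d necessary?" is to walk every edge of the shape and test whether BRep_Tool::Curve returns a null handle. A hedged C++ sketch (the calls are standard OCCT, but the helper name and usage are mine):

#include <BRep_Tool.hxx>
#include <BRepLib.hxx>
#include <Geom_Curve.hxx>
#include <TopExp_Explorer.hxx>
#include <TopoDS.hxx>
#include <TopoDS_Shape.hxx>

// Illustrative helper: true if every edge of `shape` carries a 3D curve.
bool hasAllCurves3d(const TopoDS_Shape& shape)
{
    for (TopExp_Explorer exp(shape, TopAbs_EDGE); exp.More(); exp.Next())
    {
        Standard_Real first, last;
        Handle(Geom_Curve) curve =
            BRep_Tool::Curve(TopoDS::Edge(exp.Current()), first, last);
        if (curve.IsNull())
            return false; // this edge has only 2D (curve-on-surface) data
    }
    return true;
}

// Usage: build the missing 3D curves only when actually needed.
// if (!hasAllCurves3d(myShape)) BRepLib::BuildCurves3d(myShape);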
I was curious about where the 3D curves are actually used, so I tried several algorithms. These are the algorithms I tried where the 3D curves are not used:
Visualizing
Export to BREP
Export to STEP
Length Measurement
Checking Whether a Wire Is Closed or Ordered
The only algorithm I have found where the 3D curves are used is extrema/distance computation with BRepExtrema_DistShapeShape. You will not be able to use this class if you have not created the 3D curves.
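For illustration, a minimal distance query with that class looks roughly like this (standard OCCT calls; the shape variables and the error convention are placeholders of mine):

#include <BRepExtrema_DistShapeShape.hxx>
#include <TopoDS_Shape.hxx>

// Minimal distance query between two shapes; both must have 3D curves
// on their edges, or the algorithm will fail (see above).
double minimumDistance(const TopoDS_Shape& s1, const TopoDS_Shape& s2)
{
    BRepExtrema_DistShapeShape dist(s1, s2);
    return dist.IsDone() ? dist.Value() : -1.0; // -1.0 as an error marker
}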
Using the Windows API, I want to implement something like the following:
i.e. getting the current microphone input level.
I am not allowed to use external audio libraries, but I can use Windows libraries. So I tried the waveIn functions, but I do not know how to process the audio input data in real time.
This is the method I am currently using:
Record for 100 milliseconds
Select highest value from the recorded data buffer
Repeat forever
But I think this is way too hacky, and not a recommended way. How can I do this properly?
Having built a tuning wizard for a very dated, but well known, A/V conferencing application, I can say that what you describe is nearly identical to what I did.
A few considerations:
Enqueue 5 to 10 of those 100 ms buffers into the audio device via waveInAddBuffer. IIRC, weird things happen when the waveIn queue goes empty. Then, as the waveInProc callbacks occur, search the completed buffer for the sample with the highest absolute value, as you describe, plot that onto your visualization, and requeue the completed buffer.
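To make that concrete, here is a hedged skeleton of that buffer-rotation scheme (error handling omitted; the buffer count, format, and function names are illustrative, not a drop-in implementation):

#include <windows.h>
#include <mmsystem.h>
#pragma comment(lib, "winmm.lib")

const int kBufferCount   = 8;    // 5..10 buffers in flight, per the advice above
const int kSamplesPerBuf = 4410; // 100 ms at 44.1 kHz, 16-bit mono

void CALLBACK waveInProc(HWAVEIN hwi, UINT msg, DWORD_PTR, DWORD_PTR p1, DWORD_PTR)
{
    if (msg != WIM_DATA)
        return;
    WAVEHDR* hdr = (WAVEHDR*)p1;
    const short* samples = (const short*)hdr->lpData;
    int count = (int)(hdr->dwBytesRecorded / sizeof(short));

    int peak = 0; // highest absolute sample value in this completed buffer
    for (int i = 0; i < count; ++i) {
        int v = samples[i] < 0 ? -samples[i] : samples[i];
        if (v > peak) peak = v;
    }
    // ...hand `peak` to the visualization here...

    // Requeue so the device never runs dry. (Strictly, MSDN advises doing
    // this from a worker thread rather than inside the callback.)
    waveInAddBuffer(hwi, hdr, sizeof(WAVEHDR));
}

void startCapture()
{
    WAVEFORMATEX fmt = {};
    fmt.wFormatTag      = WAVE_FORMAT_PCM;
    fmt.nChannels       = 1;
    fmt.nSamplesPerSec  = 44100;
    fmt.wBitsPerSample  = 16;
    fmt.nBlockAlign     = fmt.nChannels * fmt.wBitsPerSample / 8;
    fmt.nAvgBytesPerSec = fmt.nSamplesPerSec * fmt.nBlockAlign;

    HWAVEIN hwi;
    waveInOpen(&hwi, WAVE_MAPPER, &fmt, (DWORD_PTR)waveInProc, 0, CALLBACK_FUNCTION);

    static short   data[kBufferCount][kSamplesPerBuf];
    static WAVEHDR hdrs[kBufferCount] = {};
    for (int i = 0; i < kBufferCount; ++i) {
        hdrs[i].lpData         = (LPSTR)data[i];
        hdrs[i].dwBufferLength = sizeof(data[i]);
        waveInPrepareHeader(hwi, &hdrs[i], sizeof(WAVEHDR));
        waveInAddBuffer(hwi, &hdrs[i], sizeof(WAVEHDR));
    }
    waveInStart(hwi);
}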
It might seem obvious to map the sample value onto your visualization linearly, as follows.
For example, to plot a 16-bit sample
// convert sample magnitude from 0..32768 to 0..N
length = (sample * N) / 32768;
DrawLine(length);
But then when you speak into the microphone, that visualization won't seem as "active" or "vibrant".
A better approach is to give more weight to the lower-energy samples. An easy way to do this is to re-plot along the μ-law curve (or use a table lookup):
length = (sample * N) / 32768;              // linear 0..N, as before
length = N * log(1 + length) / log(1 + N);  // compress along a log (μ-law-like) curve
length = min(length, N);                    // clamp to the plot range
DrawLine(length);
You can tweak the above approach to whatever looks good.
Instead of computing the values yourself, you can rely on values from Windows. These are actually the values displayed in your screenshot of the Windows Settings.
See the following sample for the IAudioMeterInformation interface:
https://learn.microsoft.com/en-us/windows/win32/coreaudio/peak-meters.
It is written for playback, but you can use it for capture as well.
A few remarks: if you open IAudioMeterInformation for a microphone while no application has a stream open on that microphone, the level will be 0. That means that to display your microphone's peak meter, you still need to open a microphone stream, as you already did.
Also, read the documentation on IAudioMeterInformation; it may not be what you need, since it reports the peak value. It depends on what you want to do with it.
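A minimal sketch of reading that meter from the default capture endpoint (COM error handling omitted; these are standard Core Audio interfaces, and the polling loop is only illustrative):

#include <windows.h>
#include <mmdeviceapi.h>
#include <endpointvolume.h>
#include <cstdio>

int main()
{
    CoInitialize(nullptr);

    // Get the default capture endpoint (the microphone).
    IMMDeviceEnumerator* devEnum = nullptr;
    CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                     __uuidof(IMMDeviceEnumerator), (void**)&devEnum);
    IMMDevice* device = nullptr;
    devEnum->GetDefaultAudioEndpoint(eCapture, eConsole, &device);

    // Activate the meter directly on the endpoint.
    IAudioMeterInformation* meter = nullptr;
    device->Activate(__uuidof(IAudioMeterInformation), CLSCTX_ALL,
                     nullptr, (void**)&meter);

    // Poll the peak (0.0 .. 1.0). As noted above, it stays at 0 unless
    // some application holds a capture stream open on this device.
    for (int i = 0; i < 100; ++i) {
        float peak = 0.0f;
        meter->GetPeakValue(&peak);
        printf("peak = %.3f\n", peak);
        Sleep(100);
    }

    meter->Release();
    device->Release();
    devEnum->Release();
    CoUninitialize();
    return 0;
}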
This is my first post on SO. I haven't developed much code for embedded systems yet, but I have a few problems and need help from more advanced programmers. I use the following devices:
- LandTiger board with LPC1768 (Cortex M3) MCU,
- Digilent pmodACL with ADXL345 accelerometer (3 axis),
- Digilent pmodGYRO with L3G4200D gyroscope (3 axis).
I would like to get some information about device orientation, i.e. the rotation angles about the X, Y, and Z axes. I've read that in order to achieve this I need to combine data from both the accelerometer and the gyroscope using a Kalman filter or its simpler form, the complementary filter. I would like to know if it's possible to count roll, pitch and yaw from full range (0-360 degrees) using measurement data only from the gyroscope and accelerometer (without a magnetometer). I've also found some mathematical formulas (http://www.ewerksinc.com/refdocs/Tilt%20Sensing%20with%20LA.pdf and http://www.freescale.com/files/sensors/doc/app_note/AN3461.pdf), but they contain square roots in the numerators/denominators, so the information about the proper quadrant of the coordinate system is lost.
The question you are asking is a fairly frequent one, and is fairly complex, with many different solutions.
Although the title mentions only an accelerometer, the body of your post mentions a gyroscope, so I will assume you have both. In addition, there are many steps to getting low-cost accelerometers and gyros to work; one of them is the voltage-to-measurement conversion. I will not cover that part.
First and foremost I will answer your question.
You asked if by 'counting' the gyro measurements you can estimate the attitude (orientation) of the device in Euler Angles.
Short answer: Yes, you could sum the scaled gyro measurements to get a very noisy estimate of the device rotation (actual radians turned, not attitude); however, it would fail as soon as you rotate about more than one axis. This will not suffice for most applications.
I will provide you with some general advice, specific knowledge, and some example code that I have used before.
Firstly, you should not try to solve this problem by writing a program and testing with your IMU. You should start by writing a simulation using validated libraries, then validate your algorithm/program, and only then try to implement it with the IMU.
Secondly, you say you want to "count roll, pitch and yaw from full range (0-360 degrees)".
From this I assume you mean you want to be able to determine the Euler Angles that represent the attitude of the device with respect to an external stationary North-East-Down (NED) frame.
Your statement makes me think you are not familiar with representations of attitude, because as far as I know there are no Euler Angle representations with all 3 angles in the 0-360 range.
The application for which you want to use the attitude of the device will be important. If you are using Euler Angles you will not be able to accurately track the attitude of the device when large (greater than around 50 degrees) rotations are made on the roll or pitch axes, due to what is known as Gimbal Lock.
If you require the tracking of such motions then you will need to use a quaternion or Direction Cosine Matrix (DCM) representation of attitude.
Thirdly, as you have said, you can use a complementary filter or a Kalman filter variant (Extended Kalman Filter, Error-State Kalman Filter, Indirect Kalman Filter) to accurately track the attitude of the device by fusing the data from the accelerometer, gyro, and a magnetometer. I suggest the complementary filter described by Madgwick, which is implemented in C, C# and MATLAB here. A Kalman filter variant would be necessary if you wanted to track the position of the device and had an additional sensor such as GPS.
For some example code of mine that uses the accelerometer only to get Euler angle pitch and roll, see my answer to this other question.
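To make the fusion idea concrete, here is a minimal sketch of a generic complementary filter for roll and pitch only. This is the textbook form, not Madgwick's algorithm, and the gain and sample period are illustrative:

#include <cmath>

// Generic complementary filter for roll/pitch, in radians.
// Gyro rates are in rad/s; the accelerometer can be in any consistent unit.
struct Attitude { float roll = 0.0f, pitch = 0.0f; };

void update(Attitude& att,
            float gx, float gy,           // gyro rates about X (roll) and Y (pitch)
            float ax, float ay, float az, // accelerometer axes
            float dt,                     // sample period, e.g. 0.01 s
            float alpha = 0.98f)          // how much to trust the gyro integration
{
    // Gravity-referenced angles from the accelerometer: noisy but drift-free.
    float rollAcc  = atan2f(ay, az);
    float pitchAcc = atan2f(-ax, sqrtf(ay * ay + az * az));

    // Blend the integrated gyro (smooth but drifting) with the accel reference.
    att.roll  = alpha * (att.roll  + gx * dt) + (1.0f - alpha) * rollAcc;
    att.pitch = alpha * (att.pitch + gy * dt) + (1.0f - alpha) * pitchAcc;
}

Note that yaw cannot be corrected this way: gravity carries no heading information, which is why drift-free yaw needs a magnetometer or another heading reference, as stated above.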
I'm working on an audio processor which is supposed to change the pitch of and add a vibrato to a song while it is playing. Note that although the sound isn't live (e.g. from a microphone), the effects I want to add must be applied in real time.
I found that the best way to approach this would be to divide the sound file into small segments and apply the effects to each one in order.
So I wrote this to divide the audio file:
%Load Sound File
[fsample Fs] = wavread ('C:\Users\Ogulcan\Desktop\Bitirme Projesi\Codes\kravitz.wav');
%length of the sample
t=length(fsample);
%number of samples
ns=10;
%Defining the array:
A=[];
%Create the vectors and place them into the array 'A':
for i = 1:ns
    v = fsample((i-1)*t/ns+1 : i*t/ns); % the i-th segment (assumes t is divisible by ns)
    A = [A, v]; % collect the segments as columns of A
end
This code works and divides the sound into 10 segments, but when I try to play them in a loop there is a small but noticeable delay, and I plan to use much larger segment counts.
Can anyone help me with this speed issue? I don't know any language other than MATLAB, nor do I have the required software for anything else, so I would appreciate it if you could show me a way to do this in MATLAB.
You can replace the entire loop with the following simpler snippet:
A = fsample(1:end-rem(length(fsample),ns)); % trim so the length is a multiple of ns
A = reshape(A, [], ns)'; % one segment per row, with no loop or growing array
I recently asked this question:
I am looking for an algorithm to detect pitch. One of the answers suggested that I use an initial FFT to get the basic frequency response, figure out which frequencies are being voiced, and follow it up with a band-pass filter in each area of interest:
A slightly advanced algorithm could do something like this:
Roughly detect pitch frequency (could be done with DFT).
Band-pass filter the signal to isolate the pitch frequency.
Count the number of samples between two peaks in the filtered signal.
Now I can do the first step okay (I am coding for iOS, and Apple's Accelerate framework handles FFTs, etc.).
I have made a start here, but I can see the problem: an FFT that could differentiate all of the possible notes one could sing would require a lot of samples, and I don't want to perform too much unnecessary computation, since I'm targeting a mobile device.
So I'm trying to get my head around the answer above, but I don't understand how I could apply the concept of a band-pass filter in code.
Can anyone help?
Filter design is pretty complex. There are many techniques. First you have to decide what kind of filter you want to create: finite impulse response (FIR) or infinite impulse response (IIR)? Then you select an algorithm for designing a filter of that type. The Remez algorithm is often used for FIR filter design. Go here to see the complexity I was referring to: http://en.wikipedia.org/wiki/Remez_algorithm
Your best bet for creating a filter is to use an existing signal processing library. A quick Google search led me here: http://spuc.sourceforge.net/
Given what your application is, you may want to read about matched filters. I am not sure if they are relevant here, but they might be. http://en.wikipedia.org/wiki/Matched_filter
Well, check Wikipedia's articles on the low-pass filter and the high-pass filter, then join the two to make a band-pass filter, as sketched after the links below. Wikipedia has code implementations for both filters.
http://en.wikipedia.org/wiki/Low-pass_filter
http://en.wikipedia.org/wiki/High-pass_filter
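As a concrete illustration of joining the two, here is a sketch that cascades the one-pole low-pass and high-pass filters from those articles into a simple band-pass (the coefficients are illustrative and must be derived from your two cutoff frequencies and sample rate):

// One-pole filters as on the Wikipedia pages, cascaded into a band-pass.
struct BandPass
{
    float aLow;          // low-pass coefficient:  dt / (RC_low + dt)
    float aHigh;         // high-pass coefficient: RC_high / (RC_high + dt)
    float lpOut  = 0.0f;
    float hpOut  = 0.0f;
    float lastIn = 0.0f;

    float process(float x)
    {
        // High-pass stage: removes content below the lower cutoff.
        hpOut  = aHigh * (hpOut + x - lastIn);
        lastIn = x;
        // Low-pass stage: removes content above the upper cutoff.
        lpOut += aLow * (hpOut - lpOut);
        return lpOut;
    }
};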
Since you only want to detect a single frequency, it would be overkill to perform a DFT and then use only one of its output values.
You could implement the Goertzel algorithm instead. Here is a C implementation used to detect DTMF tones over a phone line, taken from the FreePBX source code:
float goertzel(short x[], int nmax, float coeff) {
    float s, power;
    float sprev, sprev2;
    int n;
    sprev = 0;
    sprev2 = 0;
    for (n = 0; n < nmax; n++) {
        s = x[n] + coeff * sprev - sprev2;
        sprev2 = sprev;
        sprev = s;
    }
    power = sprev2 * sprev2 + sprev * sprev - coeff * sprev * sprev2;
    return power;
}
As you can see, the implementation is fairly trivial and quite effective for single frequencies. Check the link for versions with and without floating point, and for how to use it.
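For completeness, here is a sketch of how the function above is typically driven; the coefficient comes from the frequency you want to detect, and the frequency, sample rate, and threshold below are illustrative assumptions:

#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Sketch: does one block of 8 kHz samples contain a 440 Hz tone?
   Uses the goertzel() function above. */
int tone_present(short samples[], int nmax)
{
    const float target_freq = 440.0f;  /* frequency of interest */
    const float sample_rate = 8000.0f;
    /* Standard Goertzel coefficient for that frequency. */
    const float coeff = 2.0f * cosf(2.0f * (float)M_PI * target_freq / sample_rate);
    const float power = goertzel(samples, nmax, coeff);
    /* The threshold is application-dependent: compare against the total
       block energy or a calibrated constant. */
    return power > 1.0e8f;
}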