I have run 400 experiments and collected a data file for each. I am writing code that lets me read all the files in MATLAB, analyze them, and store the results in an array; otherwise I would have to process them individually, which would take forever.
Code:
clear
clc
close all
myfolder='C:\Users\Surface\Desktop\New_folder\';
filepattern=fullfile(myfolder,'*.txt'); % note the '*' wildcard; '.txt' alone matches nothing
thefiles=dir(filepattern);
Fs=500; % Sampling frequency
Xrms=zeros(1,length(thefiles)); % Preallocate the results array
for k=1:length(thefiles) % single loop index; do not reuse it in a nested loop
baseFileName=thefiles(k).name;
fullFileName=fullfile(myfolder,baseFileName);
fprintf(1,'Now reading %s\n',fullFileName);
% Process each file directly and store its RMS value
[t,X,Xrms(k)]=process_data(fullFileName);
% Extract the data for the time period [ts tf]
ts=0;
tf=3;
t=t(ts*Fs+1:tf*Fs+1);
X=X(ts*Fs+1:tf*Fs+1);
end
% FFT analysis to see the natural frequency (uses the last file's data)
fftplot(1/Fs,X);
% Find the damping ratio
[zeta, zeta_m]=finddamping(t,X)
I was wondering where I went wrong.
Related
I'm running mcmc on a simulation where I know the true parameter values. Suppose I use 100 walkers and 10,000 steps in the chain. The bulk of the walkers converge to the correct parameter values, but ~3 walkers (out of the 100) just trail off into the distance, not converging. Not sure what could be at play here, other than a potential error on my end.
I've been working on a synthesizer project for the past few weeks, implementing a set of C based numerically controlled oscillators that feed their output to a DAC on an FPGA.
One thing that I tried was to use a lookup table to more efficiently determine the sine values of a given tone. For example, consider the following code:
PhaseArray[NOTE1-1][idx] = ((PhaseArray[NOTE1-1][!idx] + SigmaArray[NOTE1-1]) % (MODULO_CONST) );
....
PhaseDivArray[NOTE1-1][idx] = PhaseArray[NOTE1-1][idx] >> 10;
....
audio = ((iNoteOn[NOTE1-1]) * (SINE_TABLE[PhaseDivArray[NOTE1-1][idx]])) +
....
Now this is the thing that confuses me. I have a fair number of phase accumulators running at the same time without issue. I can get more than a dozen notes to play correctly when I don't bother with the sine lookup and just use an effective square wave signal.
But the second I start using the SINE_TABLE[1024] lookup I have defined (a static table of unsigned 16-bit integer values for the sine curve), the slowdown is immediate, to the point where the same microcontroller struggles to produce 3 tones at the right speed for buffered playback.
What is it that causes the lookup table to be so inefficient? Is it something to do with the way the table might be defined in memory?
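Before blaming the table itself, it can help to pin down what the lookup path computes. Below is a sketch in Python of the same fixed-point phase-accumulator-plus-LUT arithmetic; the 1024-entry table and the unsigned 16-bit sample range come from the question, while the 32-bit accumulator width and the tuning-word convention are assumptions for illustration (the question's `>> 10` implies a smaller accumulator):

```python
import math

# Assumed parameters: 32-bit phase accumulator, 1024-entry table.
PHASE_BITS = 32
MODULO_CONST = 1 << PHASE_BITS
TABLE_BITS = 10
SHIFT = PHASE_BITS - TABLE_BITS  # top bits of the phase index the table

# Unsigned 16-bit sine table, offset so all values are non-negative.
SINE_TABLE = [
    int(32767.5 * (1 + math.sin(2 * math.pi * n / (1 << TABLE_BITS))))
    for n in range(1 << TABLE_BITS)
]

def nco_samples(tuning_word, count, phase=0):
    """Advance the phase accumulator once per sample and look up the sine."""
    out = []
    for _ in range(count):
        phase = (phase + tuning_word) % MODULO_CONST
        out.append(SINE_TABLE[phase >> SHIFT])
    return out
```

Per sample, this adds only a shift and an indexed load on top of the square-wave case, so the table lookup itself is cheap. A common cause of the slowdown described is the table being linked into slow or uncached memory (e.g. external RAM instead of on-chip block RAM), so checking where SINE_TABLE ends up in the memory map is a reasonable first step.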
I am working with big arrays (~6x40 million) and my code is showing severe bottlenecks. I am experienced at programming in MATLAB, but don't know much about its inner workings (memory management and such).
My code looks as follows (just the essentials; of course all variables are initialized, especially the arrays in loops. I just don't want to bomb you all with code):
First I read the file,
disp('Point cloud import and subsampling')
tic
fid=fopen(strcat(Name,'.dat'));
C=textscan(fid, '%d%d%f%f%f%d'); %<= Big!
fclose(fid);
then create arrays out of the contents,
y=C{1}(1:Subsampling:end)/Subsampling;
x=C{2}(1:Subsampling:end)/Subsampling;
%... and so on for the other rows
clear C %No one wants 400+ million doubles just lying around.
And clear the cell array (1), and create some images and arrays with the new values
for i=1:length(x)
PCImage(y(i)+SubSize(1)-maxy+1,x(i)+1-minx)=Reflectanse(i);
PixelCoordinates(y(i)+SubSize(1)-maxy+1,x(i)+1-minx,:)=Coordinates(i,:);
end
toc
Everything runs more or less smoothly until here, but then I manipulate some arrays
disp('Overlap alignment')
tic
PCImage=PCImage(:,[1:maxx/2-Overlap,maxx/2:end-Overlap]); %-30 overlap?
PixelCoordinates=PixelCoordinates(:,[1:maxx/2-Overlap,maxx/2:end-Overlap],:);
Sphere=Sphere(:,[1:maxx/2-Overlap,maxx/2:end-Overlap],:);
toc
and this is a big bottleneck, but it gets worse at the next step
disp('Planar view and point cloud matching')
tic
CompImage=zeros(max(SubSize(1),PCSize(1)),max(SubSize(2),PCSize(2)),3);
CompImage(1:SubSize(1),1:SubSize(2),2)=Subimage; %ExportImage Cyan
CompImage(1:SubSize(1),1:SubSize(2),3)=Subimage;
CompImage(1:PCSize(1),1:PCSize(2),1)=PCImage; %PointCloudImage Red
toc
Output
Point cloud import and subsampling
Elapsed time is 181.157182 seconds.
Overlap alignment
Elapsed time is 408.750932 seconds.
Planar view and point cloud matching
Elapsed time is 719.383807 seconds.
My questions are: will clearing unused objects like C in (1) have any effect? (It doesn't seem to.)
Am I overlooking any other important mechanisms or rules of thumb, or is the whole thing just too much and supposed to behave like this?
When subsref is used, MATLAB makes a copy of the sub-referenced elements. This can be costly for large arrays. Often it will be faster to concatenate vectors, like
res = [a,b,c];
This is not possible with the current code as written above, but if the code could be modified to make this work, it may save some time.
EDIT
For multi-dimensional arrays you need to use cat
CompImage = cat(dim,Subimage,Subimage,PCImage);
where dim is 3 for this example.
I have a two-column matrix with times and speeds for a subset of data, like so:
5 40
10 37
15 34
20 39
And so on. I want to get the fourier transform of speeds to get a frequency. How would I go about doing this with a fast fourier transform (fft)?
If my vector name is sampleData, I have tried
fft(sampleData);
but that gives me a vector of real and imaginary numbers. To be able to get sensible data to plot, how would I go about doing this?
The Fourier transform yields a complex vector: when you fft, you get a vector of frequency components, each with a spectral phase. These phases can be extremely important! (They contain most of the information of the time-domain signal; you won't see interference effects without them, etc.) If you want to plot the power spectrum, you can
plot(abs(fft(sampleData)));
To complete the story, you'll probably need to fftshift, and also produce a frequency vector. Here's a more elaborate code:
% Assuming 'time' is the 1st col, and 'sampleData' is the 2nd col:
N=length(sampleData);
w=window(@hamming,N); % '@' makes a function handle; '#' is a syntax error
dt=mean(diff(time));
df=1/(N*dt); % the frequency resolution (df=1/max_T)
if mod(N,2)==0
f_vec= df*((1:N)-1-N/2); % frequency vector for EVEN length vector
else
f_vec= df*((1:N)-0.5-N/2);
end
% force column shapes so the elementwise product is well defined
fft_data= fftshift(fft(ifftshift(sampleData(:).*w)));
plot(f_vec,abs(fft_data))
I would recommend that you back up and think about what you are trying to accomplish, and whether an FFT is an appropriate tool for your situation. You say that you "want to ... get a frequency", but what exactly do you mean by that? Do you know that this data has exactly one frequency component, and want to know what the frequency is? Do you want to know both the frequency and phase of the component? Do you just want to get a rough idea of how many discrete frequency components are present? Are you interested in the spectrum of the noise in your measurement? There are many questions you can ask about "frequencies" in a data set, and whether or not an FFT and/or power spectrum is the best approach to getting an answer depends on the question.
In a comment above you asked "Is there some way to correlate the power spectrum to the time values?" This strikes me as a confused question, but also makes me think that maybe the question you are really trying to answer is "I have a signal whose frequency varies with time, and I want to get an estimate of the frequency vs time". I'm sure I've seen a question along those lines within the past few months here on SO, so I would search for that.
Okay, this is a bit of a maths and DSP question.
Let us say I have 20,000 samples which I want to resample at a different pitch, twice the normal rate for example. Using a cubic interpolation method found here, I would set my new array index values by multiplying the loop variable i by the new pitch (in this case 2.0). This would also make my new array of samples total 10,000: since the interpolation runs at double the speed, it needs only half as many samples to cover the recording.
But what if I want my pitch to vary throughout the recording? Basically I would like it to slowly increase from the normal rate to 8 times faster (at the 10,000 sample mark) and then back to 1.0. It would be an arc. My questions are:
How do I calculate how many samples the final audio track will contain?
How do I create an array of pitch values representing this increase from 1.0 to 8.0 and back to 1.0?
Mind you this is not for live audio output, but for transforming recorded sound. I mainly work in C, but I don't know if that is relevant.
I know this probably is complicated, so please feel free to ask for clarifications.
To represent an increase from 1.0 to 8.0 and back, you could use a function of this form:
f(x) = 1 + 7/2*(1 - cos(2*pi*x/y))
Where y is the number of samples in the resulting track.
It will start at 1 for x=0, increase to 8 for x=y/2, then decrease back to 1 for x=y.
Here's what it looks like for y=10:
Now we need to find the value of y depending on z, the original number of samples (20,000 in this case, but let's be general). For this we solve the integral of 1 + 7/2*(1 - cos(2*pi*x/y)) dx from 0 to y, set equal to z. The solution is y = 2*z/9 = z/4.5, nice and simple :)
Therefore, for an input with 20,000 samples, you'll get 4,444 samples in the output.
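To sanity-check the derivation, the pitch curve can be integrated numerically and compared against the original sample count; this quick Python sketch (the trapezoidal rule and the step count n are just for verification) confirms that y = 2z/9 makes the output sweep through exactly z input samples:

```python
import math

def f(x, y):
    # Pitch curve: 1 at the ends, peaking at 8 at the midpoint x = y/2.
    return 1 + 7/2 * (1 - math.cos(2 * math.pi * x / y))

z = 20000          # original number of samples
y = 2 * z / 9      # predicted output length, y = z/4.5

# Trapezoidal-rule integral of f over [0, y]: the total number of
# input samples consumed, which should come out to z.
n = 200000
h = y / n
total = h * ((f(0, y) + f(y, y)) / 2 + sum(f(i * h, y) for i in range(1, n)))
```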
Finally, instead of multiplying the output index by the pitch value, you can access the original samples like this: output[i] = input[g(i)], where g is the integral of the above function f:
g(x) = (9*x)/2-(7*y*sin((2*pi*x)/y))/(4*pi)
For y=4444, it looks like this:
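Putting the pieces together, here is a minimal Python sketch of the output[i] = input[g(i)] mapping; the helper name `warp` is made up, and linear interpolation stands in for the cubic interpolator (with no anti-aliasing filter) to keep the sketch short:

```python
import math

def g(x, y):
    # Integral of the pitch curve f: maps an output index to a
    # (fractional) position in the input.
    return 9 * x / 2 - 7 * y * math.sin(2 * math.pi * x / y) / (4 * math.pi)

def warp(inp, out_len):
    """Resample inp through g, reading each output sample from the
    input position g(i) with linear interpolation."""
    out = []
    for i in range(out_len):
        pos = min(g(i, out_len), len(inp) - 1)  # clamp to the last sample
        j = min(int(pos), len(inp) - 2)
        frac = pos - j
        out.append(inp[j] * (1 - frac) + inp[j + 1] * frac)
    return out
```

Calling warp(samples, 4444) on a 20,000-sample input produces the arc-pitched track; swapping the linear interpolation for the cubic method and adding the low-pass filtering discussed below is left to the real implementation.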
In order not to end up with aliasing in the result, you will also need to low pass filter before or during interpolation using either a filter with a variable transition frequency lower than half the local sample rate, or with a fixed cutoff frequency more than 16X lower than the current sample rate (for an 8X peak pitch increase). This will require a more sophisticated interpolator than a cubic spline. For best results, you might want to try a variable width windowed sinc kernel interpolator.