How to compute Middlebury stereo benchmark v3 statistical errors A50, A90, and A95? - benchmarking

I have tested my stereo matching algorithm on the Middlebury stereo evaluation v3 training set. I would like to understand the meaning of the following metrics in the online table, and how I can compute them: A50, A90, and A95.
Here is the link to the online table: https://vision.middlebury.edu/stereo/eval3/
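As far as I can tell from the v3 table, the A columns are quantiles of the absolute disparity error over the evaluated pixels: A50 is the median error (in pixels of disparity), and A90 and A95 are the 90th and 95th percentiles. A minimal C sketch under that assumption (it ignores the benchmark's masking of invalid/occluded pixels and its exact quantile interpolation rule):

#include <stdio.h>
#include <stdlib.h>

/* Comparison function for qsort on doubles. */
static int cmp_double(const void *a, const void *b) {
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

/* q-th quantile (0 <= q <= 1) of n absolute disparity errors.
   Sorts the array in place and uses a simple nearest-index rule. */
static double quantile(double *err, size_t n, double q) {
    qsort(err, n, sizeof(double), cmp_double);
    size_t i = (size_t)(q * (double)(n - 1));
    return err[i];
}

int main(void) {
    /* abs_err[i] = |computed_disparity[i] - ground_truth[i]| for every
       pixel with valid ground truth (toy values here). */
    double abs_err[] = {0.1, 0.3, 0.2, 1.5, 0.4, 4.0, 0.2, 0.6, 0.3, 2.5};
    size_t n = sizeof(abs_err) / sizeof(abs_err[0]);

    printf("A50 = %.2f\n", quantile(abs_err, n, 0.50));
    printf("A90 = %.2f\n", quantile(abs_err, n, 0.90));
    printf("A95 = %.2f\n", quantile(abs_err, n, 0.95));
    return 0;
}

So, unlike the "bad 2.0" style columns (the fraction of pixels whose error exceeds a threshold), A90 answers the converse question: the error value below which 90% of the pixels fall.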

Related

Random Forest/SVM in time series classification

I have a classification task and tried some feature extraction (like mean and std). The overall accuracy using a random forest was around 85%, so I thought of finding better algorithms. Is it possible to extract the curve function formula produced by the random forest, in such a way that every test sample is just checked against the curve (line)? How about SVM, as it produces a line (or a curve) which separates the data to do classification? (Is there any other classification algorithm that lets us do this, i.e. extract the final curve function like in curve fitting?)
I have done the task in scikit-learn but want to port the model to C, so any recommendation based on C would be of high priority.
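For a linear SVM this is actually possible: the decision function is the single formula f(x) = w . x + b, so you can export clf.coef_ and clf.intercept_ from scikit-learn and re-evaluate the model in C with one dot product. A minimal sketch (the weights below are placeholders, not a trained model):

#include <stdio.h>

#define N_FEATURES 2

/* Weights and bias exported from a trained linear model, e.g.
   scikit-learn's LinearSVC (clf.coef_, clf.intercept_).
   The numbers below are placeholders. */
static const double w[N_FEATURES] = {0.75, -1.20};
static const double b = 0.30;

/* Linear decision function f(x) = w . x + b: the sign gives the
   class, the magnitude the distance to the separating line. */
static double decision(const double x[N_FEATURES]) {
    double s = b;
    for (int i = 0; i < N_FEATURES; i++)
        s += w[i] * x[i];
    return s;
}

int main(void) {
    double sample[N_FEATURES] = {1.0, 0.5}; /* e.g. mean and std features */
    double f = decision(sample);
    printf("f(x) = %.3f -> class %d\n", f, f >= 0.0 ? 1 : 0);
    return 0;
}

A random forest has no such closed-form curve; porting it to C means exporting every tree's split features and thresholds and walking the trees at prediction time. Kernel SVMs (e.g. RBF) likewise require keeping all support vectors, not a single line equation.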

Data mining and WEKA

Hi, I've been asked to search for at least 20 different datasets (with a maximum of 40). I need to apply the following classification techniques, using the WEKA software, on the chosen datasets:
(1) Decision tree (SimpleCart),
(2) Naïve Bayes, and
(3) K-NN (IBk) (with K taking the value of 1 up to the number of class labels in the dataset)
Once you have applied WEKA on all the datasets, it is required to accomplish the following tasks:
Compare the performance of the applied techniques you have achieved through WEKA.
Analyse the results with regard to the dataset properties.
I've never used WEKA before, and I'm unsure how to apply the classification techniques and what I am actually comparing, but I'm quick at learning. I'm not really sure about what I'm required to do... I just need some direction or an example, please, anyone?
To find datasets, you can use
https://archive.ics.uci.edu/ml/datasets.html
To compare the performance of classifiers, there are many measures, such as AUC (Area Under the ROC Curve), the ROC curve itself, accuracy, precision, and recall. WEKA can generate these measures; I recommend using AUC and accuracy.
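WEKA reports these measures directly, but it helps to know what the basic ones mean. A minimal C sketch (with toy confusion-matrix counts, not real WEKA output) computing accuracy, precision, and recall for a binary problem:

#include <stdio.h>

int main(void) {
    /* Toy binary confusion matrix:
       tp/fp = true/false positives, fn/tn = false/true negatives. */
    double tp = 40.0, fp = 10.0, fn = 5.0, tn = 45.0;

    double accuracy  = (tp + tn) / (tp + fp + fn + tn);
    double precision = tp / (tp + fp);
    double recall    = tp / (tp + fn); /* a.k.a. true positive rate */

    printf("accuracy  = %.3f\n", accuracy);
    printf("precision = %.3f\n", precision);
    printf("recall    = %.3f\n", recall);
    return 0;
}

AUC is different in kind: it is computed from the ranked prediction scores rather than from a single confusion matrix, so it is easiest to read it off WEKA's classifier output.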
To learn how to use WEKA, there are many online tutorials, like http://www.ibm.com/developerworks/library/os-weka2/

Finding a handwritten dataset with an already extracted features

I want to test my clustering algorithms on data of handwritten text, so I'm searching for a dataset of handwritten text (e.g. words) with already extracted features (the goal is to test my clustering algorithms, not to extract features). Does anyone have any information on that?
Thanks.
There is a dataset of images of handwritten digits (MNIST): http://yann.lecun.com/exdb/mnist/
Texmex has 128-d SIFT vectors "to evaluate the quality of approximate nearest neighbors search algorithm on different kinds of data and varying database sizes", but I don't know what their images are of; you could try asking the authors.
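If you do go the MNIST route, note that it ships raw pixels rather than extracted features, stored in the simple IDX format documented on that page: a big-endian header (magic number, counts, dimensions) followed by unsigned bytes. A minimal C sketch of reading the image-file header:

#include <stdio.h>
#include <stdint.h>

/* Read one big-endian 32-bit integer, as used in the MNIST IDX header. */
static uint32_t read_be32(FILE *f) {
    unsigned char b[4] = {0};
    if (fread(b, 1, 4, f) != 4)
        return 0; /* truncated file; a real reader should report this */
    return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16)
         | ((uint32_t)b[2] << 8)  |  (uint32_t)b[3];
}

int main(void) {
    FILE *f = fopen("train-images-idx3-ubyte", "rb");
    if (!f) { perror("fopen"); return 1; }

    uint32_t magic = read_be32(f); /* 2051 for MNIST image files */
    uint32_t count = read_be32(f);
    uint32_t rows  = read_be32(f);
    uint32_t cols  = read_be32(f);
    printf("magic=%u images=%u size=%ux%u\n", magic, count, rows, cols);

    /* The pixels follow as count*rows*cols unsigned bytes, row-major. */
    fclose(f);
    return 0;
}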

Algorithm for voice comparison

Given two recorded voices in digital format, is there an algorithm to compare the two and return a coefficient of similarity?
I recommend taking a look at the HTK toolkit for speech recognition, http://htk.eng.cam.ac.uk/, especially the part on feature extraction.
Features that I would assume to be good indicators:
Mel-Cepstrum coefficients (general timbre)
LPC (for the harmonics)
Given your clarification I think what you are looking for falls under speech recognition algorithms.
Even though you are only looking for a measure of similarity and not trying to turn speech into text, the concepts are still the same, and I would not be surprised if a large part of the algorithms proved quite useful.
However, you will have to define this coefficient of similarity more formally and precisely to get anywhere.
EDIT:
I believe speech recognition algorithms would be useful because they abstract the sound and compare it to some known forms. Conceptually, this might not be that different from taking two recordings, abstracting them, and comparing them.
From the Wikipedia article on HMMs:
"In speech recognition, the hidden Markov model would output a sequence of n-dimensional real-valued vectors (with n being a small integer, such as 10), outputting one of these every 10 milliseconds. The vectors would consist of cepstral coefficients, which are obtained by taking a Fourier transform of a short time window of speech and decorrelating the spectrum using a cosine transform, then taking the first (most significant) coefficients."
So if you run such an algorithm on both recordings you would end up with coefficients that represent the recordings and it might be far easier to measure and establish similarities between the two.
But again, now you come to the question of defining the 'similarity coefficient', and introducing dogs and horses did not really help.
(Well, it does a bit, but in terms of evaluating algorithms and choosing one over another, you will have to do better.)
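To make the coefficient concrete anyway: once each recording has been reduced to a feature vector (for example, averaged cepstral coefficients), one simple candidate is the cosine similarity between the two vectors. A minimal C sketch (the vectors are placeholders, and this is only one of many possible definitions):

#include <stdio.h>
#include <math.h>

/* Cosine similarity sim = (a . b) / (|a| * |b|), in [-1, 1];
   1 means the vectors point in the same direction. */
static double cosine_similarity(const double *a, const double *b, int n) {
    double dot = 0.0, na = 0.0, nb = 0.0;
    for (int i = 0; i < n; i++) {
        dot += a[i] * b[i];
        na  += a[i] * a[i];
        nb  += b[i] * b[i];
    }
    return dot / (sqrt(na) * sqrt(nb));
}

int main(void) {
    /* Placeholder "cepstral" feature vectors for two recordings. */
    double rec1[5] = {1.2, -0.4, 0.3, 0.8, -0.1};
    double rec2[5] = {1.0, -0.5, 0.4, 0.6, -0.2};
    printf("similarity = %.3f\n", cosine_similarity(rec1, rec2, 5));
    return 0;
}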
There are many different algorithms - the general name for this task is Speaker Identification - start with this Wikipedia page and work from there: http://en.wikipedia.org/wiki/Speaker_recognition
I'm not sure this will work for sound files, but I hope it gives you an idea of how to proceed. It is a basic way to find a pattern (image) in another image.
You first have to calculate the FFT of both sound files and then do a correlation. As a formula it would look like this (MATLAB-style pseudocode):
fftSoundFile1     = fft(soundFile1);                % forward FFT of the first signal
fftConjSoundFile2 = conj(fft(soundFile2));          % complex conjugate of the second signal's FFT
result_corr = real(ifft(fftSoundFile1 .* fftConjSoundFile2));   % circular cross-correlation
where fft = fast Fourier transform, ifft = inverse FFT, and conj = complex conjugate.
The fft is performed on the sample values of the soundfiles.
The peaks in the result_corr vector will then give you the positions of high correlation.
Note that both sound files must in this case be of the same length; otherwise you have to zero-pad the shorter one to a vector of length max(soundFileLength).
Regards
Edit: .* means (in MATLAB style) a component-wise multiplication; you must not do a vector multiplication!
Second edit: note that you have to operate with complex numbers, but there are several complex-number classes out there, so I think you don't have to bother about this.
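For a C implementation, the same recipe might look like this with FFTW (a sketch: it assumes both signals already have equal length n, computes the circular correlation, and divides by n at the end because FFTW's transforms are unnormalized; compile with -lfftw3 -lm):

#include <fftw3.h>

/* Circular cross-correlation of two real signals of equal length n,
   via real(ifft(fft(a) .* conj(fft(b)))), written into out[0..n-1]. */
void cross_correlate(const double *a, const double *b, double *out, int n) {
    int nc = n / 2 + 1; /* length of a real-to-complex FFT's output */
    double *ta = fftw_malloc(sizeof(double) * n);
    double *tb = fftw_malloc(sizeof(double) * n);
    fftw_complex *fa = fftw_malloc(sizeof(fftw_complex) * nc);
    fftw_complex *fb = fftw_malloc(sizeof(fftw_complex) * nc);

    for (int i = 0; i < n; i++) { ta[i] = a[i]; tb[i] = b[i]; }

    fftw_plan pa = fftw_plan_dft_r2c_1d(n, ta, fa, FFTW_ESTIMATE);
    fftw_plan pb = fftw_plan_dft_r2c_1d(n, tb, fb, FFTW_ESTIMATE);
    fftw_execute(pa);
    fftw_execute(pb);

    /* fa[k] = fa[k] * conj(fb[k]) */
    for (int k = 0; k < nc; k++) {
        double re = fa[k][0] * fb[k][0] + fa[k][1] * fb[k][1];
        double im = fa[k][1] * fb[k][0] - fa[k][0] * fb[k][1];
        fa[k][0] = re;
        fa[k][1] = im;
    }

    fftw_plan pi = fftw_plan_dft_c2r_1d(n, fa, out, FFTW_ESTIMATE);
    fftw_execute(pi);
    for (int i = 0; i < n; i++)
        out[i] /= n; /* undo FFTW's n-fold scaling */

    fftw_destroy_plan(pa); fftw_destroy_plan(pb); fftw_destroy_plan(pi);
    fftw_free(ta); fftw_free(tb); fftw_free(fa); fftw_free(fb);
}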

How to extract frequency information from samples from PortAudio using FFTW in C

I want to make a program that records audio data using PortAudio (I have this part done) and then displays the frequency information of that recorded audio (for now, I'd like to display the average frequency of each group of samples as they come in).
From some research I've done, I know that I need to do an FFT. So I googled for a library to do that, in C, and found FFTW.
However, now I am a little lost. What exactly am I supposed to do with the samples I recorded to extract some frequency information from them? What kind of FFT should I use (I assume I'd need a real-data 1D one)?
And once I'd do the FFT, how do I get the frequency information from the data it gives me?
EDIT: I have now also found the autocorrelation algorithm. Is it better? Simpler?
Thanks a lot in advance, and sorry, I have absolutely no experience in this. I hope it makes at least a little sense.
To convert your audio samples to a power spectrum (a sketch in C follows these steps):
if your audio data is integer data then convert it to floating point
pick an FFT size (e.g. N=1024)
apply a window function to N samples of your data (e.g. Hanning)
use a real-to-complex FFT of size N to generate frequency domain data
calculate the magnitude of your complex frequency domain data (magnitude = sqrt(re^2 + im^2))
optionally convert magnitude to a log scale (dB) (magnitude_dB = 20*log10(magnitude))
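Putting those steps together, a minimal FFTW sketch (a synthetic 440 Hz tone stands in for a PortAudio buffer; N = 1024 and a 44100 Hz sample rate are assumed; compile with -lfftw3 -lm):

#include <math.h>
#include <stdio.h>
#include <fftw3.h>

#define N 1024
#define SAMPLE_RATE 44100.0
#define PI 3.14159265358979323846

int main(void) {
    double *in = fftw_malloc(sizeof(double) * N);
    fftw_complex *out = fftw_malloc(sizeof(fftw_complex) * (N / 2 + 1));
    fftw_plan plan = fftw_plan_dft_r2c_1d(N, in, out, FFTW_ESTIMATE);

    /* Stand-in for one buffer of recorded samples: a 440 Hz test tone. */
    for (int i = 0; i < N; i++)
        in[i] = sin(2.0 * PI * 440.0 * i / SAMPLE_RATE);

    /* Step 3: Hann window to reduce spectral leakage. */
    for (int i = 0; i < N; i++)
        in[i] *= 0.5 * (1.0 - cos(2.0 * PI * i / (N - 1)));

    /* Step 4: real-to-complex FFT gives N/2+1 frequency bins. */
    fftw_execute(plan);

    /* Steps 5-6: magnitude and dB; bin k is frequency k*fs/N. */
    for (int k = 0; k <= N / 2; k++) {
        double mag = sqrt(out[k][0] * out[k][0] + out[k][1] * out[k][1]);
        double db  = 20.0 * log10(mag + 1e-12); /* avoid log(0) */
        if (mag > 100.0) /* print only the prominent bins of the toy tone */
            printf("%7.1f Hz  %6.1f dB\n", k * SAMPLE_RATE / N, db);
    }

    fftw_destroy_plan(plan);
    fftw_free(in);
    fftw_free(out);
    return 0;
}

The dominant frequency is simply the bin with the largest magnitude. The autocorrelation method you found answers a slightly different question (the period of a pitched signal rather than the full spectral content); it is often more robust for pitch estimation, but the FFT is the natural tool for displaying frequency information.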
