I am going to work on uncertainty visualization. My main problem is finding/generating a 2D/3D/n-dimensional data set with uncertain data.
How can I generate/create a data set that includes uncertain data (with and/or without labels)? Is there any benchmark data set?
After a lot of work and searching, I found some results:
Actually, there is no benchmark data set with uncertainty features. One solution is adding noise to the original data set, so that the uncertainty comes from the effect of the noise.
A common approach is to apply white Gaussian noise. Some ways to do this are as follows:
(1) MATLAB supports this directly with the function wgn.
(2) using the randn function from MATLAB.
(3) my suggestion is using Std * randn(n,1) + Mean, which adds noise to your data set with your preferred mean and standard deviation.
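For example, here is a minimal sketch of the same idea in Python/NumPy (the mean, standard deviation, and example data below are arbitrary placeholders):

```python
import numpy as np

def add_gaussian_noise(data, mean=0.0, std=0.1, seed=None):
    """Return a noisy copy of data, equivalent to std*randn(n,1) + mean in MATLAB."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(loc=mean, scale=std, size=np.shape(data))
    return np.asarray(data, dtype=float) + noise

# Example: a clean 2-D data set with uncertainty injected via additive noise
x = np.linspace(0, 1, 100)
clean = np.column_stack([x, np.sin(6 * x)])
noisy = add_gaussian_noise(clean, mean=0.0, std=0.05)
```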
I hope this recommendation is useful.
Kalman filters and quaternions are something new for me.
I have a sensor whose output voltage changes as a function of its inclination on the x, y, and/or z axis, i.e. an angle sensor.
My questions:
Is it possible to apply a Kalman filter to smooth the results and avoid any noise on the measurements?
I will then only have 1 single 3D vector. What kind of operations with quaternions could I use with this 3d vector to learn more about quaternions?
You can apply a Kalman filter to accelerometer data; it's a powerful technique, but there are lots of ways to do it wrong. If your goal is to learn about the filter, then go for it - the discussion here might be helpful.
If you just want to smooth the data and get on with the next problem then you might want to start with a moving average filter, or traditional lowpass/bandpass filters.
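For illustration, here is a minimal moving-average sketch in Python/NumPy (not a Kalman filter; the window size and the (N, 3) data layout are assumptions):

```python
import numpy as np

def moving_average(samples, window=5):
    """Smooth a 1-D signal with a simple boxcar (moving average) filter."""
    kernel = np.ones(window) / window
    return np.convolve(samples, kernel, mode='same')   # output has the input's length

# readings: (N, 3) array of x, y, z sensor voltages (placeholder data)
readings = np.random.randn(200, 3)
smoothed = np.column_stack([moving_average(readings[:, i]) for i in range(3)])
```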
After applying a Kalman filter you will still have a sequence of data - it won't reduce it to a single vector. If this is your goal you might as well take the mean of each coordinate sequence.
As for quaternions, you could probably come up with a way of performing quaternion operations on your accelerometer data but the challenge would be to make it meaningful. For the purposes of learning about the concept you really need it to make some sense, so that you can visualise the results and interpret them.
I would be tempted to write some functions to implement quaternion operations instead - multiplication is the strange one. This will be a good introduction to the way they work, and then when you find an application that calls for them you can use your functions and you'll already know how the mechanics work.
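As a starting point, here is a minimal sketch of the Hamilton product in Python/NumPy (the (w, x, y, z) component ordering is an assumption; some libraries order the components differently):

```python
import numpy as np

def quat_multiply(q1, q2):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    # Note: quaternion multiplication is not commutative
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_conjugate(q):
    w, x, y, z = q
    return np.array([w, -x, -y, -z])

# Rotating a 3-D vector v by a unit quaternion q: q * (0, v) * conj(q)
```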
If you want to read the most famous use of quaternions have a look at Maxwell's equations in their original quaternion form, before Heaviside dramatically simplified them and put them in the vector notation we use today.
Also, a lot of work is done using tensors these days, and if you're interested in more complex mathematical data types, that would be a worthwhile one to look into.
What is a simple way to see if my low-pass filter is working? I'm in the process of designing a low-pass filter and would like to run tests on it in a relatively straightforward manner.
Presently I open up a WAV file and stick all the samples in an array of ints. I then run the array through the low-pass filter to create a new array. What would be an easy way to check if the low-pass filter worked?
All of this is done in C.
You can use a broadband signal such as white noise to measure the frequency response:
generate white noise input signal
pass white noise signal through filter
take FFT of output from filter
compute log magnitude of FFT
plot log magnitude
Rather than coding this all up you can just dump the output from the filter to a text file and then do the analysis in e.g. MATLAB or Octave (hint: use periodogram).
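As a rough illustration of those steps in Python/NumPy/SciPy (the Butterworth filter here is only a stand-in for your own C filter's output, and the sample rate and cutoff are assumptions):

```python
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

fs = 44100                                # sample rate (assumption)
noise = np.random.randn(2 * fs)           # two seconds of white noise

# Stand-in low-pass filter; replace this with the output of your own C filter
b, a = signal.butter(4, 1000, btype='low', fs=fs)
filtered = signal.lfilter(b, a, noise)

# Estimate and plot the log-magnitude spectrum of the filter output
freqs, psd = signal.welch(filtered, fs=fs, nperseg=4096)
plt.semilogy(freqs, psd)
plt.xlabel('Frequency (Hz)')
plt.ylabel('Power spectral density')
plt.show()
```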
Depends on what you want to test. I'm not a DSP expert, but I know there are different things one could measure about your filter (if that's what you mean by testing).
If the filter is linear, then all the information about the filter can be found in the impulse response. Read about it here: http://en.wikipedia.org/wiki/Linear_filter
E.g. if you take the Fourier transform of the impulse response, you'll get the frequency response. The frequency response easily tells you if the low-pass filter is worth its name.
Maybe I underestimate your knowledge of DSP, but I recommend you read the book on this website: http://www.dspguide.com. It's a very accessible book without difficult math. It's available as a real book, but you can also read it online for free.
EDIT: After reading it, I'm convinced that every programmer who ever touches an ADC should definitely read this book first. I discovered that I did a lot of things the difficult way in past projects that I could have done a thousand times better with a little more knowledge of DSP. Most of the time, an inexperienced programmer is doing DSP without knowing it.
Create two single-tone (pure sine) signals, one at a low frequency and one at a high frequency. Then run your filter on the two. If it works, the low-frequency signal should be unmodified, whereas the high-frequency signal will be filtered out.
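A minimal sketch of that test in Python/NumPy/SciPy (the Butterworth filter is a stand-in for your own C filter; the sample rate and tone frequencies are assumptions):

```python
import numpy as np
from scipy import signal

fs = 8000                                  # sample rate (assumption)
t = np.arange(fs) / fs                     # one second of samples

low_tone = np.sin(2 * np.pi * 50 * t)      # well below the cutoff
high_tone = np.sin(2 * np.pi * 3000 * t)   # well above the cutoff

# Stand-in low-pass filter; replace with a call to your own C filter
b, a = signal.butter(4, 1000, btype='low', fs=fs)

def rms(x):
    return np.sqrt(np.mean(x ** 2))

# The low tone should keep its amplitude; the high tone should be strongly attenuated
print('low :', rms(signal.lfilter(b, a, low_tone)), 'vs', rms(low_tone))
print('high:', rms(signal.lfilter(b, a, high_tone)), 'vs', rms(high_tone))
```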
As Bart mentioned above.
If it's an LTI system, I would insert an impulse, record the samples, perform an FFT using MATLAB, and plot the magnitude.
You ask why?
In the time domain, you have to convolve the input x(t) with the impulse response d(t) to get the output, which is tedious:
y(t) = x(t) * d(t)
In the frequency domain, convolution becomes simple multiplication:
Y(s) = X(s) · D(s)
So the transfer function is Y(s)/X(s) = D(s).
That's the reason you take the FFT of the impulse response to see the behavior of the filter.
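A minimal sketch of that procedure in Python/NumPy/SciPy (again, the Butterworth filter stands in for your own filter, and the sample rate is an assumption):

```python
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

fs = 8000                                           # sample rate (assumption)
b, a = signal.butter(4, 1000, btype='low', fs=fs)   # stand-in for your filter

# Unit impulse in, impulse response d(t) out
impulse = np.zeros(1024)
impulse[0] = 1.0
h = signal.lfilter(b, a, impulse)

# The FFT of the impulse response is the frequency response D(s)
H = np.fft.rfft(h)
freqs = np.fft.rfftfreq(len(h), d=1.0 / fs)

plt.plot(freqs, 20 * np.log10(np.abs(H) + 1e-12))
plt.xlabel('Frequency (Hz)')
plt.ylabel('Magnitude (dB)')
plt.show()
```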
You should be able to programmatically generate tones (sine waves) of various frequencies, stuff them into the input array, and then compare the signal energy by summing the squared values of the arrays (and dividing by the length, though that's mathematically not necessary here because the signals should be the same length). The ratio of the output energy to the input energy gives you the filter gain. If your LPF is working correctly, the gain should be close to 1 for low frequencies, close to 0.5 at the bandwidth frequency, and close to zero for high frequencies.
A note: There are various (but essentially the same in spirit) definitions of "bandwidth" and "gain". The method I've suggested should be relatively insensitive to the transient response of the filter because it's essentially averaging the intensity of the signal, though you could improve it by ignoring the first T samples of the input, where T is related to the filter bandwidth. Either way, make sure that the signals are long compared to the inverse of the filter bandwidth.
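A rough sketch of that energy-ratio measurement in Python/NumPy/SciPy (the Butterworth filter stands in for your own filter; the sample rate and test frequencies are assumptions):

```python
import numpy as np
from scipy import signal

fs = 8000
b, a = signal.butter(4, 1000, btype='low', fs=fs)   # stand-in for your filter

def energy_gain(freq, seconds=2.0):
    """Ratio of output to input signal energy for a pure tone at freq Hz."""
    t = np.arange(int(fs * seconds)) / fs
    tone = np.sin(2 * np.pi * freq * t)
    out = signal.lfilter(b, a, tone)
    skip = fs // 10          # ignore the first samples so the transient dies out
    return np.sum(out[skip:] ** 2) / np.sum(tone[skip:] ** 2)

for f in (100, 500, 1000, 2000, 3500):
    print(f, energy_gain(f))   # ~1 well below cutoff, ~0.5 at cutoff, ~0 well above
```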
When I check a digital filter, I compute the magnitude response graph for the filter and plot it. I then generate a linear sweeping sine wave in code or using Audacity, and pass the sweeping sine wave through the filter (taking into account that things might get louder, so the sine wave is quiet enough not to clip). A visual check is usually enough to assert that the filter is doing what I think it should. If you don't know how to compute the magnitude response I suspect there are tools out there that will compute it for you.
Depending on how certain you want to be, you don't even have to do that. You can just process the linear sweep and see that it attenuated the higher frequencies.
I'm getting 100% accuracy, which is nominally a good problem to have, but I'm pretty sure it's because something funny is going on...
As context, I'm working on a problem in the facial expression/recognition space, so getting 100% accuracy seems incredibly implausible (not that it would be plausible in most applications...). I'm guessing there is either some consistent bias in the data set that is making it overly easy for an SVM to pull out the answer, or, more likely, I've done something wrong on the SVM side.
I'm looking for suggestions to help understand what is going on--is it me (=my usage of LibSVM)? Or is it the data?
The details:
~2500 labeled data vectors/instances (transformed video frames of individuals; <20 individual persons total), binary classification problem. ~900 features/instance. Unbalanced data set at about a 1:4 ratio.
Ran subset.py to separate the data into test (500 instances) and train (remaining).
Ran "svm-train -t 0 ". (Note: apparently no need for '-w1 1 -w-1 4'...)
Ran svm-predict on the test file. Accuracy=100%!
Things tried:
Checked about 10 times over that I'm not training & testing on the same data files, through some inadvertent command-line argument error
re-ran subset.py (even with -s 1) multiple times and did train/test on multiple different data splits (in case I randomly hit upon the most magical train/test pair)
ran a simple diff-like check to confirm that the test file is not a subset of the training data
svm-scale on the data has no effect on accuracy (accuracy=100%). (Although the number of support vectors does drop from nSV=127, nBSV=64 to nSV=72, nBSV=0.)
((weird)) using the default RBF kernel (vice linear -- i.e., removing '-t 0') results in accuracy going to garbage(?!)
(sanity check) running svm-predict using a model trained on a scaled data set against an unscaled data set results in accuracy = 80% (i.e., it always guesses the dominant class). This is strictly a sanity check to make sure that somehow svm-predict is nominally acting right on my machine.
Tentative conclusion?:
Something with the data is wacked--somehow, within the data set, there is a subtle, experimenter-driven effect that the SVM is picking up on.
(This doesn't, on first pass, explain why the RBF kernel gives garbage results, however.)
Would greatly appreciate any suggestions on a) how to fix my usage of LibSVM (if that is actually the problem) or b) determine what subtle experimenter-bias in the data LibSVM is picking up on.
Two other ideas:
Make sure you're not training and testing on the same data. This sounds kind of dumb, but in computer vision applications you should take care: make sure you're not repeating data (say, two frames of the same video falling into different folds), that you're not training and testing on the same individual, etc. It is more subtle than it sounds.
Make sure you search over the gamma and C parameters for the RBF kernel. There are good theoretical (asymptotic) results showing that a linear classifier is just a degenerate case of an RBF classifier, so you should just look for a good (C, gamma) pair.
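For illustration, a minimal grid-search sketch using scikit-learn's SVC rather than the LibSVM command-line tools (LibSVM also ships a grid.py script for this search, if I recall correctly); X and y below are random placeholders for your real data:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

# X: (n_samples, ~900) feature matrix, y: +1/-1 labels; random placeholders here
X = np.random.randn(200, 900)
y = np.random.choice([-1, 1], size=200)

param_grid = {
    'C':     [2.0 ** k for k in range(-5, 16, 2)],
    'gamma': [2.0 ** k for k in range(-15, 4, 2)],
}
search = GridSearchCV(SVC(kernel='rbf'), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```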
Notwithstanding that the devil is in the details, here are three simple tests you could try:
Quickie (~2 minutes): Run the data through a decision tree algorithm. This is available in Matlab via classregtree, or you can load into R and use rpart. This could tell you if one or just a few features happen to give a perfect separation.
Not-so-quickie (~10-60 minutes, depending on your infrastructure): Iteratively split the features (i.e. from 900 to 2 sets of 450), train, and test. If one of the subsets gives you perfect classification, split it again. It would take fewer than 10 such splits to find out where the problem variables are. If it happens to "break" with many variables remaining (or even in the first split), select a different random subset of features, shave off fewer variables at a time, etc. It can't possibly need all 900 to split the data.
Deeper analysis (minutes to several hours): try permutations of labels. If you can permute all of them and still get perfect separation, you have some problem in your train/test setup. If you select increasingly larger subsets to permute (or, if going in the other direction, to leave static), you can see where you begin to lose separability. Alternatively, consider decreasing your training set size and if you get separability even with a very small training set, then something is weird.
Method #1 is fast & should be insightful. There are some other methods I could recommend, but #1 and #2 are easy and it would be odd if they don't give any insights.
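For method #1, a minimal sketch using scikit-learn's DecisionTreeClassifier instead of classregtree/rpart (X and y are random placeholders for your real data):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# X: (n_samples, 900) feature matrix, y: labels; random placeholders for your data
X = np.random.randn(2500, 900)
y = np.random.choice([-1, 1], size=2500, p=[0.2, 0.8])

tree = DecisionTreeClassifier(max_depth=3)
tree.fit(X, y)
print(tree.score(X, y))    # near-perfect separation on your real data would be suspicious
print(export_text(tree))   # shows which features the splits rely on
```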
I have two images of the real world. (IMPORTANT) I approximately know the transformation from one image to the other. Due to texture problems, I don't get enough matches between the two images. How can I take the transformation information into account to get more (and more correct) matches using SIFT?
Any idea will be helpful.
Have you tried other alternatives? Are you sure SIFT is the answer? First, OpenCV provides SIFT, among other tools. (At the moment, I can't speak highly enough of OpenCV).
If I were solving this problem, I would first try:
Downsample your two images to reduce the influence of "texture", i.e. cvPyrDown.
Perform some feature detection: edge detection, etc. OpenCV provides a Harris corner detector, among others. Google "cvGoodFeaturesToTrack" for some detail.
If you have good confidence in your transformations, take advantage of your a priori information and look for features in neighborhoods corresponding to the transformed locations.
If you still want to look at SIFT or SURF, OpenCV provides those capabilities, as well.
If you know the transform, then apply the transform and then apply SURF/SIFT to the transformed image. That's one standard way to extend the robustness of feature descriptors/matchers across large perspective changes.
There is another alternative:
In the SIFT parameters, the contrast threshold is set to 0.04 by default. If you reduce it to a lower value (0.02, 0.01), SIFT will find more matches:
SIFT(int nfeatures=0, int nOctaveLayers=3, double contrastThreshold=0.04, double edgeThreshold=10, double sigma=1.6)
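For illustration, the same idea in OpenCV's Python API (assuming a recent OpenCV where SIFT_create lives in the main module; the file names and ratio-test threshold are placeholders):

```python
import cv2

# File names are placeholders for your two real-world images
img1 = cv2.imread('view1.png', cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread('view2.png', cv2.IMREAD_GRAYSCALE)

# Lower contrastThreshold (default 0.04) to keep weaker keypoints
sift = cv2.SIFT_create(contrastThreshold=0.02)

kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matching with the usual ratio test
bf = cv2.BFMatcher(cv2.NORM_L2)
matches = bf.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(len(good), 'matches')
```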
The first step, I think, is to experiment with the settings of the SIFT algorithm to find the best performance for your problem.
Another way to use SIFT more effectively is to add COLOR information to it. You can append the color information (RGB) of the points used in the descriptor. For instance, if your descriptor size is 10x128, that means you are using 10 keypoints per descriptor. You can extract and append three extra columns, making the size 10x(128+3) [R, G, B for each point]. This way the SIFT matching works more effectively. But remember, you need to weight the descriptor so that the last three columns are stronger than the other 128 columns. I don't know what your images look like, but this method helped me a lot, and this modification makes SIFT a stronger method than before. A rough sketch follows below.
A similar implementation can be found here.
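A rough sketch of that descriptor augmentation in OpenCV's Python API (the image path and the weight applied to the color columns are assumptions):

```python
import cv2
import numpy as np

img = cv2.imread('view1.png')                 # placeholder path; OpenCV loads BGR
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

sift = cv2.SIFT_create()
kps, des = sift.detectAndCompute(gray, None)  # des has shape (N, 128)

# Sample the color at each keypoint and append it as three extra columns
colors = np.array([img[int(k.pt[1]), int(k.pt[0])] for k in kps], dtype=np.float32)
weight = 4.0                                  # assumed weight to strengthen the color columns
augmented = np.hstack([des, weight * colors]) # shape (N, 131)
```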
All the examples I have seen of neural networks are for a fixed set of inputs, which works well for images and fixed-length data. How do you deal with variable-length data such as sentences, queries, or source code? Is there a way to encode variable-length data into fixed-length inputs and still get the generalization properties of neural networks?
I have been there, and I faced this problem.
The ANN is made for a fixed feature-vector length, and so are many other classifiers such as KNN, SVM, Bayesian classifiers, etc.
i.e. the input layer should be well defined and not variable; this is a design issue.
However, some researchers opt for adding zeros to fill the missing gap. I personally think this is not a good solution, because those zeros (unreal values) will affect the weights that the net converges to. In addition, there might be a real signal that ends with zeros.
The ANN is not the only classifier; there are others, and even better ones, such as the random forest. This classifier is highly regarded among researchers. It uses a small number of random features, creating hundreds of decision trees using bootstrapping and bagging, and this might work well. The number of chosen features is normally the square root of the feature-vector size, and those features are random. Each decision tree converges to a solution, and using majority voting the most likely class is then chosen.
Another solution is to use dynamic time warping (DTW), or even better, hidden Markov models (HMMs).
Another solution is interpolation: interpolate (compensate for missing values along the short signal) all the short signals so they are the same size as the longest signal. Interpolation methods include, but are not limited to, averaging, B-spline, cubic spline, etc.
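A minimal sketch of that interpolation idea in Python/NumPy (the example signal lengths are arbitrary):

```python
import numpy as np

def resample_to_length(values, target_len):
    """Linearly interpolate a variable-length 1-D signal onto target_len points."""
    old_x = np.linspace(0.0, 1.0, num=len(values))
    new_x = np.linspace(0.0, 1.0, num=target_len)
    return np.interp(new_x, old_x, values)

# Stretch every signal to the length of the longest one
signals = [np.random.randn(n) for n in (50, 80, 120)]
max_len = max(len(s) for s in signals)
fixed = np.vstack([resample_to_length(s, max_len) for s in signals])
```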
Another solution is to use a feature extraction method to obtain the best (most distinctive) features, this time at a fixed size; such methods include PCA, LDA, etc.
Another solution is to use feature selection (normally after feature extraction), an easy way to select the features that give the best accuracy.
That's all for now; if none of those work for you, please contact me.
You would usually extract features from the data and feed those to the network. It is not advisable to take just some data and feed it to the net. In practice, pre-processing and choosing the right features will decide over your success and the performance of the neural net. Unfortunately, IMHO it takes experience to develop a sense for that and it's nothing one can learn from a book.
Summing up: "Garbage in, garbage out"
Some problems could be solved by a recurrent neural network.
For example, it is good for calculating parity over a sequence of inputs.
The recurrent neural network for calculating parity would have just one input feature.
The bits could be fed into it over time. Its output is also fed back to the hidden layer.
That allows it to learn parity with just two hidden units.
A normal feed-forward two-layer neural network would require 2**sequence_length hidden units to represent the parity. This limitation holds for any architecture with just 2 layers (e.g., SVM).
I guess one way to do it is to add a temporal component to the input (a recurrent neural net) and stream the input to the net a chunk at a time (basically creating the neural-network equivalent of a lexer and parser). This would allow the input to be quite large, but would have the disadvantage that there would not necessarily be a stop symbol to separate different sequences of input from each other (the equivalent of a period in sentences).
To use a neural net on images of different sizes, the images themselves are often cropped and scaled up or down to better fit the input of the network. I know that doesn't really answer your question, but perhaps something similar would be possible with other types of input, using some sort of transformation function on the input?
I'm not entirely sure, but I'd say use the maximum number of inputs (e.g. for words, let's say no word will be longer than 45 characters, the longest word found in a dictionary according to Wikipedia), and if a shorter word is encountered, set the other inputs to a whitespace character.
Or, with binary data, set it to 0. The only problem with this approach is if an input padded with whitespace characters/zeros/whatever collides with a valid full-length input (not so much a problem with words as it is with numbers).
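A minimal padding/truncation sketch in Python/NumPy along those lines (the 45-slot length, padding value, and character-code encoding are assumptions):

```python
import numpy as np

MAX_LEN = 45        # e.g. the longest word you expect
PAD = 0.0           # padding value; 0 for binary/numeric input

def pad_or_truncate(seq, max_len=MAX_LEN, pad=PAD):
    """Return a fixed-length vector: pad short inputs, truncate long ones."""
    out = np.full(max_len, pad, dtype=float)
    values = np.asarray(seq, dtype=float)[:max_len]
    out[:len(values)] = values
    return out

# Encode a word as character codes, padded to a fixed 45-slot input vector
word = "network"
x = pad_or_truncate([ord(c) for c in word])
```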