In an interview I was asked to describe an application of the Fibonacci series.
I knew the Fibonacci series is used in some sorts of benchmarking, but I could not come up with a real software/computer application. I tried researching it and found that it is used in something called Fibonacci heaps, but I could not find any other obvious computer science application.
Please give your valuable suggestions.
One of the most recognizable applications is finding the extrema of a given function.
Imagine that you have a function (e.g. y = x^2) and you would like to find its minimum. During this procedure you iteratively narrow the range of values that contains the extremum.
I would suggest reading the Wikipedia article. A variant of this algorithm is based on the Fibonacci sequence and is called Fibonacci search.
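To make that concrete, here is a minimal Python sketch of the idea, assuming a unimodal function on [a, b]. It is simplified: it re-evaluates both probe points each step, whereas the full Fibonacci search reuses one evaluation per iteration.

    def fibonacci_search(f, a, b, n=30):
        # Precompute Fibonacci numbers F(0)..F(n).
        fib = [1, 1]
        for _ in range(2, n + 1):
            fib.append(fib[-1] + fib[-2])
        for k in range(n, 2, -1):
            # Probe points divide [a, b] in Fibonacci ratios.
            x1 = a + fib[k - 2] / fib[k] * (b - a)
            x2 = a + fib[k - 1] / fib[k] * (b - a)
            if f(x1) > f(x2):
                a = x1  # the minimum lies in [x1, b]
            else:
                b = x2  # the minimum lies in [a, x2]
        return (a + b) / 2

    print(fibonacci_search(lambda x: x ** 2, -5.0, 5.0))  # close to 0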
In addition, the Fibonacci sequence is used in modelling populations and/or the growth of something; e.g. check this article.
Finally, there is an article in signal processing that introduces a link between Kalman filters and Fibonacci sequences.
I'm relatively new to Neural Networks.
At the moment I am trying to program a neural network for simple image recognition of the digits 0 through 9.
The activation function I'm aiming for is ReLU (rectified linear unit).
With the sigmoid function it is pretty clear how you can determine a probability for a certain case in the end (because its output is between 0 and 1).
But as far as I understand it, ReLU doesn't have this limitation, so the sum over the previous "neurons" can end up being any value.
So how is this commonly solved?
Do I just take the biggest of all values and call that 100% probability?
Do I sum up all values and treat that sum as the 100%?
Or is there another approach I can't see at the moment?
I hope my question is understandable.
Thanks in advance for taking the time to look at my question.
You can't use the ReLU function as the output function for classification tasks because, as you mentioned, its range can't represent probabilities between 0 and 1. That's why it is used only for regression tasks and for hidden layers.
For binary classification, you have to use an output function with a range between 0 and 1, such as the sigmoid. In your case, with more than two classes, you need its multidimensional extension, the softmax function.
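A minimal NumPy sketch of that output layer, assuming the final layer produces raw scores ("logits"); the numbers here are made up:

    import numpy as np

    def relu(x):
        # hidden-layer activation: negative values are clipped to zero
        return np.maximum(0, x)

    def softmax(logits):
        shifted = logits - np.max(logits)  # subtract the max for numerical stability
        exps = np.exp(shifted)
        return exps / exps.sum()           # normalizes to probabilities summing to 1

    print(relu(np.array([-1.0, 0.5])))     # [0.  0.5]
    logits = np.array([2.0, 1.0, 0.1])     # hypothetical raw output-layer sums
    probs = softmax(logits)
    print(probs, probs.sum())              # ~[0.66 0.24 0.10], sums to 1.0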
I am studying Bayesian networks in my AI course.
Does anyone know how to calculate the causal inference and the diagnostic inference in the attached picture?
[image: Bayesian Network Example]
There are lots of ways to perform inference from a Bayesian network, the most naive of which is just enumeration.
Enumeration works for both causal inference and diagnostic inference. The difference is finding out how likely the effect is given evidence of the cause (causal inference) vs. finding out how likely the cause is given evidence of the effect (diagnostic inference).
The answer from Nick Larsen is a good one. I'll elaborate to give a worked solution to your problem since you might be looking for something a little more specific.
Problem 1: P(C|E). What is the probability of having a promising career (C=1) GIVEN the economic environment is positive (E=1)?
We use the factored structure of the Bayes net to write the full joint probability in terms of the factored variables.
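The worked equations in the original answer were images; as a sketch, and assuming (from the picture) that S and J are children of E and the parents of C, the expansion has the shape

$$P(C=1 \mid E=1) = \sum_{s \in \{s, \hat{s}\}} \sum_{j \in \{j, \hat{j}\}} P(C=1 \mid s, j)\, P(s \mid E=1)\, P(j \mid E=1)$$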
Notice that you have just used the law of total probability to introduce the latent variables (S and J) and then marginalise (sum) them out. I have used a 'hat' to denote negation (the ~ in your question above). Notice too that once you have applied the law of total probability, the Bayes net does a lot of the hard work for you by allowing you to factor the joint probability into a number of smaller conditional probabilities.
Problem 2: P(E|C). What is the probability that the economic environment is positive (E=1) GIVEN we observe that you have a promising career (C=1)?
Here we actually need to apply Bayes' rule in the first line. Notice that you get an annoying normalising constant P(C) that is carried throughout. This term can be solved in much the same way as you solved Problem 1:
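Again, the original equations were images; the first line and the expansion of the normalising constant look like

$$P(E=1 \mid C=1) = \frac{P(C=1 \mid E=1)\, P(E=1)}{P(C=1)}, \qquad P(C=1) = P(C=1 \mid E=1)\, P(E=1) + P(C=1 \mid E=0)\, P(E=0)$$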
The computation of P(C=1|E=1) is solved in Problem 1. I have left out the computation of P(C=1|E=0) = 0.5425, but it is the same process as Problem 1.
Now you are in a position to solve for P(E=1|C=1) = 0.38/0.65125 ≈ 0.583.
I have read about Levenshtein distance and how it calculates the distance between two distinct words.
I have one source string and I have to match it against all 10,000 target words. The closest word should be returned.
The problem is that I have been given a list of 10,000 target words, and the number of input source strings is also huge... So what is the fastest and most efficient algorithm to apply here? Calculating the Levenshtein distance for each and every combination (the brute-force approach) would be very time consuming.
Any hints, or ideas are most welcome.
I guess it depends a little on how the words are structured. For example, this guy improved his implementation based on the fact that he processes his words in order and does not repeat calculations for common prefixes. But if all of your 10,000 words are totally different, that won't do you much good. It's written in Python, so there might be a bit of work involved in porting it to C.
There are also some kind-of homebrew algorithms out there (by which I mean no official paper has been written about them) that might do the trick.
There are two common approaches for this, and I've blogged about both. The simpler one to implement is BK-trees - a tree data structure that speeds up lookup based on Levenshtein distance by only searching relevant parts of the tree. They will probably be perfectly sufficient for your use case.
A more complicated but more efficient approach is Levenshtein automata. This works by constructing an NFA that recognizes all words within Levenshtein distance x of your target string, then iterating through it and the dictionary in lockstep, effectively performing a merge join on them.
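To make the BK-tree idea concrete, here is a minimal Python sketch; the word list and the levenshtein helper are purely illustrative, not taken from either blog post:

    def levenshtein(a, b):
        # classic dynamic-programming edit distance, one row at a time
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1,                # deletion
                                curr[j - 1] + 1,            # insertion
                                prev[j - 1] + (ca != cb)))  # substitution
            prev = curr
        return prev[-1]

    class BKTree:
        def __init__(self, words):
            it = iter(words)
            self.root = (next(it), {})  # node = (word, {distance: child})
            for word in it:
                self.add(word)

        def add(self, word):
            node = self.root
            while True:
                d = levenshtein(word, node[0])
                if d == 0:
                    return              # word is already in the tree
                child = node[1].get(d)
                if child is None:
                    node[1][d] = (word, {})
                    return
                node = child

        def query(self, word, max_dist):
            # By the triangle inequality, only subtrees whose edge label lies
            # in [d - max_dist, d + max_dist] can contain matches, so most of
            # the tree is never visited.
            results, stack = [], [self.root]
            while stack:
                w, children = stack.pop()
                d = levenshtein(word, w)
                if d <= max_dist:
                    results.append((d, w))
                stack.extend(child for dist, child in children.items()
                             if d - max_dist <= dist <= d + max_dist)
            return sorted(results)

    tree = BKTree(["book", "books", "cake", "boo", "cape"])
    print(tree.query("bok", 1))  # [(1, 'boo'), (1, 'book')]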
Given two recorded voices in digital format, is there an algorithm to compare the two and return a coefficient of similarity?
I recommend taking a look at the HTK toolkit for speech recognition (http://htk.eng.cam.ac.uk/), especially the part on feature extraction.
Features that I would assume to be good indicators:
Mel-Cepstrum coefficients (general timbre)
LPC (for the harmonics)
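HTK itself is a C toolkit, but as an illustration, here is a rough Python sketch of extracting such features with the librosa library and comparing them very crudely; the file names are hypothetical, and averaging the MFCCs over time is a deliberate oversimplification:

    import numpy as np
    import librosa

    def mfcc_profile(path, n_mfcc=13):
        samples, rate = librosa.load(path, sr=None)  # keep the native sample rate
        mfcc = librosa.feature.mfcc(y=samples, sr=rate, n_mfcc=n_mfcc)
        return mfcc.mean(axis=1)  # average over time: one vector per recording

    profile_a = mfcc_profile("voice_a.wav")       # hypothetical recordings
    profile_b = mfcc_profile("voice_b.wav")
    print(np.linalg.norm(profile_a - profile_b))  # smaller = more similar timbre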
Given your clarification, I think what you are looking for falls under speech recognition algorithms.
Even though you are only looking for a measure of similarity and not trying to turn speech into text, the concepts are still the same, and I would not be surprised if a large part of those algorithms would be quite useful.
However, you will have to define this coefficient of similarity more formally and precisely to get anywhere.
EDIT:
I believe speech recognition algorithms would be useful because they abstract the sound and compare it to some known forms. Conceptually this might not be that different from taking two recordings, abstracting them, and comparing them.
From the Wikipedia article on HMMs:
"In speech recognition, the hidden Markov model would output a sequence of n-dimensional real-valued vectors (with n being a small integer, such as 10), outputting one of these every 10 milliseconds. The vectors would consist of cepstral coefficients, which are obtained by taking a Fourier transform of a short time window of speech and decorrelating the spectrum using a cosine transform, then taking the first (most significant) coefficients."
So if you run such an algorithm on both recordings you would end up with coefficients that represent the recordings and it might be far easier to measure and establish similarities between the two.
But again, now you come to the question of defining the 'similarity coefficient', and introducing dogs and horses did not really help.
(Well, it does a bit, but in terms of evaluating algorithms and choosing one over another, you will have to do better.)
There are many different algorithms - the general name for this task is Speaker Identification - start with this Wikipedia page and work from there: http://en.wikipedia.org/wiki/Speaker_recognition
I'm not sure this will work for sound files, but I hope it gives you an idea of how to proceed. It is a basic way of finding a pattern (an image) in another image.
You first have to calculate the FFT of both sound files and then do a correlation. As a formula it would look like this (pseudocode):
    fftSoundFile1 = fft(soundFile1);
    fftConjSoundFile2 = conj(fft(soundFile2));
    result_corr = real(ifft(fftSoundFile1 .* fftConjSoundFile2));
where fft = fast Fourier transform, ifft = inverse FFT and conj = complex conjugate. The FFT is performed on the sample values of the sound files.
The peaks in the result_corr vector then give you the positions of high correlation.
Note that both sound files must in this case be of the same length; otherwise you have to pad the shorter one with zeros up to the length of the longer one.
Edit: .* means (in Matlab style) a component-wise multiplication; you must not do a vector multiplication!
Next edit: note that you have to operate with complex numbers, but there are several complex-number classes out there, so I think you won't have to worry about that.
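For what it's worth, here is the same idea as a small NumPy sketch; zero-padding both signals to a common length n also avoids circular wrap-around, and the input arrays below are made up:

    import numpy as np

    def fft_cross_correlation(signal1, signal2):
        n = len(signal1) + len(signal2) - 1            # pad to avoid circular wrap-around
        fft1 = np.fft.fft(signal1, n)
        fft2_conj = np.conj(np.fft.fft(signal2, n))
        return np.real(np.fft.ifft(fft1 * fft2_conj))  # peaks mark high correlation

    corr = fft_cross_correlation(np.random.randn(1000), np.random.randn(800))
    print(np.argmax(corr))  # lag with the best alignment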
I have a number of tracks recorded by a GPS, which more formally can be described as a number of line strings.
Now, some of the recorded tracks might be recordings of the same route, but because of inaccuracies in the GPS system, the fact that the recordings were made on separate occasions, and the fact that they might have been recorded while travelling at different speeds, they won't match up perfectly. Still, they look close enough when viewed on a map by a human to determine that it's actually the same route that has been recorded.
I want to find an algorithm that calculates the similarity between two line strings. I have come up with some home-grown methods to do this, but would like to know if this is a problem that already has good algorithms to solve it.
How would you calculate the similarity, given that similar means represents the same path on a map?
Edit: For those unsure of what I'm talking about, please look at this link for a definition of what a line string is: http://msdn.microsoft.com/en-us/library/bb895372.aspx - I'm not asking about character strings.
Compute the Fréchet distance on each pair of tracks. The distance can be used to gauge the similarity of your tracks.
Math alert: Fréchet was a pioneer in the field of metric spaces, which is relevant to your problem.
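As an illustration, here is a Python sketch of the *discrete* Fréchet distance, a common approximation of the continuous one for sampled GPS tracks; the example coordinates are made up:

    from math import dist            # Python 3.8+
    from functools import lru_cache

    def discrete_frechet(p, q):
        # p and q are lists of (x, y) points
        @lru_cache(maxsize=None)
        def c(i, j):
            d = dist(p[i], q[j])
            if i == 0 and j == 0:
                return d
            if i == 0:
                return max(c(0, j - 1), d)
            if j == 0:
                return max(c(i - 1, 0), d)
            # the "leash" must cover this pair and the cheapest way to reach it
            return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)
        return c(len(p) - 1, len(q) - 1)

    track_a = [(0, 0), (1, 0.1), (2, -0.1), (3, 0)]
    track_b = [(0, 0.2), (1, 0.0), (2, 0.1), (3, 0.1)]
    print(discrete_frechet(track_a, track_b))  # small value = similar tracks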
I would add a buffer around the first line based on the estimated probable error, and then determine if the second line fits entirely within the buffer.
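If you happen to work in Python, the Shapely library expresses this buffer test directly; the tracks and the error margin below are hypothetical:

    from shapely.geometry import LineString

    track_a = LineString([(0, 0), (1, 0.1), (2, -0.1), (3, 0)])
    track_b = LineString([(0, 0.05), (1, 0), (2, 0.05), (3, 0.05)])

    error_margin = 0.2                  # estimated probable GPS error
    buffer_zone = track_a.buffer(error_margin)
    print(track_b.within(buffer_zone))  # True if track_b lies entirely inside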
To determine "same route," create the minimal set of normalized path vectors, calculate the total power differences and compare the total to a quality measure.
1. Normalize the GPS waypoints on total path length,
2. walk the vectors of the paths together, creating a new set of path vectors for each path based upon the shortest vector at each waypoint,
3. calculate the total power differences between the endpoints of each vector in the normalized paths, weighting for vector length, and
4. compare against a quality measure.
Tune the power of the differences (start with, say, squared differences) and the quality measure (say, as a percentage of the total power differences) visually. This algorithm produces a continuous quality measure of the path match as well as a binary result (are the paths the same?).
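A rough Python sketch of this procedure, under several assumptions of mine: both paths are resampled to n points by normalised arc length, the "power differences" are squared distances, and the quality threshold is a made-up number:

    import math

    def point_at(path, t):
        # point at fraction t (0..1) of the path's total arc length
        lengths = [math.dist(a, b) for a, b in zip(path, path[1:])]
        target, acc = t * sum(lengths), 0.0
        for (a, b), seg in zip(zip(path, path[1:]), lengths):
            if acc + seg >= target:
                f = (target - acc) / seg if seg else 0.0
                return (a[0] + f * (b[0] - a[0]), a[1] + f * (b[1] - a[1]))
            acc += seg
        return path[-1]

    def same_route(path_a, path_b, n=50, quality=0.01):
        # mean squared distance between corresponding normalised points
        total = sum(math.dist(point_at(path_a, i / (n - 1)),
                              point_at(path_b, i / (n - 1))) ** 2
                    for i in range(n))
        return total / n, total / n < quality  # (measure, same route?)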
Paul Tomblin said: "I would add a buffer around the first line based on the estimated probable error, and then determine if the second line fits entirely within the buffer."
You could modify the algorithm as the normalized vector endpoints are compared: either determine whether any endpoint difference is above a certain size (implementing Paul's buffer idea), or, when an endpoint falls outside the "buffer", use that fact to ignore that endpoint difference entirely, allowing a comparison that ignores side trips.
You could walk along each point (Pa) of LineString A and measure the distance from Pa to the nearest line segment of LineString B, averaging all of these distances.
This is not a quick or perfect method, but it should give you a useful number and is pretty quick to implement.
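A short Python sketch of that averaging, using a standard clamped-projection distance from a point to a segment; the helper names are mine:

    import math

    def point_segment_distance(p, a, b):
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        if dx == 0 and dy == 0:
            return math.hypot(px - ax, py - ay)  # degenerate segment
        # project p onto the segment ab, clamping the parameter to [0, 1]
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
        return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

    def average_distance(track_a, track_b):
        segments = list(zip(track_b, track_b[1:]))
        return sum(min(point_segment_distance(p, a, b) for a, b in segments)
                   for p in track_a) / len(track_a)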
Do the line strings start and finish at similar points, or are they of very different extents?
If you consider a single line string to be a sequence of [x,y] points (or [x,y,z] points), then you could compute the similarity between each pair of line strings using the Needleman-Wunsch algorithm. As described in the referenced Wikipedia article, the Needleman-Wunsch algorithm requires a "similarity matrix" which defines the distance between a pair of points. However, it would be easy to use a function instead of a matrix. In your case you could simply use the 2D Euclidean distance function (or a 3D Euclidean function if your points have elevation) to provide the distance between each pair of points.
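A sketch of that adaptation: Needleman-Wunsch over point sequences, scoring a match as the negative Euclidean distance, with a fixed gap penalty that is just a placeholder value:

    import math

    def align_score(p, q, gap=-1.0):
        # p and q are lists of (x, y) points; higher scores mean more similar
        m, n = len(p), len(q)
        score = [[0.0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            score[i][0] = i * gap
        for j in range(1, n + 1):
            score[0][j] = j * gap
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                match = score[i - 1][j - 1] - math.dist(p[i - 1], q[j - 1])
                score[i][j] = max(match,                  # align the two points
                                  score[i - 1][j] + gap,  # skip a point of p
                                  score[i][j - 1] + gap)  # skip a point of q
        return score[m][n]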
I actually side with the person (Aaron F) who said that you might be interested in the Levenshtein distance problem (and cited this). His answer seems to me to be the best so far.
More specifically, Levenshtein distance (also called edit distance) does not strictly measure the character-by-character distance; it also allows you to perform insertions and deletions. The standard algorithm for this distance measure runs in quadratic time (pretty slow if your strings are long), but computational biologists have pretty good heuristics for it that might be of interest to you in their own right. Check out BLAST and FASTA.
In your problem, it seems that you are dealing with differences between strings of numbers, and you care about the values of the numbers. If you give more information, I might be able to direct you to the right variant of BLAST/FASTA/etc. for your purposes. In any case, you might consider adapting BLAST and FASTA to your needs. They're quite simple.
[1]: http://en.wikipedia.org/wiki/Levenshtein_distance, http://www.nist.gov/dads/HTML/Levenshtein.html