I'm wondering if there is a built-in function in R that can find the cosine similarity (or cosine distance) between two arrays?
Currently, I implemented my own function, but I can't help but think that R should already come with one.
These sorts of questions come up all the time (for me--and, as evidenced by the r-tagged SO question list, for others as well):
is there a function, either in base R or in any R package, that does X? and if so,
where can I find it among the 2000+ R packages on CRAN?
Short answer: give the sos package a try when these sorts of questions come up.
One of the earlier answers gave cosine along with a link to its help page. This is probably exactly what the OP wants. When you look at the linked-to page you see that this function is in the lsa package.
But how would you find this function if you didn't already know which package to look for it in?
You can always try the standard R help functions ('>' below just means the R prompt):
> ?<some_name>
> ??<some_name>
> apropos("<some_name>")
If these fail, install and load the sos package, then call
> findFn("<some_name>")
findFn is also aliased to "???", though I don't often use that because I don't think you can pass in arguments other than the search term.
For the question here, try this:
> library(sos)
> findFn("cosine", maxPages=2, sortby="MaxScore")
The additional arguments passed in (maxPages=2 and sortby="MaxScore") just limit the number of results returned and specify how the results are ranked, respectively--i.e., "find a function named 'cosine', or one that has the term 'cosine' in its description; return only two pages of results; and order them by descending relevance score".
The findFn call above returns a data frame with nine columns and the results as rows--rendered as HTML.
Scanning the last column, Description and Link, at item (row) 21 you find:
Cosine Measures (Matrices)
This text is also a link; clicking on it takes you to the help page for that function in the package that contains it--in other words,
using findFn you can pretty quickly find the function you want even though you have no idea which package it's in.
It looks like a few options are already available, but I just stumbled across an idiomatic solution I like so I thought I'd add it to the list.
install.packages('proxy') # Let's be honest, you've never heard of this before.
library('proxy') # Library of similarity/dissimilarity measures for 'dist()'
dist(m, method="cosine")
Taking the comment from Jonathan Chang, I wrote this function to mimic dist. No extra packages to load.
cosineDist <- function(x){
  # 1 - (dot products of the rows) / (outer product of the row norms)
  as.dist(1 - x %*% t(x) / (sqrt(rowSums(x^2) %*% t(rowSums(x^2)))))
}
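For example (made-up data), it behaves like dist() and returns a dist object of pairwise cosine distances between the rows:
m <- matrix(rnorm(30), nrow = 5)   # 5 observations in rows
cosineDist(m)                      # 'dist' object of pairwise cosine distances
as.matrix(cosineDist(m))           # as a full 5 x 5 matrix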
Check these functions: lsa::cosine(), clv::dot_product(), and arules::dissimilarity().
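For instance, a quick check of lsa::cosine (assuming the lsa package is installed), which works on two vectors or on the columns of a matrix:
library(lsa)
x <- c(1, 2, 3)
y <- c(2, 4, 7)
cosine(x, y)        # cosine similarity of the two vectors
cosine(cbind(x, y)) # pairwise cosine similarities between matrix columns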
You can also check the vegan package: http://cran.r-project.org/web/packages/vegan//index.html
The function vegdist in this package provides a variety of dissimilarity (distance) measures, such as manhattan, euclidean, canberra, bray, kulczynski, jaccard, gower, altGower, morisita, horn, mountford, raup, binomial, chao, or cao. Please check the package's PDF manual for the definitions, or consult this reference: https://stats.stackexchange.com/a/33001/12733.
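For example (a made-up non-negative data matrix; any of the methods listed above can be passed via the method argument):
library(vegan)
m <- matrix(sample(0:10, 20, replace = TRUE), nrow = 4)
vegdist(m, method = "jaccard")   # dissimilarities between the 4 rows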
If you have a dot-product matrix, you can use this function to compute the cosine similarity matrix:
get_cos = function(S){
  doc_norm = sqrt(diag(S))                 # norm of each vector, recovered from the dot products
  divide_one_norm = S / doc_norm           # divide entry (i, j) by the norm of vector i
  cosine = t(divide_one_norm) / doc_norm   # ... and by the norm of vector j
  return(cosine)
}
The input S is the matrix of dot products. Simply, S = dt %*% t(dt), where dt is your dataset (observations in rows).
The function just divides each dot product by the norms of the two corresponding vectors.
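A hypothetical usage example, with dt as a small random dataset:
dt <- matrix(rnorm(20), nrow = 4)  # 4 observations in rows
S  <- dt %*% t(dt)                 # dot-product (Gram) matrix
get_cos(S)                         # 4 x 4 cosine similarity matrix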
Cosine similarity is not invariant to shifts. Correlation similarity may be a better choice, because it fixes this problem and is also connected to squared Euclidean distances (if the data are standardized).
If you have two objects described by p-dimensional vectors of features,
x1 and x2 both of dimension p, you can compute the correlation similarity by cor(x1, x2).
Note that in statistics correlation is a scaled-moment notion, so it is naturally thought of as correlation between random variables. Calling cor(dataset) computes the correlations between the columns of the data matrix.
In the typical situation of an (n x p) data matrix X, with units (or objects) in its rows and variables (or features) in its columns, you can compute the correlation similarity matrix simply by computing cor on the transpose of X and giving the result a dist class:
as.dist(cor(t(X)))
By the way, you can compute a correlation dissimilarity matrix in the same way. The following makes a distinction based on both the size of the angle and the orientation of the objects' vectors:
1 - cor(t(X))
This one doesn't care about the orientation, only the size of the angle:
1 - abs(cor(t(X)))
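A small illustration of the shift point, with made-up vectors and the cosine computed by hand:
x1 <- c(1, 2, 3, 4)
x2 <- x1 + 10                                # x2 is just a shifted copy of x1
cor(x1, x2)                                  # 1: correlation ignores the shift
sum(x1 * x2) / sqrt(sum(x1^2) * sum(x2^2))   # cosine similarity is not 1 (about 0.95)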
Related
I'm trying to find the (numerical) curvature at specific points. I have data stored in an array, and I essentially want to find the local curvature at every separate point. I've searched around, and found three different implementations for this in MATLAB: diff, gradient, and del2.
If my array's name is arr, I have tried the following implementations:
curvature = diff(diff(arr));
curvature = diff(arr,2);
curvature = gradient(gradient(arr));
curvature = del2(arr);
The first two seem to output the same values. This makes sense, because they're essentially the same implementation. However, the gradient and del2 implementations give different values from each other and from diff.
I can't figure out from the documentation precisely how the implementations work. My guess is that some of them are some type of two-sided derivative, and some of them are not. Another thing that confuses me is that my current implementations use only the data from arr. arr is my y-axis data, the x-axis essentially being time. Do these functions default to a step size of 1 or something like that?
If it helps, I want an implementation that takes the curvature at the current point using only previous array elements. For context, my data is such that a curvature calculation based on data in the future of the current point wouldn't be useful for my purposes.
tl;dr I need a rigorous curvature at a point implementation that uses only data to the left of the point.
Edit: I now understand somewhat better what's going on, thanks to the answers below. This is what I'm referring to:
gradient calculates the central difference for interior data points. For example, consider a matrix with unit-spaced data, A, that has horizontal gradient G = gradient(A). The interior gradient values, G(:,j), are
G(:,j) = 0.5*(A(:,j+1) - A(:,j-1));
The subscript j varies between 2 and N-1, with N = size(A,2).
Even so, I still want to know how to do a "lefthand" computation.
diff simply takes the difference between adjacent elements of arr, which is exactly why you lose one element each time you apply diff. For example, an array of 10 elements has only 9 differences.
gradient and del2 are for derivatives. Of course, you can use diff to approximate a derivative by dividing the differences by the steps. Usually the steps are equally spaced, but they do not have to be. This answers your question about why x is not used in the calculation: it's fine if your x is not uniformly spaced, as long as you divide by the actual spacing.
So why does gradient give us an array with the same length as the original array? The manual clearly explains how the boundaries are handled:
The gradient values along the edges of the matrix are calculated with single-sided differences, so that
G(:,1) = A(:,2) - A(:,1);
G(:,N) = A(:,N) - A(:,N-1);
Applying gradient twice and using del2 are not necessarily the same, although they are highly correlated. It all comes down to how you calculate/approximate the 2nd-order derivative. The difference is that the former approximates the 2nd derivative by taking the 1st derivative twice, while the latter approximates the 2nd derivative directly. Please refer to the help manual; the formulas are documented there.
Okay, do you really want curvature or the 2nd derivative for each point on arr? They are very different. https://en.wikipedia.org/wiki/Curvature#Precise_definition
You can use diff to get the 2nd derivative from the left. Since diff takes differences in the forward direction, e.g. x(2)-x(1), you can first flip x from left to right and then use diff. Some code like:
x = fliplr(x);
first = diff(x)./h;       % first derivative (left-sided after the flip)
second = diff(first)./h;  % second derivative
where h is the spacing between the x values. Notice I use ./, which indicates that h can be an array (i.e. non-uniform spacing).
I am using OpenSURF to find the best matches between two images. It finds the matching points. I am wondering how I can know the degree of similarity between two matched points (how strong the match is). I would appreciate your help.
Thanks.
This is well documented in the literature, including the SURF paper itself. You simply compute the distance (e.g. Euclidean, Mahalanobis) between the descriptor vectors. Since the squared distance is faster to compute (it avoids a square root), you might also see the dot product of the vectors used instead; for unit-length descriptors such as SURF's, it carries the same information as the squared Euclidean distance.
Standard practice is then to decide whether or not to accept a match based on this distance and a threshold. The SIFT paper (Lowe 2004) gives a slightly more complicated way of accepting matches if I recall correctly, so you might want to read that too.
In OpenSURF, the descriptors are float vectors stored in the Ipoint class - so once you have called Surf.getDescriptors and populated the Ipoint vector given to the constructor, you simply get the Ipoint.descriptor fields of a pair of Ipoints and compute the distance.
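As a language-agnostic illustration (sketched in R with two made-up unit-length descriptors), the squared Euclidean distance and the dot product carry the same information once the descriptors are normalized:
set.seed(42)
d1 <- rnorm(64); d1 <- d1 / sqrt(sum(d1^2))  # pretend descriptors, normalized to unit length
d2 <- rnorm(64); d2 <- d2 / sqrt(sum(d2^2))
sum((d1 - d2)^2)        # squared Euclidean distance
2 * (1 - sum(d1 * d2))  # identical value via the dot product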
I would like to generate pseudo-data that conforms to the distribution of actual sampled data. I'm looking for an efficient and accurate method in C/Obj-C for iPhone development. Currently, the occurrence of 60 different categories in 1000 sampled events has been assigned a probability (0-1). I want to generate 1000 new events which conform to the same probabilities.
Clarification {
I have a categorical distribution of set {1,2,...,60}. I understand that samples from this distribution will conform to the probabilities of each category. Therefore I need to take 1000 samples from this distribution. I have determined (thanks to answers so far) that I need to:
Normalize this distribution by summing the values and dividing each by the sum.
Order them.
Create a CDF by replacing each value with the sum of all previous values.
Then I can generate a uniform random number between 0 and 1, and find the greatest number in the CDF whose value is less than or equal to the number just chosen, and return the category corresponding to this CDF value.
}
Q1. Is this the correct way to solve the problem?
Q2. The caveat still holds that I'm using NSDecimals to store the category probabilities. Are there any libraries available or functions in Cocoa or Math.h, etc. that I can use to do this simply? I'm open to trying new libraries, currently only have Core-Plot and the standard Cocoa libraries in this project. Thanks.
Your problem description is unclear. But it sounds like you're looking for inverse transform sampling.
Basically, you first need to generate a cumulative distribution function (CDF) corresponding to your original data; call it F(x). You then generate uniform random data in the range 0 to 1 and transform it using the inverse CDF, i.e. F^-1(x).
Here's my suggestion. This assumes that when you say "normalized probability" you mean the sum of the probabilities of all types is 1. (If not, you'll need to rescale so that's the case.)
Make up some order for your 60 types. (Say, alphabetic.)
Generate a random number between 0 and 1. (Call it your "target".)
Create an accumulator, initially at 0.
Loop through your 60 types. For each type:
Add the probability of that type of event to your accumulator.
If your accumulator is >= your target, generate an event of that type and stop.
If you do that 1000 times, I believe you'll get the distribution you're looking for.
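The OP is after Objective-C, but here is a minimal sketch of the same accumulator/inverse-CDF idea (in R, with made-up probabilities), plus the built-in equivalent:
set.seed(1)
probs <- runif(60); probs <- probs / sum(probs)   # hypothetical normalized probabilities for 60 categories
cdf <- cumsum(probs)                              # cumulative distribution
draw_one <- function() which(cdf >= runif(1))[1]  # first category whose CDF reaches the uniform target
events <- replicate(1000, draw_one())
events2 <- sample(1:60, size = 1000, replace = TRUE, prob = probs)  # the same thing via sample()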
I am working in a chemistry/biology project. We are building a web-application for fast matching of the user's experimental data with predicted data in a reference database. The reference database will contain up to a million entries. The data for one entry is a list (vector) of tuples containing a float value between 0.0 and 20.0 and an integer value between 1 and 18. For instance (7.2394 , 2) , (7.4011, 1) , (9.9367, 3) , ... etc.
The user will enter a similar list of tuples and the web-app must then return the - let's say - top 50 best matching database entries.
One thing is crucial: the search algorithm must allow for discrepancies between the query data and the reference data because both can contain small errors in the float values (NOT in the integer values). (The query data can contain errors because it is derived from a real-life experiment and the reference data because it is the result of a prediction.)
Edit - Moved text to answer -
How can we get an efficient ranking of 1 query on 1 million records?
You should add a physicist to the project :-) This is a very common problem when comparing functions; for example, look here:
http://en.wikipedia.org/wiki/Autocorrelation
http://en.wikipedia.org/wiki/Correlation_function
In the first link you can read: "The SEQUEST algorithm for analyzing mass spectra makes use of autocorrelation in conjunction with cross-correlation to score the similarity of an observed spectrum to an idealized spectrum representing a peptide."
An efficient linear scan of 1 million records of that type should take a fraction of a second on a modern machine; a compiled loop should be able to do it at about memory bandwidth, which would move that much data in two or three milliseconds.
But if you really need to optimise this, you could construct a hash table of the integer values, which would divide the job by the number of integer bins. And if the data is stored sorted by the floats, that improves the locality of the float matching: you know you can stop once you're out of tolerance. Storing the offsets of each of a number of bins would give you a position to start from.
I guess I don't see the need for a fancy algorithm yet... describe the problem a bit more, perhaps (you can assume a fairly high level of chemistry and physics knowledge if you like; I'm a physicist by training)?
Ok, given the extra info, I still see no need for anything better than a direct linear search, if there's only 1 million reference vectors and the algorithm is that simple. I just tried it, and even a pure Python implementation of linear scan took only around three seconds. It took several times longer to make up some random data to test with. This does somewhat depend on the rather lunatic level of optimisation in Python's sorting library, but that's the advantage of high level languages.
import random
# make up a million random (float, int) reference tuples to test with
r = [(random.uniform(0, 20), random.randint(1, 18)) for i in range(1000000)]
# this is a decorate-sort-undecorate pattern
# look for matches to (7, 9)
# obviously, you can use whatever distance expression you want
zz = [(abs(7 - x) + abs(9 - y), x, y) for x, y in r]
zz.sort()
# return the 50 best matches
[(x, y) for a, x, y in zz[:50]]
Can't you sort the tuples and perform a binary search on the sorted array?
I assume your database is built once and for all, and the positions of the entries are not important. You can sort this array so that the tuples are in a given order. When a tuple is entered by the user, you just look at the middle of the sorted array. If the query value is larger than the center value, you repeat the work on the upper half, otherwise on the lower one.
The worst case is O(log n).
If you can "map" your reference data to x-y coordinates on a plane there is a nifty technique which allows you to select all points under a given distance/tolerance (using Hilbert curves).
Here is a detailed example.
One approach we are trying ourselves, which allows for the discrepancies between query and reference, is binning the float values. We are still testing, and want to offer the user a choice of bin sizes: 0.1, 0.2, 0.3, or 0.4. Binning leaves us with between 50 and 200 bins, each with a corresponding integer value between 0 and 18, where 0 means there was no value within that bin. The reference data can be pre-binned and stored in the database. We can then take the binned query data and compare it with the reference data. One approach could be: for all bins, subtract the query integer value from the reference integer value. By summing up all the differences we get the similarity score, with the most similar reference entries resulting in the lowest scores.
Another (simpler) search option we want to offer is one where the user only enters the float values. The integer values in both the query and the reference list can then be set to 1. We then use the Hamming distance to compute the difference between the binned query and reference values. I have previously asked about an efficient algorithm for that search.
This binning is only one way of achieving our goal. I am open to other suggestions. Perhaps we can use Principal Component Analysis (PCA), as described here
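A rough sketch of the binning idea above (in R, with made-up names; bin size 0.2 and absolute differences assumed for the score):
bin_profile <- function(peaks, bin_size = 0.2, max_val = 20) {
  breaks  <- seq(0, max_val, by = bin_size)
  idx     <- cut(peaks$value, breaks = breaks, labels = FALSE, include.lowest = TRUE)
  profile <- integer(length(breaks) - 1)
  profile[idx] <- peaks$count            # integer value for occupied bins, 0 elsewhere
  profile
}
score <- function(q, r) sum(abs(q - r))  # lower score = more similar
query <- data.frame(value = c(7.2394, 7.4011, 9.9367), count = c(2, 1, 3))
ref   <- data.frame(value = c(7.25,   7.38,   9.90),   count = c(2, 1, 3))
score(bin_profile(query), bin_profile(ref))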
I have a system that stores vectors and allows a user to find the n most similar vectors to the user's query vector. That is, a user submits a vector (I call it a query vector) and my system spits out "here are the n most similar vectors." I generate the similar vectors using a KD-Tree and everything works well, but I want to do more. I want to present a list of the n most similar vectors even if the user doesn't submit a complete vector (a vector with missing values). That is, if a user submits a vector with three dimensions, I still want to find the n nearest vectors (stored vectors are of 11 dimensions) I have stored.
I have a couple of obvious solutions, but I'm not sure either one is very good:
Create multiple KD-Trees, each built using one of the most popular subsets of dimensions a user will search for. That is, if a user submits a query vector of three dimensions, x, y, z, I match that query against an already-built KD-Tree which only contains vectors of the three dimensions x, y, z.
Ignore KD-Trees when a user submits a query vector with missing values and compare the query vector to the vectors (stored in a table in a DB) one by one using something like a dot product.
This has to be a common problem, any suggestions? Thanks for the help.
Your first solution might be fastest for queries (since the tree-building doesn't consider splits in directions that you don't care about), but it would definitely use a lot of memory. And if you have to rebuild the trees repeatedly, it could get slow.
The second option looks very slow unless you only have a few points. And if that's the case, you probably didn't need a kd-tree in the first place :)
I think the best solution involves getting your hands dirty in the code that you're working with. Presumably the nearest-neighbor search computes the distance between the point in the tree leaf and the query vector; you should be able to modify this to handle the case where the point and the query vector are different sizes. E.g. if the points in the tree are given in 3D, but your query vector is only length 2, then the "distance" between the point (p0, p1, p2) and the query vector (x0, x1) would be
sqrt( (p0-x0)^2 + (p1-x1)^2 )
I didn't dig into the java code that you linked to, but I can try to find exactly where the change would need to go if you need help.
-Chris
PS - you might not need the sqrt in the equation above, since comparing squared distances usually gives the same result.
EDIT
Sorry, didn't realize it would be so obvious in the source code. You should use this version of the neighbor function:
nearest(double [] key, int n, Checker<T> checker)
And implement your own Checker class; see their EuclideanDistance.java for the Euclidean version. You may also need to comment out any KeySizeException that the query code throws, since you know that you can handle differently sized keys.
Your second option looks like a reasonable solution for what you want.
You could also populate the missing dimensions with the most important (or average, or whatever you think appropriate) values, if there are any.
You could try using the existing KD tree -- by taking both branches when the split is for a dimension that is not supplied by the source vector. This should take less time than doing a brute force search, and might be less trouble than trying to maintain a bunch of specialized trees for dimension subsets.
You would need to adapt your N-closest algorithm (without more info I can't advise you on that...), and for distance you would use the sum of the squares of only those elements supplied by the source vector.
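For illustration, a minimal sketch of such a distance in R (hypothetical names), with NA marking the dimensions the user didn't supply:
partial_dist2 <- function(query, point) {
  keep <- !is.na(query)                 # only the dimensions present in the query
  sum((query[keep] - point[keep])^2)    # squared distance over those dimensions
}
partial_dist2(c(1.0, NA, 3.0), c(0.5, 7.0, 2.0))   # uses dimensions 1 and 3 only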
Here's what I ended up doing: when a user didn't specify a value (when their query vector lacked a dimension), I simply adjusted my matching range (in the API) to something huge so that it matched any value.