Find all lat/long pairs within a given distance from a lat/long pair - database

I have a database with millions of lat/long pairs. I would like to implement a function to gather all lat/long pairs within a specified distance of a given lat/long pair. Is there a better way to do this than iterating over every pair in the database and computing its distance to the given pair? I'd like to avoid brute force if at all possible!
I should add that I will never be searching for lat/long pairs more than 1 mile from the given lat/long pair.

Many databases support storage of spatial types directly and include spatial queries. These handle the distance computation correctly for you and, backed by a spatial index, provide a far more efficient way of pulling the matching rows than a full scan; a concrete sketch follows the list below.
For examples, see:
Spatial Data in SQL Server
Geometric types in PostgreSQL
MySQL Spatial Extensions
SpatiaLite
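To make the spatial-query route concrete, here is a minimal PostGIS-flavoured sketch (the table and column names are invented for illustration; the other databases listed above have equivalent constructs). ST_DWithin on the geography type takes a radius in metres and can use the spatial index, so it avoids scanning every row:

-- Hypothetical schema: a points table with a geography column and a GiST index.
CREATE TABLE points (
    id   bigserial PRIMARY KEY,
    geom geography(Point, 4326) NOT NULL
);
CREATE INDEX points_geom_idx ON points USING GIST (geom);

-- All points within 1 mile (about 1609 m) of a given lat/long.
SELECT id
FROM points
WHERE ST_DWithin(
    geom,
    ST_SetSRID(ST_MakePoint(-73.9857, 40.7484), 4326)::geography,  -- lon, lat order
    1609.34
);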

What you can do is cluster the database beforehand: divide it into, say, 3-mile grid cells and store each point's cell ID with it. When you search, you only need to compare against points in the same cell (and, for query points near a cell edge, its neighbouring cells).
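A PostgreSQL-flavoured sketch of that idea, assuming the table stores plain lat/lon columns (the names and cell size are made up; 3 miles is roughly 0.043 degrees of latitude):

-- Assign each point to a grid cell about 3 miles on a side.
ALTER TABLE locations ADD COLUMN cell_x integer, ADD COLUMN cell_y integer;

UPDATE locations
SET cell_x = floor(lon / 0.043),
    cell_y = floor(lat / 0.043);

CREATE INDEX locations_cell_idx ON locations (cell_x, cell_y);

-- Candidate search: same cell plus its 8 neighbours (:qlat/:qlon are the
-- query point); the exact distance check then runs only on this small set.
-- Longitude cells shrink toward the poles, so widen the window at extreme
-- latitudes if you need to.
SELECT *
FROM locations
WHERE cell_x BETWEEN floor(:qlon / 0.043) - 1 AND floor(:qlon / 0.043) + 1
  AND cell_y BETWEEN floor(:qlat / 0.043) - 1 AND floor(:qlat / 0.043) + 1;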

Related

Get projection limits from Postgis

I receive spatial queries to my API as lat/lon coordinate pairs. My spatial data is in projections that don't cover the entire globe, so some queries fall out of bounds.
I'd like to respond to such queries with a helpful error message. Rather than digging through GIS specifications or standards to find each projection's boundaries (and deriving valid lat/lon ranges from them), I'd like to know whether I can either ask PostGIS for the limits, or ask whether a specific point is within the limits and in what way it is wrong. That way I could support many projections easily.
It looks like PostGIS has this information, because for an out-of-range query it answers:
transform: couldn't project point (-77.0331 -12.1251 0):
latitude or longitude exceeded limits (-14)
I'm using PostGIS through GeoDjango's distance query (the django.contrib.gis.geos.geometry.GEOSGeometry.distance function).
Neither PostGIS nor PROJ.4 (follow this thread) stores these bounds. Each projection's bounds are unique, and they are traditionally published by the authority that defined the projection.
One of the primary sources for this data is https://www.epsg-registry.org: click "Retrieve by code", enter (e.g.) 27200, and view the "Area of Use" fields.
Much of the same information is repeated at (e.g.) http://epsg.io/27200; look for "bounds".
If you need this data, I suggest you make a new table to collect it.
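A small sketch of such a table (the names are invented, and the bounds shown are rough placeholders to be copied from the registry):

-- Hypothetical per-projection lat/lon bounds, filled from the EPSG "Area of Use" data.
CREATE TABLE srid_bounds (
    srid    integer PRIMARY KEY,
    min_lon double precision,
    min_lat double precision,
    max_lon double precision,
    max_lat double precision
);

-- e.g. EPSG:27200; copy the real values from the registry.
INSERT INTO srid_bounds VALUES (27200, 165.87, -47.65, 179.27, -33.89);

-- Check a query point before calling transform/distance.
SELECT (:lon BETWEEN min_lon AND max_lon
        AND :lat BETWEEN min_lat AND max_lat) AS in_bounds
FROM srid_bounds
WHERE srid = 27200;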

How to find the smallest Euclidean vector difference using a database engine?

I want to store thousands of ~100 element vectors in a database, and then I need to search for the record with the smallest difference.
e.g. when comparing [4,9,3] and [5,7,2], take the element-wise diff: [-1,2,1] and then compute the Euclidean length: sqrt(1+4+1) = 2.45.
I need to be able to search for the record containing this lowest value.
I don't think I can do this efficiently in MySQL. I hear Solr or Elasticsearch might provide a solution; can someone point me towards, or post, an example of how this kind of search can be done efficiently?
I think the answer to your question is here, but this is also quite an interesting link.
Unfortunately, in general you have to compare the input vector to every other vector in the database.
If you know something more about your data, you may be able to partition it into smaller subsets of vectors and reduce the number of comparisons.
In PostgreSQL you can use the C/C++ extensibility interface to write your own function (like here), or use K-Nearest-Neighbor indexing. When a GPU is available, you can also look at "GPU-based PostgreSQL Extensions for Scalable High-throughput Pattern Matching".
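For the PostgreSQL route, here is a hedged sketch using the contrib cube extension, which stores fixed-dimension float vectors, computes Euclidean distance, and (since PostgreSQL 9.6) supports index-assisted nearest-neighbour ordering via the <-> operator; note cube is limited to 100 dimensions by default, so ~100-element vectors just fit. All names are illustrative:

CREATE EXTENSION IF NOT EXISTS cube;

CREATE TABLE vectors (
    id bigserial PRIMARY KEY,
    v  cube NOT NULL
);
CREATE INDEX vectors_v_idx ON vectors USING GIST (v);

INSERT INTO vectors (v) VALUES
    (cube(ARRAY[4, 9, 3]::float8[])),
    (cube(ARRAY[5, 7, 2]::float8[]));

-- Record with the smallest Euclidean difference from [5,7,2];
-- <-> is cube's Euclidean distance operator (older versions can use
-- cube_distance(), but without index-assisted ordering).
SELECT id, v <-> cube(ARRAY[5, 7, 2]::float8[]) AS dist
FROM vectors
ORDER BY v <-> cube(ARRAY[5, 7, 2]::float8[])
LIMIT 1;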

Efficiently search large DB for similar records

Suppose the following: as input, one gets a record consisting of N numbers and booleans. This vector has to be compared against a database of vectors, each of which includes M additional "result" elements. That is, the database holds P vectors of size N+M.
Each vector in the database holds a boolean as its last element. The aim of the exercise is to find, as fast as possible, the record(s) that are the closest match to the input vector AND whose result vector ends with a TRUE boolean.
To make the above a bit more comprehensible, consider the following example:
A database with personal health information, consisting of records holding:
age
gender
weight
length
heart issues (boolean)
lung issues (boolean)
residence
alternative plan chosen (if done)
accepted offer
The program would then get an input like
36 Male 185pound 68in FALSE FALSE NYC
It would then find out which plan would be the best to offer the client, based on what's in the database.
I know of a few methods that would help do this, e.g. the Levenshtein distance method. However, most methods would involve searching the entire database for the best matches.
Are there any algorithms or methods that would cut back on the processing power/time required? I can't imagine that, e.g., insurance agencies don't use more efficient methods to search their databases...
Any insights into this area would be greatly appreciated!
Assumption: this is a relational database. If instead it is NoSQL, then please provide more info on which DB.
Do you have the option to create bitmap indexes? They can cut down the number of records returned. That is useful for almost all of the columns, since their cardinalities are low.
After that the only column left is the residence, and you should use a geo distance for that.
If you are unable to create bitmap indexes, then what are your filtering options? If none, you have to do a full table scan.
For each of the components, e.g. age, gender, etc., you need to
(a) determine a distance metric, and
(b) determine how to compute both that metric and the overall distance between records.
I'm not sure Levenshtein would work here - you need to take each field separately to find its contribution to the whole distance measure.
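A rough sketch of that two-step idea in plain SQL, with the column names, scaling factors and filters invented purely for illustration: filter on the cheap low-cardinality columns first (where the bitmap indexes help), then rank only the survivors by a hand-weighted distance.

SELECT id,
       sqrt( power((age    - 36)  / 10.0, 2)   -- scale each component so no
           + power((weight - 185) / 25.0, 2)   -- single field dominates
           + power((height - 68)  / 4.0,  2) ) AS dist
FROM clients
WHERE gender         = 'M'          -- low-cardinality filters first
  AND heart_issues   = FALSE
  AND lung_issues    = FALSE
  AND accepted_offer = TRUE         -- the required TRUE "result" element
ORDER BY dist
LIMIT 5;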

calculate or save spatial data

When working with spatial data in a database like PostGIS, is it a good approach to calculate the intersection of two polygons, or the area of a polygon, on every SELECT? Or is it better for performance to do the calculations on an INSERT, UPDATE or DELETE statement and save the results in a column of the table? What is the usual approach in big spatial databases?
Thanks for an answer.
The question is too abstract.
Of course, if you reuse the same intersection area (ST_Intersection) you should store the resulting geometry. But in practice we often have to calculate the intersection on the fly, because the input arguments depend on dynamic parameters (e.g. the intersection of an area with temperature < 30 °C with an area of wind > 20 m/s). By the way, you can use a VIEW to simplify a query of that kind.
Of course, if your table contains both geometry arguments as columns, or one of them is constant, it's better to store the intersection. In particular, you can then build a spatial index on that column.
There are no fixed rules; you should be guided by the practical conditions: size of the database, type of use, etc. For example, I store the generated ellipse (confidence zone) for each lightning-strike point, but I don't store the fact of intersection (a boolean) with power lines, because those intersections may be parameterized.
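As a hedged illustration of both options in PostGIS-flavoured SQL (all table and column names are invented):

-- Option 1: both arguments are fixed, so store the intersection once
-- and index it.
ALTER TABLE zones ADD COLUMN clipped_geom geometry;
UPDATE zones z
SET clipped_geom = ST_Intersection(z.geom, (SELECT geom FROM boundary LIMIT 1));
CREATE INDEX zones_clipped_idx ON zones USING GIST (clipped_geom);

-- Option 2: the inputs are dynamic, so compute on the fly behind a VIEW.
CREATE VIEW area_overlaps AS
SELECT t.id AS temp_id, w.id AS wind_id,
       ST_Intersection(t.geom, w.geom) AS geom
FROM temperature_areas t
JOIN wind_areas w ON ST_Intersects(t.geom, w.geom);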

Clustering Lat/Longs in a Database

I'm trying to see if anyone knows how to cluster some Lat/Long results, using a database, to reduce the number of results sent over the wire to the application.
There are a number of resources about how to cluster, either on the client side OR on the server (application) side ... but not on the database side :(
This is a similar question, asked by a fellow S.O. member. The solutions there are server-side based (i.e. C# code-behind).
Has anyone had any luck or experience with solving this, but in a database? Are there any database gurus out there who are after a hawt and sexy DB challenge?
please help :)
EDIT 1: Clarification - by clustering, I'm hoping to group x number of points into a single point, for an area. So, if I say cluster everything in a 1 mile / 1 km square, then all the results in that 'square' are GROUP'D into a single result (say ... the middle of the square).
EDIT 2: I'm using MS SQL 2008, but I'm open to hearing about solutions in other DBs.
I'd probably use a modified* version of k-means clustering using the Cartesian (e.g. WGS-84 ECF) coordinates of your points. It's easy to implement and converges quickly, and it adapts to your data no matter what it looks like. Plus, you can pick k to suit your bandwidth requirements, and the points get divided among the k clusters (though the cluster sizes are not guaranteed to be equal).
I'd make a table of cluster centroids and add a field to the original data table to indicate which cluster each point belongs to. You'd obviously want to update the clustering periodically if your data is at all dynamic. I don't know whether you could do that with a stored procedure and trigger, but perhaps.
*The "modification" would be to adjust the length of the computed centroid vectors so they'd be on the surface of the earth. Otherwise you'd end up with a bunch of points with negative altitude (when converted back to LLH).
If you're clustering on geographic location, and I can't imagine it being anything else :-), you could store the "cluster ID" in the database along with the lat/long co-ordinates.
What I mean by that is to divide the world map into (for example) a 100x100 matrix (10,000 clusters) and each co-ordinate gets assigned to one of those clusters.
Then, you can detect very close coordinates by selecting those in the same square and moderately close ones by selecting those in adjacent squares.
The size of your squares (and therefore the number of them) will be decided by how accurate you need the clustering to be. Obviously, if you only have a 2x2 matrix, you could get some clustering of co-ordinates that are a long way apart.
You will always have the edge cases such as two points close together but in different clusters (one northernmost in one cluster, the other southernmost in another) but you could adjust the cluster size OR post-process the results on the client side.
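A hedged sketch of the grid idea that also does the GROUP'ing asked for in EDIT 1 (the table name and cell size are invented; this is plain SQL, so it should work on MS SQL 2008 as well as elsewhere):

-- Collapse all points in each ~0.01-degree square to a single
-- representative point (the average position) plus a count.
SELECT floor(lat / 0.01) AS cell_y,
       floor(lon / 0.01) AS cell_x,
       AVG(lat)          AS cluster_lat,
       AVG(lon)          AS cluster_lon,
       COUNT(*)          AS points_in_cluster
FROM locations
GROUP BY floor(lat / 0.01), floor(lon / 0.01);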
I did a similar thing for a geographic application where I wanted to ensure I could cache point sets easily. My geohashing code looks like this:
# Pack the longitude and latitude cell indices into a single integer
# key; each cell is 0.1 degrees on a side.
def compute_chunk(latitude, longitude)
  (floor_lon(longitude) * 0x1000) | floor_lat(latitude)
end

# 0..3600 for longitudes -180..180
def floor_lon(longitude)
  ((longitude + 180) * 10).to_i
end

# 0..1800 for latitudes -90..90
def floor_lat(latitude)
  ((latitude + 90) * 10).to_i
end
Everything got really easy from there. I had some code for grabbing all of the chunks from a given point to a given radius that would translate into a single memcache multiget (and some code to backfill that when it was missing).
For movielandmarks.com I used the clustering code from Mike Purvis, one of the authors of Beginning Google Maps Applications with PHP and AJAX. It builds trees of clusters/points for different zoom levels using PHP and MySQL, storing it in the database so that recall is very fast. Some of it may be useful to you even if you are using a different database.
Why not test multiple approaches?
Translate the Weka library to the .NET CLI with IKVM.NET
Add an assembly built from your code plus weka.dll (use ILMerge) into your database
Make some tests, that is. No single clustering algorithm works better than all the others in every case.
I believe you can use MSSQL's spatial data types. If they are similar to other spatial data types I know, they will store your points in a tree of rectangles, and then you can go to the lower-resolution rectangles to get implicit clusters.
If you end up wanting to explore Geohashes (which were invented at exactly the same time you posted this question), here's a more fleshed-out implementation of Geohash-related functions for SQL Server's T-SQL in which you might be interested.
QalGeohash-TSQL
I have used the Integer version of the Geohash extensively to cluster results to reduce data sent to a client for a limited viewport.
