I have a large database which contains data about countries, such as country name, position, HDI (Human Development Index) and population. I need to classify this data into some number K of groups based on population. A friend suggested that K-means clustering would be useful here. But I am thinking this could be done directly by sorting the data by population and then dividing the sorted data into groups. Is there any difference between these two approaches?
Thanks
Recursively splitting along one dimension results in a decision tree, which is a different data structure. All the cuts between the groups are axis-aligned (horizontal or vertical). K-means can achieve a better balance because the cuts are not necessarily horizontal or vertical (most of the time they aren't).
Actually, the decision tree approach is also very useful. You might well try it.
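To make the difference concrete on one-dimensional data like population: sorting and splitting gives you groups of (roughly) equal size, while K-means places the boundaries where they minimise within-group variance, so a handful of very large countries can end up in a small group of their own. A minimal sketch, assuming scikit-learn is available; the population figures are made up:

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical 1-D population data (in millions); the values are made up.
population = np.array([0.3, 1.2, 5.4, 9.8, 11.0, 37.0, 67.0, 83.0, 331.0, 1412.0])
k = 3

# Approach 1: sort and split into K groups of roughly equal size.
order = np.argsort(population)
equal_groups = np.array_split(order, k)

# Approach 2: 1-D K-means, which places boundaries to minimise within-group variance.
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(
    population.reshape(-1, 1)
)

print([population[g] for g in equal_groups])
print([population[labels == c] for c in range(k)])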
I am trying to perform ANN, but my data is split into partitions or "tenants." Searches are always restricted to a single tenant, which represents a small percentage of the total documents.
I first tried implementing this using a filter on a tenant string attribute. However, I encountered this piece of documentation, which suggests the performance will be poor:
There is a small problem here however. If the eligibility list is small in relation to the number of items in the graph, skipping occurs with a high probability. This means that the algorithm needs to consider an exponentially increasing number of candidates, slowing down the search significantly. To solve this, Vespa.ai switches over to a brute-force search when this occurs. The result is an efficient ANN search when combined with filters.
What's the best way to solve my problem? Will partitioning my data into separate namespaces trigger the creation of a separate HNSW graph per namespace?
Performance will be fine; the query planner will just choose not to use the ANN index for these queries. You'll find lots of details on this topic, including how to tune this, in this blog post: https://blog.vespa.ai/constrained-approximate-nearest-neighbor-search/
If all your queries are towards a single tenant which is a small percentage of the total documents, I don't think you necessarily need to create an HNSW index at all, but this depends on the absolute numbers and on how large the largest "small percentage" is.
(Namespaces are not relevant here - their only purpose is to safely add a string to ids so that you can have multiple sources of ids and still be guaranteed global uniqueness.)
Hope you are safe and well!
I have a question about regular or common ways of pair-matching users in a database: say each user has a few properties, and when matching, each user can change the filtering options to only match with those who fit their own requirements (so there is mutual selection between users), and we want to efficiently and precisely match 1000 users.
For example, let's say there are 3 properties for every user: gender (female/male/other), study level (elementary/intermediate/advanced), and grade (freshman/sophomore/junior/senior), and when matching, each user can choose to only match with people of their selected gender, study level and grade.
Focusing on 1 user, I would guess that, from the database perspective, we could use filtering options in queries to get a list of those who satisfy both "my requirements" and "I fit their requirements"? However, I think this would be slow and cause concurrency problems when there are 1000+ users in the matching phase at the same time.
I saw another post here discussing the blossom algorithm and the greedy algorithm, which seem promising if you look at this as a graph problem. Are they doable in this case? I guess if two users mutually fit each other's requirements, there would be an edge between the two nodes, and the edge weight could be a combined matching score over all 3 properties?
Anyway, I'm wondering whether there is a common way to do the pair matching precisely with at least 1000+ users at the same time.
Thank you so much!
If the requirement is that each match must have exactly the same properties, then the solution is fairly simple: just do a multiple-criteria sort (e.g. first sort by gender, then within each gender category sort by study level, etc.) and pair the identical users.
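A minimal sketch of the exact-match case, assuming users are plain dicts; the field names and sample data are made up for illustration:

from collections import defaultdict

# Hypothetical user records; the field names are assumptions.
users = [
    {"id": 1, "gender": "f", "level": "advanced", "grade": "junior"},
    {"id": 2, "gender": "f", "level": "advanced", "grade": "junior"},
    {"id": 3, "gender": "m", "level": "elementary", "grade": "freshman"},
]

# Group users by the full property tuple, then pair them off within each group.
groups = defaultdict(list)
for u in users:
    groups[(u["gender"], u["level"], u["grade"])].append(u)

pairs = []
for members in groups.values():
    for a, b in zip(members[0::2], members[1::2]):
        pairs.append((a["id"], b["id"]))

print(pairs)  # users 1 and 2 are paired; user 3 has no exact match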
However, in a random dataset you're very unlikely to have perfect matches for all users. In that case you would want to score pairs by how closely each category matches and use a more complex algorithm to maximize your overall matches. What you would do depends heavily on your use case and userbase size. Honestly, 1000 users is a very small number for modern computers; pretty much any polynomial-time method (including blossom, as you mentioned) would work fine.
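For the weighted case, here is a minimal sketch using NetworkX, whose max_weight_matching implements the blossom algorithm; the scoring function and the sample users are made-up assumptions:

import networkx as nx

# Hypothetical users; in practice these would come from your database.
users = {
    1: {"gender": "f", "level": "advanced", "grade": "junior"},
    2: {"gender": "f", "level": "intermediate", "grade": "junior"},
    3: {"gender": "m", "level": "advanced", "grade": "senior"},
    4: {"gender": "m", "level": "advanced", "grade": "junior"},
}

def score(a, b):
    # Made-up score: +1 for each property that matches.
    return sum(a[k] == b[k] for k in ("gender", "level", "grade"))

# Only add an edge if both users would accept each other (mutual selection);
# here every pair is assumed mutually acceptable to keep the example short.
G = nx.Graph()
ids = list(users)
for i in range(len(ids)):
    for j in range(i + 1, len(ids)):
        a, b = ids[i], ids[j]
        G.add_edge(a, b, weight=score(users[a], users[b]))

# Maximum-weight matching (blossom algorithm); returns a set of matched pairs.
matching = nx.max_weight_matching(G, maxcardinality=True)
print(matching)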
Suppose the following: as input, one gets a record consisting of N numbers and booleans. This vector has to be compared to a database of vectors, each of which includes M additional "result" elements. That is, the database holds P vectors of size N+M.
Each vector in the database holds a boolean as its last element. The aim of the exercise is to find, as fast as possible, the record(s) which are the closest match to the input vector AND whose result vector ends with a TRUE boolean.
To make the above a bit more comprehensible, consider the following example:
A database with personal health information, consisting of records holding:
age
gender
weight
height
heart issues (boolean)
lung issues (boolean)
residence
alternative plan chosen (if applicable)
accepted offer
The program would then get an input like
36 Male 185pound 68in FALSE FALSE NYC
It would then find out which plan would be the best to offer the client, based on what's in the database.
I know of a few methods which would help to do this, e.g. the Levenshtein distance. However, most methods would involve searching the entire database for the best matches.
Are there any algorithms or methods which would cut back on the processing power/time required? I can't imagine that, e.g., insurance agencies don't use more efficient methods to search their databases...
Any insights into this area would be greatly appreciated!
Assumption: this is a relational database. If instead it were NoSQL, then please provide more info on which db.
Do you have the option to create bitmap indexes? They can cut down the number of records returned. That is useful for almost all of the columns, since the cardinalities are low.
After that, the only one left is the residence, and you should use a geo distance for that.
If you are unable to create bitmap indexes then what are your filtering options? If none then you have to do a full table scan.
For each of the components, e.g. age, gender, etc., you need to:
(a) determine a distance metric
(b) determine how to compute both the metric and the distance between different records.
I'm not sure a Levenshtein distance would work here - you need to take each field separately to find its contribution to the whole distance measure.
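A minimal sketch of such a per-field distance over mixed numeric/boolean fields, with a pre-filter on the "accepted offer" flag; the field names, weights and normalisation ranges are made-up assumptions, not anything prescribed by the question:

# Hypothetical records; in practice these come from the database,
# ideally pre-filtered (e.g. via bitmap indexes) to rows with accepted_offer = TRUE.
records = [
    {"age": 40, "male": True, "weight_lb": 190, "height_in": 70,
     "heart": False, "lung": False, "plan": "B", "accepted_offer": True},
    {"age": 35, "male": True, "weight_lb": 180, "height_in": 68,
     "heart": False, "lung": True, "plan": "A", "accepted_offer": True},
]

# Made-up normalisation ranges and weights; each field contributes a value in [0, 1].
RANGES = {"age": 80.0, "weight_lb": 300.0, "height_in": 40.0}
WEIGHTS = {"age": 1.0, "male": 1.0, "weight_lb": 1.0, "height_in": 1.0,
           "heart": 2.0, "lung": 2.0}

def field_distance(field, a, b):
    if field in RANGES:                    # numeric field: normalised absolute difference
        return abs(a - b) / RANGES[field]
    return 0.0 if a == b else 1.0          # boolean/categorical field: match or mismatch

def distance(query, record):
    return sum(WEIGHTS[f] * field_distance(f, query[f], record[f]) for f in WEIGHTS)

query = {"age": 36, "male": True, "weight_lb": 185, "height_in": 68,
         "heart": False, "lung": False}

best = min((r for r in records if r["accepted_offer"]), key=lambda r: distance(query, r))
print(best["plan"])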
The situation and the goal
Imagine a user search system that provides a proximity search from a user’s own position, which is specified by a decimal latitude/longitude combination. An Atlanta resident’s position, for instance, would be represented by 33.756944,-84.390278, and a perimeter search by this user should yield other users in his area within a radius of 10 mi, 50 mi, and so on.
A table-valued function calculates distances and returns users accordingly, ordered by ascending distance to the user who started the search. It’s always a live query, and a tough and frequent one. Now we want to build some sort of caching to reduce the load.
On the way to solutions
So far, all users are grouped by the integer portion of their lat/long. The idea is to create cache files with all users from a grid square, so accessing the relevant cache file would be easy. If a grid square contains more users than a cache file should hold, the square is quartered or further divided into eight pieces, and so on. To make full use of a square and its cache file, multiple overlapping squares are contemplated. One deficiency of this approach is that gridding and quartering high-density metropolitan areas and spacious countryside into overlapping cache files may not be optimal.
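A minimal sketch of that kind of grid keying, assuming the cache is keyed by a cell identifier; the subdivision scheme (halving each 1°×1° square per level) is an assumption for illustration:

import math

def cell_key(lat, lon, level=0):
    # Return a cache key for the grid square containing (lat, lon).
    # Level 0 uses 1-degree squares (the integer part of lat/long);
    # each additional level halves the square in both directions.
    size = 1.0 / (2 ** level)               # side length of a square in degrees
    row = math.floor(lat / size)
    col = math.floor(lon / size)
    return f"{level}:{row}:{col}"

# The Atlanta user from the example lands in these squares:
print(cell_key(33.756944, -84.390278, level=0))   # '0:33:-85'
print(cell_key(33.756944, -84.390278, level=2))   # '2:135:-338'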
Reading on, I stumbled upon topics like nearest neighbor searches, the Manhattan distance and tree-like space-partitioning techniques such as the k-d tree, the quadtree and binary space partitioning. Also, SQL Server provides its own geographical data types and functions (though I’d guess the purely mathematical FLOAT approach has adequate performance). And of course, the crux is making user-centric proximity searches cacheable.
Question!
I haven’t found many resources on this, but I’m sure I’m not the first one with this plan. Remember, it’s not about the search, but about caching.
Can I scrap my approach?
Are there ways of an advantageous partitioning of users into geographical divisions of equal size?
Is there a best practice for storing spatial user information for efficient proximity searches?
What do you think of the techniques mentioned above (quadtrees, etc.) and how would you pair them with caching?
Do you know an example of successfully caching user-specific proximity search?
Can I scrap my approach?
You can adapt your approach because, as you already noted, a quadtree uses this technique. Or you could use a geospatial extension; one is available for MySQL, too.
Are there ways of an advantageous partitioning of users into geographical divisions of equal size?
A simple fixed grid of equal size is fine when locations are equally distributed or the area is very small. Geo locations are hardly ever equally distributed; usually a geospatial structure is used. See the next answer:
Is there a best practice for storing spatial user information for efficient proximity searches?
A quadtree, k-d tree or R-tree.
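For illustration, a bare-bones point-quadtree sketch with a bounding-box query; this is an assumption-laden example rather than production code, and a real system would typically use an existing spatial index (e.g. SQL Server's geography type or an R-tree in a database):

class QuadTree:
    # Minimal point quadtree: each node covers a lat/long box and splits
    # into four children when it holds more than `capacity` points.

    def __init__(self, min_lat, min_lon, max_lat, max_lon, capacity=4):
        self.box = (min_lat, min_lon, max_lat, max_lon)
        self.capacity = capacity
        self.points = []        # list of (lat, lon, payload)
        self.children = None    # four sub-trees after a split

    def insert(self, lat, lon, payload=None):
        min_lat, min_lon, max_lat, max_lon = self.box
        if not (min_lat <= lat < max_lat and min_lon <= lon < max_lon):
            return False
        if self.children is None:
            self.points.append((lat, lon, payload))
            if len(self.points) > self.capacity:
                self._split()
            return True
        return any(c.insert(lat, lon, payload) for c in self.children)

    def _split(self):
        min_lat, min_lon, max_lat, max_lon = self.box
        mid_lat = (min_lat + max_lat) / 2
        mid_lon = (min_lon + max_lon) / 2
        self.children = [
            QuadTree(min_lat, min_lon, mid_lat, mid_lon, self.capacity),
            QuadTree(min_lat, mid_lon, mid_lat, max_lon, self.capacity),
            QuadTree(mid_lat, min_lon, max_lat, mid_lon, self.capacity),
            QuadTree(mid_lat, mid_lon, max_lat, max_lon, self.capacity),
        ]
        for lat, lon, payload in self.points:
            any(c.insert(lat, lon, payload) for c in self.children)
        self.points = []

    def query(self, min_lat, min_lon, max_lat, max_lon):
        # Return all points inside the given box.
        b = self.box
        if max_lat <= b[0] or min_lat >= b[2] or max_lon <= b[1] or min_lon >= b[3]:
            return []
        hits = [p for p in self.points
                if min_lat <= p[0] < max_lat and min_lon <= p[1] < max_lon]
        if self.children:
            for c in self.children:
                hits.extend(c.query(min_lat, min_lon, max_lat, max_lon))
        return hits

# Usage: index a couple of users, then query a small box around Atlanta.
tree = QuadTree(-90, -180, 90, 180)
tree.insert(33.756944, -84.390278, "atlanta_user")
tree.insert(40.712776, -74.005974, "nyc_user")
print(tree.query(33.0, -85.0, 34.5, -83.5))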
What do you think of the techniques mentioned above (quadtrees, etc.) and how would you pair them with caching?
There is some work by Hanan Samet describing quadtrees and caching.
I want to do pre-clustering for a set of approx. 500,000 points.
I haven't started yet but this is what I had thought I would do:
store all points in a localSOLR index
determine "natural cluster positions" according to some administrative information (big cities for example)
and then calculate a cluster for each city:
for each city
    for each zoom level
        query the index to get the points contained in a radius around the city (the length of the radius depends on the zoom level)
This should be quite efficient because there are only 100 major cities and SOLR queries are very fast. But a little more thinking revealed it was wrong:
there may be clusters of points that are more "near" one another than near a city: they should get their own cluster
at some zoom levels, some points will not be within the acceptable distance of any city, and so they will not be counted
some cities are near one another and therefore, some points will be counted twice (added to both clusters)
There are other approaches:
examine each point and determine to which cluster it belongs; this eliminates problems 2 and 3 above, but not 1, and is also extremely inefficient
make a (rectangular) grid (for each zoom level); this works but will result in crazy / arbitrary clusters that don't "mean" anything
I guess I'm looking for a general purpose geo-clustering algorithm (or idea) and can't seem to find any.
Edit to answer comment from Geert-Jan
I'd like to build "natural" clusters, yes, and yes I'm afraid that if I use an arbitrary grid, it will not reflect the reality of the data. For example if there are many events that occur around a point that is at or near the intersection of two rectangles, I should get just one cluster but will in fact build two (one in each rectangle).
Originally I wanted to use localSOLR for performance reasons (and because I know it, and have better experience indexing a lot of data into SOLR than loading it into a conventional database); but since we're talking about pre-clustering, maybe performance is not that important (although it should not take days to visualize the result of a new clustering experiment). My first approach of querying lots of points according to a predefined set of "big points" is clearly flawed anyway; the first reason I mentioned is the strongest: clusters should reflect the reality of the data, not some other bureaucratic definition (they will clearly overlap, sure, but data should come first).
There is a great clusterer for live clustering that has been added to the core Google Maps API: Marker Clusterer. I wonder if anyone has tried to run it "offline": run it for as long as it needs, and then store the results?
Or is there a clusterer that examines each point, point after point, and outputs clusters with their coordinates and number of points included, and which does this in a reasonable amount of time?
You may want to look into advanced clustering algorithms such as OPTICS.
With a good database index, it should be fairly fast.
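A minimal sketch of what that could look like with scikit-learn's OPTICS, assuming the input points are lat/long pairs in degrees (the haversine metric expects radians); the sample coordinates, min_samples and max_eps values below are placeholders that would need tuning on the real 500,000 points:

import numpy as np
from sklearn.cluster import OPTICS

# Hypothetical lat/long points in degrees; in practice, load the full dataset.
points_deg = np.array([
    [48.8566, 2.3522],   # Paris
    [48.8570, 2.3530],
    [45.7640, 4.8357],   # Lyon
    [45.7650, 4.8360],
])

EARTH_RADIUS_KM = 6371.0
points_rad = np.radians(points_deg)

# max_eps is a reachability cutoff in radians; here roughly 5 km.
clustering = OPTICS(min_samples=2, max_eps=5.0 / EARTH_RADIUS_KM,
                    metric="haversine").fit(points_rad)

print(clustering.labels_)   # cluster id per point, -1 = noise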