We are currently investigating MongoDB as a possible solution for a highly distributed database for scientific data. Given our query requirements, we have chosen a single collection of documents, each document representing an object and its ~450 properties. A typical document is structured as follows:
d = {'patch': '12345-1,1',
'X': { (120 key-value pairs) },
'A': { (64 key-value pairs) },
...
(4 more such embedded documents)
}
Within X, there is an integer flag. The flag is a 32-bit integer, each bit representing a Boolean flag; this is a common way of storing Boolean flags when their number is rather large. A lookup table shows which bit position corresponds to which Boolean property, and the 15th bit is the one of relevance to our specific set of queries. The total number of documents is 600,000, sharded across 3 desktops (8 GB RAM, i7 CPU, standard 5400 RPM spinning hard drives).
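For concreteness, checking such a packed flag outside the database looks like this in Python (a minimal sketch; the bit position follows the description above, everything else is illustrative):

# The "15th bit" counted from 1 is bit position 14 counted from 0,
# matching the [14] used in the queries below.
FLAG_BIT = 14

def flag_is_set(flag_value: int, bit: int = FLAG_BIT) -> bool:
    """Return True if the given bit of the packed 32-bit flag integer is 1."""
    return (flag_value >> bit) & 1 == 1

assert flag_is_set(1 << 14)   # only bit 14 set
assert not flag_is_set(0)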
The query is simple: we want a count of all documents for which the 15th bit of a particular flag integer is set to 1.
db.coll.find(
{'X.flag1': {$bitsAllSet: [14]}}
).count()
The average time taken for this query is 19,783 ms. This is not an acceptable time for us. We tried to improve this using an aggregation instead of a standard find() based query.
db.coll.aggregate([
    {$match: {
        'X.flag1': {$bitsAllSet: [14]}
    }},
    {$group: {
        _id: 0,
        count: {$sum: 1}
    }}
])
This takes about 10,000 ms. While this is an improvement (which I think is due to the highly efficient C++ implementation of the aggregation framework), it is still beyond the kind of performance we desire. The next step was to isolate the flag hidden in the 15th bit and make it a separate key in the document. This results in the same queries as above, except that instead of $bitsAllSet: [14] we use X.is_primary: 1. With this change, find() and aggregate() took 19,000 ms and 8,500 ms respectively. There is very little improvement.
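For reference, the denormalization itself can be done with a one-off script along these lines (a hedged pymongo sketch; the connection string, database name and batch size are placeholders, while X.flag1 and X.is_primary follow the naming above):

# One-off migration: copy the 15th bit of X.flag1 into a dedicated
# Boolean field X.is_primary so queries no longer need $bitsAllSet.
from pymongo import MongoClient, UpdateOne

coll = MongoClient("mongodb://localhost:27017")["science"]["coll"]  # placeholders
FLAG_BIT = 14

ops = []
for doc in coll.find({}, {"X.flag1": 1}):
    is_primary = bool((doc["X"]["flag1"] >> FLAG_BIT) & 1)
    ops.append(UpdateOne({"_id": doc["_id"]},
                         {"$set": {"X.is_primary": is_primary}}))
    if len(ops) >= 1000:          # flush in batches to bound memory
        coll.bulk_write(ops)
        ops = []
if ops:
    coll.bulk_write(ops)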
So, my two questions which I hope people may help with are:
Is this the final performance I can expect from MongoDB Community Edition? I am aware that there is an Enterprise Edition which will come with an In-Memory Engine. But my question is more specific to the Community Edition. Is there any trick that I could use to speed up the query?
I am slowly finding that, at least for the complex server-side analytics and querying we need, MongoDB is proving hard to use, both in terms of the complexity of the queries we are writing and the performance bottlenecks we are hitting. Any advice on what other databases I might consider would be welcome.
Edit: As suggested, I am sharing the output of .explain(). The output is for a collection where is_primary is not indexed, but as discussed in the comments section, for a Boolean value the presence of an index should not make a difference to the performance of a query based on Boolean flags.
Pastebin Link (expiry of 2 weeks)
Related
I've finished my first semester in a college-level SQL course where we used "SQL queries for Mere Mortals" 3rd edition.
Long term I want to work in data governance or as a data scientist, so digging deeper is needed and I found the Stanford SQL course. Today taking the first mini quiz, I got the answers right but on these two I'm not understanding WHY I got the answers right.
My 'SQL for Mere Mortals' book doesn't even cover hash or tree-based indexes so I've been searching online for them.
I mostly guessed based on what she said but it feels more like luck than "I solidly understand why". So I've ordered "Introduction to Algorithms" 3rd edition by Thomas Cormen and it arrived last week but it will take me a while to read through all 1,229 pages.
Found that book in this other stackoverflow link =>https://stackoverflow.com/questions/66515417/why-is-hash-function-fast
Stanford Course => https://www.edx.org/course/databases-5-sql
I thought a hash index on College.enrollment would not speed things up because the condition is "less than a number" rather than an exact number? I'm guessing, per this link (Better to use "less than equal" or "in" in sql query), that the query would be faster if we used "<=" rather than "<"?
This one was just a process of elimination as it mentions the first item after the WHERE clause, but then was confusing as it mentions the last part of Apply.cName = College.cName.
My questions:
I'm guessing that, similar to how algebra has numerators, denominators, quotients, and many other terms that specifically describe parts of an equation, there are technical terms for this too. How would you use technical terms to describe why these answers are correct?
On the second question, why are the first part of the second line and the last part of the same line referenced as the answers? Why didn't they pick the first part of each or the last part of each?
For context, most of my SQL queries are now written for PostgreSQL within PyCharm in Python, but I do a lot of practice using the pgAdmin 4 or MySQL Workbench desktop tools.
I welcome any recommendations you have for paper books or PDFs with step-by-step tutorials, as many, many websites have holes or reference confusing technical details.
Thanks
1. A hash index is only useful for equality matches, whereas a tree index can be used for inequality (< or >= etc).
With this in mind, College.enrollment < 5000 cannot use a hash index, as it is an inequality. All other options are exact equality matches.
This is why most RDBMSs only let you create tree-based indexes.
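To make the equality-versus-range distinction concrete, here is a small Python sketch (purely illustrative, not tied to any particular RDBMS): a hash table answers enrollment = x directly but has no notion of order, while a sorted structure can answer enrollment < x via binary search.

# A dict behaves like a hash index (equality only); a sorted list plus
# bisect behaves like a tree index (supports range scans).
from bisect import bisect_left

rows = [(1, 1), (2, 10), (3, 100), (4, 1000), (5, 10000)]   # (PK, enrollment)

hash_index = {enrollment: pk for pk, enrollment in rows}
tree_index = sorted((enrollment, pk) for pk, enrollment in rows)

# Equality: both can answer it, the hash index in O(1).
assert hash_index[1000] == 4

# Range (enrollment < 5000): only the ordered structure can answer this
# without inspecting every entry.
cut = bisect_left([e for e, _ in tree_index], 5000)
assert [pk for _, pk in tree_index[:cut]] == [1, 2, 3, 4]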
2. This one is pretty much up in the air.
"the first item after the WHERE clause" is not relevant. Most RDBMSs will reorder the joins and filters as they see fit in order to match indexes and table statistics.
I note that the query as given is poorly written. It should use proper JOIN syntax, which is much clearer, and has been in use for 30 years already.
SELECT * -- you should really specify exact columns
FROM Student AS s -- use aliases
JOIN [Apply] AS a ON a.sID = s.sID -- Apply is a reserved keyword in many RDBMS
JOIN College AS c ON c.cName = a.cName
WHERE s.GPA > 1.5 AND c.cName < 'Cornell';
Now it's hard to say what a compiler would do here. A lot depends on the cardinalities (size of tables) in absolute terms and relative to each other, as well as the data skew in s.GPA and c.cName.
It also depends on whether secondary key (or indeed INCLUDE) columns are added, this is clearly not being considered.
Given the options for indexes you have above, and no other indexes (not realistic obviously), we could guesstimate:
Student.sID, College.cName
This may result in an efficient backwards scan on College starting from 'Cornell', but Apply would need to be joined with a hash or a naive nested loop (scanning the index each time).
The index on Student would mean an efficient nested loop with an index seek.
Student.sID, Student.GPA
Is this one index or two? If it's two separate indexes, the second will be used, and the first is obviously going to be useless. Apply and College will still need heavy joins.
Apply.cName, College.cName
This would probably get you a merge-join on those two columns, but Student would need a big join.
Apply.sID, Student.GPA
Student could be efficiently scanned from 1.5, and Apply could be seeked, but College requires a big join.
Of these options, the first or the last is probably better, but it's very hard to say without further info.
In a real system, I would have indexes on all tables, and use INCLUDE columns wisely in order to avoid key-lookups. You would want to try to get a better feel for which tables are the ones that need to be filtered early etc.
First question
A hash-index is not linearly-searchable (see Slide 7), that is, you cannot perform range-comparisons with a hash-index. This is because (in general terms) hash functions are one-way: given the output of a hash function you cannot determine the input, and the output will be in apparently random order (having a random order is good for ensuring an even load over the set of hashtable bins).
Now, for a contrived and oversimplified example:
Supposing you have these rows:
PK | Enrollment
----------------
1 | 1
2 | 10
3 | 100
4 | 1000
5 | 10000
A perfect hash index of this table would look something like this:
Assuming that the hash of 1 is 0xF822AA896F34253E and the hash of 10 is 0xB383A8BBDAA41F98, and so on...
EnrollmentHash | PhysicalRowPointer
---------------------------------------
0xF822AA896F34253E | 1
0xB383A8BBDAA41F98 | 2
0xA60DCD4E78869C9C | 3
0x49B0AF769E6B1EB3 | 4
0x724FD1728666B90B | 5
So given this hashtable index, looking at the hashes you cannot determine which hash represents larger enrollment values vs. smaller values. But a hashtable index does give you O(1) lookup for single specific values, which is why it works best for discrete, non-continuous, data values, especially columns used in JOIN criteria.
Whereas a tree index does preserve relative ordering information about values, but with O(log n) lookup time.
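The order-scrambling effect is easy to demonstrate in a few lines of Python (the hashes below come from SHA-256 purely for illustration, not from any real database engine, so they differ from the made-up values above):

# Hashing the enrollment values: the outputs carry no ordering information,
# which is why a hash index cannot serve a range predicate.
import hashlib

for value in [1, 10, 100, 1000, 10000]:
    digest = hashlib.sha256(str(value).encode()).hexdigest()[:16].upper()
    print(f"{value:>6} -> 0x{digest}")

# The printed hashes are not in ascending order even though the inputs are,
# so a comparison like "EnrollmentHash < X" is meaningless.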
Second question
First, I need to rewrite the query to use modern JOIN syntax. The old style (using commas) has been obsolete since SQL-92 in 1992, that's almost 30 years ago.
SELECT
*
FROM
Apply
INNER JOIN Student ON Student.sID = Apply.sID
INNER JOIN College ON College.cName = Apply.cName
WHERE
Student.GPA > 1.5
AND
College.cName < 'Cornell'
Now, generally speaking the best way to answer this kind of question would be to know what the STATISTICS (cardinality, value distribution, etc) of the tables are. But without that I can still make some guesses.
I assume that College is the smallest table (~500 rows?), Student will have maybe 1-2m rows, and assuming every Student makes 4-5 applications then the Apply table will have ~5m rows.
...armed with that inference, we can deduce:
Student.sID = Apply.sID is an ID match - so a hash-index would be better in most cases (excepting if the PK clustering matters, but I won't digress).
Student.GPA > 1.5 - this is a range search so having a tree-based index here helps.
College.cName < 'Cornell' - again, this is a range comparison so a tree-based index here helps too.
So the best indexes would be Student.GPA and College.cName, but that isn't an option - so let's see what the benefits of each option are...
(As I was writing this, I saw that @charlieface posted an answer which already covers this, so I'll just link to theirs to save time: https://stackoverflow.com/a/67829326/159145 )
Suppose there are several docs having a field clientID, with values ranging from 1 to 100.
Query 1:
fq: clientID:1 OR clientID:2 OR clientID:3 OR clientID:5 OR clientID:7 OR clientID:8
Query 2:
fq: clientID:[1 TO 3] OR clientID:5 OR clientID:[7 TO 8]
Question:
Will there be a big performance difference between these two queries? If yes, how?
Doesn't Solr do this preprocessing itself, translating multiple ORs like these into range queries?
There might be - depending on cached entries, etc. The second query will be two range queries and a regular query combined into three boolean clauses, while the first one will be six different boolean clauses.
Speed probably won't differ too much for your example, but as the number of clauses grow, the latter will keep the number of sets to be intersected lower than the first one. To get exact data - try it out - your core will be different from other people's cores.
And no, Solr won't preprocess anything. That's handed over to Lucene to do as it pleases, but a range query can be resolved in a different way than an exact field query. There can be entries between the terms given in your pure Boolean query, so you can't translate it into a range query and expect the same result, and you can't go the other way around either, since the field may not be an integer (and even integer types differ in how they're indexed).
The important part is usually that the fq will be cached separately, so it's usually more important to keep it re-usable across queries.
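If cache reuse is the goal, a common pattern is to send each independently reusable filter as its own fq parameter, since each fq is cached as a separate filterCache entry. A hedged sketch using the Python requests library (the URL, core name and the extra status filter are placeholders, not part of the question):

# Each fq becomes its own filterCache entry, so filters that recur across
# many searches are best kept in separate fq parameters.
import requests

SOLR = "http://localhost:8983/solr/clients/select"   # placeholder URL/core

params = {
    "q": "*:*",
    "fq": [
        "clientID:([1 TO 3] OR 5 OR [7 TO 8])",   # the filter being discussed
        "status:active",                          # hypothetical reusable filter
    ],
    "rows": 0,                                    # only the count is needed here
    "wt": "json",
}
print(requests.get(SOLR, params=params).json()["response"]["numFound"])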
If you use the default numeric types, Solr indexes more than one precision for each number (look for TrieIntField and IntPointField in the Solr field types).
So, when you index a 15, it indexes it as 15 and as 10, and when you index a 9 it indexes it as 9 and as 0. When you search for an 8-21 range, it converts the search to number[8] or number[9] or number[10] or number[20] or number[21]
(with binary ranges instead of decimal, but I hope you get the idea). So I suggest you use the range queries and let Solr manage the optimizations.
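The decimal version of that decomposition is easy to sketch in Python; this is only an illustration of the idea (real Trie/Point fields work on binary prefixes, not base 10, and the term names are made up):

# Toy decomposition of an integer range into full-precision terms plus
# reduced-precision "tens" buckets, mimicking the idea described above.
def decompose_range(lo, hi):
    terms = []
    n = lo
    while n <= hi:
        if n % 10 == 0 and n + 9 <= hi:
            terms.append(f"tens:{n // 10}")   # one term covers n .. n+9
            n += 10
        else:
            terms.append(f"exact:{n}")
            n += 1
    return terms

print(decompose_range(8, 21))
# ['exact:8', 'exact:9', 'tens:1', 'exact:20', 'exact:21']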
PointField types are the replacement for TrieFields, functionally are similar but use another data structures to store the information. So if you have a legacy index you can use the triefields, but if you are making new ones the PointFields are recommended.
According to https://cwiki.apache.org/confluence/display/solr/Faceting I can use facet.offset and facet.limit to paginate.
I think these are analogous to start and rows for normal query results.
However, wouldn't this be very slow if I have too many facet results? According to https://cwiki.apache.org/confluence/display/solr/Pagination+of+Results:
When you wish to fetch a very large number of sorted results from Solr
to feed into an external system, using very large values for the start
or rows parameters can be very inefficient. Pagination using start
and rows not only require Solr to compute (and sort) in memory all of
the matching documents that should be fetched for the current page,
but also all of the documents that would have appeared on previous
pages.
So for deep paging on normal queries, I'd use a cursorMark instead (sketched below, after my questions).
So
1) Am I right that deep paging on facet results using facet.offset has the same performance concerns as the quote above?
2) Is there something like cursorMark or other more efficient deep paging for facet results instead of facet.offset?
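For reference, the cursorMark approach mentioned above looks roughly like this for normal query results (a hedged Python sketch; the URL, core name and sort field are placeholders, and the sort must include the collection's uniqueKey):

# Deep paging over normal results with cursorMark (not applicable to facets).
import requests

SOLR = "http://localhost:8983/solr/mycore/select"   # placeholder URL/core

cursor = "*"
while True:
    resp = requests.get(SOLR, params={
        "q": "*:*",
        "sort": "id asc",          # uniqueKey (assumed to be "id") must be in the sort
        "rows": 1000,
        "cursorMark": cursor,
        "wt": "json",
    }).json()
    for doc in resp["response"]["docs"]:
        pass                       # process each document here
    next_cursor = resp["nextCursorMark"]
    if next_cursor == cursor:      # cursor did not advance: no more results
        break
    cursor = next_cursor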
Yes, if you will take a look into one of the FacetCollector implementation, you will see something like this:
@Override
public boolean collect(BytesRef term, int count) {
  if (count > min) {
    // NOTE: we use c>min rather than c>=min as an optimization because we are going in
    // index order, so we already know that the keys are ordered. This can be very
    // important if a lot of the counts are repeated (like zero counts would be).
    spare.copyUTF8Bytes(term);
    queue.add(new SimpleFacets.CountPair<>(spare.toString(), count));
    if (queue.size() >= maxsize) min = queue.last().val;
  }
  return false;
}
and a little bit above:
maxsize = limit>0 ? offset+limit : Integer.MAX_VALUE-1;
which basically leads to the same problem as deep paging. The code will create a huge BoundedTreeSet (because maxsize is determined by the sum of offset and limit), and the complexity will be about the same as in the deep paging scenario.
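The effect is easy to mimic outside Solr: returning a deep facet page with offset/limit still means keeping the best offset + limit entries in memory, as in this toy Python analogy (all names are illustrative; heapq stands in for the BoundedTreeSet):

# Paging facet counts with offset/limit still materializes the top
# (offset + limit) entries, so deep pages cost memory and comparisons.
import heapq

def facet_page(counts, offset, limit):
    maxsize = offset + limit
    top = heapq.nlargest(maxsize, counts.items(), key=lambda kv: kv[1])
    return top[offset:offset + limit]

counts = {f"value_{i}": i % 997 for i in range(100_000)}
print(facet_page(counts, offset=50_000, limit=10))   # a "deep" facet page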
However, most of the time I do not expect anybody to have more than about 10,000 facet values (a figure off the top of my head, probably even fewer), which shouldn't cause any trouble (until you get to millions of facet values).
Usually facets come from fields with limited semantics (brand, color, state, department, etc.), so the number of distinct values is naturally bounded.
In summary: the algorithm is the same as the one for collecting matched documents, but the nature of facet values should save us from the problem.
For a specific facet field of our Solr documents, it would make way more sense to be able to sort facets by their relative "interesting-ness" i.e. their tf-idf score, rather than by popularity. This would make it easy to automatically get rid of unwanted common English words, as both their TF and DF would be high.
When a query is made, TF should be calculated using all the documents that participate in the results list.
I assume that the only problem with this approach would be when no query is made, i.e. when one searches for *:*. Then no term will prevail over the others in terms of interestingness. Please correct me if I am wrong here.
Anyway, is this possible? What other relative measurements of "interesting-ness" would you suggest?
facet.sort
This param determines the ordering of the facet field constraints.
count - sort the constraints by count (highest count first)
index - return the constraints sorted in their index order (lexicographic by indexed term); for terms in the ASCII range, this will be alphabetically sorted
The default is count if facet.limit is greater than 0, index otherwise.
Prior to Solr 1.4, one needed to use true instead of count and false instead of index.
This parameter can be specified on a per-field basis.
It looks like you can't do it out of the box without some serious changes on the client side or in Solr.
This is a very interesting idea and I have been searching around for some time to find a solution. Anything new in this area?
I assume that for facets with a limited number of possible values, an interestingness score can be computed on the client side: for a given result set based on a filter, we can exclude this filter for the facet using the local-params syntax (!tag and !ex, see Local Params). On the client side, we can then compute each facet count relative to the complete index (or to another sub-part of a filter). This would probably not work for result sets built by a query parameter.
However, for an indexed text-field with many potential values, such as a fulltext-field, one would have to retrieve df-counts for all terms. I imagine this could be done efficiently using the terms component and probably should be cached on the client-side / in memory to increase efficiency. This appears to be a cumbersome method, however, and doesn't give the flexibility to exclude only certain filters.
For these cases, it would probably be better to implement this within solr as a new option for facet.sort, because the information needed is easily available at the time facet counts are computed.
There has been a discussion about this way back in 2009.
Currently, with the larger flexibility of facet.json, e.g. sorting on stats-facets (e.g. avg(price)) of another field, I guess this could be implemented as an additional sort-option. At least for facets of type term, the result-count (df for current result-set) only needs to be divided by the df of that term for the index (docfreq). If the current result-set is the complete index, facets should be sorted by count.
I will probably implement a workaround in the client for fields with a fixed and rather small vocabulary, e.g. based on a second, cached query against the complete index. However, for term fields and the like this might not scale.
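A hedged sketch of that client-side workaround in Python (the URL, core and field names are placeholders, and using two plain facet queries is my own assumption, not an existing Solr feature): fetch facet counts for the current result set and for the whole index, then rank facet values by the ratio of the two, which is the count/docfreq idea described above.

# Client-side "interestingness": facet counts for the current query divided
# by facet counts over the whole index (approximating count / docfreq).
import requests

SOLR = "http://localhost:8983/solr/articles/select"   # placeholder URL/core
FIELD = "keywords"                                     # placeholder facet field

def facet_counts(query):
    resp = requests.get(SOLR, params={
        "q": query, "rows": 0, "wt": "json",
        "facet": "true", "facet.field": FIELD, "facet.limit": -1,
    }).json()
    flat = resp["facet_counts"]["facet_fields"][FIELD]
    return dict(zip(flat[::2], flat[1::2]))            # Solr returns [term, count, ...]

result_counts = facet_counts("solar AND flare")        # current result set
index_counts = facet_counts("*:*")                     # complete index (cache this)

scores = {term: count / index_counts[term]
          for term, count in result_counts.items() if count > 0}
for term, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:20]:
    print(f"{score:.3f}  {term}")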
In the Boolean retrieval model, a query consists of terms combined using different operators. Conjunction is the most obvious choice at first glance, but as query length grows, bad things happen: recall drops significantly with conjunction, and precision drops with disjunction (for example, stanford OR university).
For now we use conjunction in our search system (and the Boolean retrieval model), and we have a problem when a user enters a very rare word or a long sequence of words. For example, if a user enters toyota corolla 4wd automatic 1995, we probably don't have such a document, but if we drop at least one word from the query, we do. As far as I understand, the vector space model solves this problem automatically: documents are not filtered on term presence, they are ranked by it.
So I'm interested in more advanced ways of combining terms in the Boolean retrieval model, and in methods for eliminating rare terms from queries in that model.
It seems like the sky's the limit in terms of defining a ranking function here. You could define a vector where w_i is 0 if the i-th search term doesn't appear in the file and 1 if it does (or the number of times search term i appears in the file, etc.). Then rank pages based on e.g. Manhattan distance or Euclidean distance, sort in descending order, and possibly cull results with distance below a specified match tolerance.
If you want to handle more complex queries, you can put the query into CNF - e.g. (term_1 or term_2 or ... term_n) AND (item_1 or item_2 or ... item_k) AND ... - and then redefine the weights w_i accordingly. You could list with each result the terms that failed to match in the file, so that users would at least know how good a match it is.
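A minimal Python sketch of that scoring idea (everything here is illustrative: the weights are the simple presence scheme described above, the distance is Manhattan distance to an all-ones "ideal" vector, and documents further than max_missing terms away are culled):

# Rank documents by how close their term-presence vector is to the
# "all query terms present" ideal vector, and report the missing terms.
def rank(documents, query_terms, max_missing=2):
    ideal = [1] * len(query_terms)
    scored = []
    for name, text in documents.items():
        tokens = text.lower().split()
        w = [1 if term.lower() in tokens else 0 for term in query_terms]
        distance = sum(abs(a - b) for a, b in zip(ideal, w))  # Manhattan distance
        if distance <= max_missing:          # cull poor matches
            missing = [t for t, bit in zip(query_terms, w) if not bit]
            scored.append((distance, name, missing))
    return sorted(scored)                    # fewest missing terms first

docs = {
    "ad1": "toyota corolla 4wd automatic 1995 low mileage",
    "ad2": "toyota corolla automatic 2001",
    "ad3": "honda civic manual 1995",
}
for distance, name, missing in rank(docs, "toyota corolla 4wd automatic 1995".split()):
    print(name, "missing:", missing)   # ad1 matches fully, ad2 misses two terms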
I guess what I'm really trying to say is that to really get an answer that works for you, you have to define exactly what you are willing to accept as a valid search result. Under the strict interpretation, a query that is looking for A_1 and A_2 and ... A_m should fail if any of the terms is missing...