I have a graph database with 5M nodes and 10M relationships.
I'm on a MacBook Pro with 4 GB of RAM. I have already tried adjusting the Java heap size and the Neo4j memory settings, without success.
My problem is that I have a simple Cypher query like this:
MATCH (pet:Pet {id:52163})-[r:FOLLOWS]->(friend)
MATCH (friend)-[r:POSTED]->(n)
RETURN friend.id, TYPE(r),LABELS(n),n.id
LIMIT 30;
This query takes 100 ms, which is impressive. But when I add an ORDER BY, the same query takes a long time: ~8 s :/
MATCH (pet:Pet {id:52163})-[r:FOLLOWS]->(friend)
MATCH (friend)-[r:POSTED]->(n)
RETURN friend.id, TYPE(r),LABELS(n),n.id
ORDER BY r.date DESC
LIMIT 30;
Does someone have an idea?
You might want to consider relationship indexes to speed up your query; the date property could be indexed that way. You're also using ORDER BY, which will almost always make your query slower, because the entire result set has to be iterated and sorted before the LIMIT is applied.
Also consider using a single MATCH clause if that suits your needs (each relationship needs its own variable):
MATCH (pet:Pet {id:52163})-[f:FOLLOWS]->(friend)-[p:POSTED]->(n)
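For illustration, here is a sketch of the combined query with the ordering applied, plus an index on the relationship's date property. Treat the index statement as an assumption: relationship property indexes only exist in Neo4j 4.3+, and on older versions you would be limited to node indexes (e.g. on :Pet(id)).

// Assumes Neo4j 4.3+ for relationship property indexes
CREATE INDEX posted_date IF NOT EXISTS FOR ()-[p:POSTED]-() ON (p.date);

// Combined pattern, each relationship with its own variable
MATCH (pet:Pet {id: 52163})-[:FOLLOWS]->(friend)-[p:POSTED]->(n)
RETURN friend.id, TYPE(p), LABELS(n), n.id
ORDER BY p.date DESC
LIMIT 30;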
I have a Postgres 10 database in my Flask app. I'm trying to paginate filtered results on a table with millions of rows. The problem is that the paginate method counts the total number of query results in a totally inefficient way.
Here's an example with a dummy filter:
paginate = Buildings.query.filter(Buildings.height > 10).paginate(1, 10)
Under the hood it performs two queries:
SELECT * FROM buildings WHERE height > 10;

SELECT count(*) FROM (
    SELECT * FROM buildings WHERE height > 10
) AS anon_1;
-- count returns 200,000 rows
The problem is that the count on the raw SELECT without the subquery is quite fast, ~30 ms, but the paginate method wraps it into a subquery that takes ~30 s.
The query plan on a cold database:
Is there a way to use the default paginate method from Flask-SQLAlchemy in a performant way?
EDIT:
To give a better understanding of my problem, here are the real filter operations used in my case, but with dummy field names:
paginate = Buildings.query.filter_by(owner_id=None).filter(Buildings.address.like('%A%')).paginate(1,10)
So the SQL the ORM produces is:
SELECT count(*) AS count_1
FROM (SELECT foo_column, [...]
FROM buildings
WHERE buildings.owner_id IS NULL AND buildings.address LIKE '%A%' ) AS anon_1
That query is already covered by the following indexes:
CREATE INDEX ix_trgm_buildings_address ON public.buildings USING gin (address gin_trgm_ops);
CREATE INDEX ix_buildings_owner_id ON public.buildings USING btree (owner_id)
The problem is just this count, which is very slow.
So it looks like a disk-reading problem. The solutions would be to get faster disks, to get more RAM if it can all be cached, or, if you already have enough RAM, to use pg_prewarm to get all the data into the cache ahead of need. You could also try increasing effective_io_concurrency, so that the bitmap heap scan can have more than one IO request outstanding at a time.
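For example, a rough sketch of those two knobs (the table name follows the question; the concurrency value is just an illustration):

-- pg_prewarm ships as a contrib extension; load the table into shared buffers ahead of need
CREATE EXTENSION IF NOT EXISTS pg_prewarm;
SELECT pg_prewarm('buildings');

-- allow the bitmap heap scan to keep several IO requests in flight at once
SET effective_io_concurrency = 32;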
Your actual query seems to be more complex than the one you show, based on the Filter: entry and on the Rows Removed by Index Recheck: entry in combination with the lack of lossy blocks. There might be some other things to try, but we would need to see the real query and the index definition (which apparently is not just an ordinary btree index on "height").
I am a beginner with Mongo and I made a database with the following topology.
Some metadata fields and one field that contains the experiment results.
Experiment results: a vector of integers with ~150,000 values.
status = db.DataTest.insert_one(
{
"person_num" : num,
"life_cycle" : cycle,
"other_metadata" : meta_data,
"results_of_experiment": big_array
}
)
I inserted something like 7,500 of those documents.
They occupy 8 GB and find operations are really slow.
I don't need to search by the experiment results; I only need the option to retrieve them from the DB as a chunk of data.
Is there another way to store the experiment results in the DB?
Is using GridFS relevant to this case, and not too complicated?
Based on your comments, the most common query is
db.DataTest.find( { "life_cycle": { $gt: 800 } }).limit(5)
Without an index on the life_cycle field, MongoDB is forced to do a collection scan. That is, fetch & evaluate all documents in your collection one by one. In a large collection, this will take a long time.
MongoDB does not create indexes automatically. You would have to observe your most common queries, and create indexes to support those queries. As far as I know, there is no automatic index creation in any database software; SQL, NoSQL, or otherwise.
Database indexing is a deep subject and cannot be explained in a short answer.
Having said that, if you create an index on the life_cycle field, it should improve your query times, but only for the query you posted above. Other query types would likely require different indexes. You can create it in the mongo shell:
db.DataTest.createIndex({life_cycle: 1})
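To confirm the index is actually picked up, the same query can be checked with explain() (a sketch using the field from your documents):

db.DataTest.find( { "life_cycle": { $gt: 800 } } ).limit(5).explain("executionStats")
// with the index in place, the winning plan should show an IXSCAN rather than a COLLSCAN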
I encourage you to read these pages to understand more about indexing in MongoDB:
https://docs.mongodb.com/manual/indexes/
https://docs.mongodb.com/manual/applications/indexes/
https://docs.mongodb.com/manual/tutorial/create-indexes-to-support-queries/
I have a situation where I have to search for documents in Solr using multiple OR keywords. The number of keywords may go up to 5,000, which results in an awfully large query with 5,000 OR conditions and causes the Solr server to hang. Is there any other way I can design the query to make this work? A short sample of the query is given below:
tweet_id:337931022601699328 OR 337931064293081089 OR 337931089538584576 OR 337931098761871361 OR 337931138851016704 OR 337931143099854848 OR 337931160082591745 OR 337931163857453056 OR 337931230819516416 OR 337931239996665857 OR 337931287518126080 OR 337931322850951168 OR 337931325648535553 OR 337931331398934528 OR 337931413057830912 OR 337931442363441152 OR 337931448629731329 OR 337931453344129025 OR 337931465016877056 OR 337931482066726912 OR 337931514388029442 OR 337931533149155328 OR 337931645527130114 OR 337931704935256064 OR 337931784459268096 OR 337931845545103360 OR 337931889086185472 OR 337931892668108801 OR 337931963983855617 OR 337932154212319233 OR 337932176454721536 OR 337932193198374912 OR 337932229659459584 OR 337932437290090496 OR 337932436807749632 OR 337932436828725250 OR 337932437449474048 OR 337932448518250496 OR 337932458832035843 OR 337932458634915840 OR 337932458278387712 OR 337932474246119425 OR 337932476209041409 OR 337932477408620544 OR 337932480478842880 OR 337932478775959554 OR 337932480566931456 OR 337932478763376640 OR 337932481841999872 OR 337932479337992192 OR 337932479296045057 OR 337932479333797889 OR 337932484614434816 OR 337932484606038017 OR 337932482777317376 OR 337932484664758272 OR 337932482785718273 OR 337932484589273088 OR 337932487399444481 OR 337932489031032833 OR 337932489114923008 OR 337932486573166592 OR 337932490704560130 OR 337932489144270848 OR 337932488762601472 OR 337932492097069056 OR 337932497780355072 OR 337932498900230144 OR 337932499722321921 OR 337932514431729665 OR 337932561806409731 OR 337932567284154368 OR 337932567300935680 OR 337932574603214848 OR 337932571134533632 OR 337932574674518016 OR 337932575484026881 OR 337932578206121984 OR 337932582215892994 OR 337932586653454336 OR 337932584917024768 OR 337932592986865664 OR 337932597017587712 ....
I intend to facet the result based on a few fields.
I'm not sure whether this solution will help you or not, but here is something I tried for your problem.
Whatever query you give to Solr, it first parses that query into its own understandable format and then executes it. You can do a little arithmetic before querying Solr. Take the following scenario for your use case.
Suppose you have 5,000 tweet_id values in total and you have to do an OR query on around 4,000 of them. In this type of scenario, it's better to query on the other (5,000 - 4,000 = 1,000) tweet_id values with a negated (NOT) query, so the query carries far fewer values.
So, try querying with the rest of the tweet_id values using a negated query instead of an OR query.
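As a sketch, the negated form could look like the following (the ids are illustrative; the leading *:* makes the purely negative clause match against the full index):

*:* -tweet_id:(337931022601699328 OR 337931064293081089 OR 337931089538584576)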
If I were you, I'd create a new field denoting this custom_list_id. Whenever you generate a new list, index the new data and then query by the list id.
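A sketch of that idea (the field name custom_list_id, the list id and the facet field are hypothetical): each tweet is indexed with the id of the list it belongs to, so the query collapses to a single clause and faceting works as usual:

q=custom_list_id:42&facet=true&facet.field=user_screen_name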
Does anybody have experience with getting random results from an index with 100,000,000+ (100 million) records?
The goal is getting 30 results ordered randomly, at least 100 times per second.
My records are actually in MySQL, but ORDER BY RAND() on huge tables is the easiest way to kill MySQL.
Sphinxsearch or something else, what do you recommend?
I don't have that big an index to try on.
barry@server:~/modules/sphinx-2.0.1-beta/api# time php test.php -i gi_stemmed --sortby @random --select id
Query '' retrieved 20 of 3067775 matches in 0.081 sec.
Query stats:
Matches:
<SNIP>
real 0m0.100s
user 0m0.010s
sys 0m0.010s
This is on a reasonably powerful dedicated server - that is serving live queries (~20qps)
But to be honest, if you don't need filtering (i.e. each query having a 'WHERE' clause), you can just set up a system that returns random results; you can do this with MySQL. Just using ORDER BY RAND() is evil (and Sphinx, while better at sorting than MySQL, is still doing basically the same thing).
How 'sparse' is your data? If most of your ids are used, you can just do something like:
$ids = array();
// getOne() is a helper that returns a single scalar value from the query
$max = getOne("SELECT MAX(id) FROM table");
foreach (range(1, 30) as $idx) {
    $ids[] = rand(1, $max);
}
$query = "SELECT * FROM table WHERE id IN (".implode(',', $ids).")";
(You may want to use shuffle() in PHP on the results afterwards, as you are likely to get the rows out of MySQL in id order.)
This will be much more efficient. If you do have holes, perhaps just look up 33 rows; sometimes you will get more than you need (just discard the extras), but you should still get 30 most of the time.
(Of course you could cache '$max' somewhere, so it doesn't have to be looked up all the time.)
Otherwise you could set up a dedicated 'shuffled' list: basically a FIFO buffer, with one thread filling it with random results (perhaps using the above system, 3000 ids at a time), while the consumers just read random results directly out of this queue.
A FIFO is not particularly easy to implement with MySQL, so maybe use a different system, such as Redis or even just memcached.
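A rough sketch of that producer/consumer idea with the phpredis extension (the queue name, the threshold and the getOne() helper from above are illustrative):

// Producer: keep a Redis list topped up with random ids
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
if ($redis->lLen('random_ids') < 1000) {
    $max = getOne("SELECT MAX(id) FROM table"); // same helper as in the snippet above
    foreach (range(1, 3000) as $i) {
        $redis->rPush('random_ids', rand(1, $max));
    }
}

// Consumer: pop 30 ids off the front of the queue and fetch those rows
$ids = array();
for ($i = 0; $i < 30; $i++) {
    $ids[] = (int) $redis->lPop('random_ids');
}
$query = "SELECT * FROM table WHERE id IN (".implode(',', $ids).")";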
I executed a query like "Address:Jack*". It shows numFound = 5214 and displays 100 documents on the results page (I changed the default number of displayed results from 10 to 100).
How can I get all the documents?
I remember using &rows=2147483647.
2,147,483,647 is the maximum value of an integer. I recall once using a number bigger than that and getting a NumberFormatException because it couldn't be parsed into an int. I don't know if they use a Long nowadays, but 2 billion rows is normally more than enough.
Small note:
Be careful if you are planning to do this in production. If you run a query like *:* and your index is big, you could be transferring a couple of gigabytes in that query.
If you know you won't have many docs, go ahead and use integer's max value.
On the other hand, if you are writing a one-time script and just need to dump all results (for example, document IDs), then this approach is valid, if you don't mind waiting 3-5 minutes for the query to return.
Don't use &rows=2147483647.
Don't use Integer.MAX_VALUE (2147483647) as the value of rows in production. This will heavily slow down your query even if you have a small result set, because Solr preallocates a queue of that size. See https://issues.apache.org/jira/browse/SOLR-7580
I strongly suggest using Exporting Result Sets instead:
It’s possible to export fully sorted result sets using a special rank query parser and response writer specifically designed to work together to handle scenarios that involve sorting and exporting millions of records.
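A sketch of such an /export request (the collection name is hypothetical; the handler requires an explicit sort and an fl limited to docValues fields):

http://localhost:8983/solr/mycollection/export?q=Address:Jack*&sort=id asc&fl=id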
Or I suggest using Deep Paging.
Simple pagination is an easy thing when you have few documents to read and all you have to do is play with the start and rows parameters. But this is not feasible when you have many documents, I mean hundreds of thousands or even millions.
This is the kind of thing that can bring your Solr server to its knees.
For typical applications displaying search results to a human user, this tends to not be much of an issue since most users don’t care about drilling down past the first handful of pages of search results — but for automated systems that want to crunch data about all of the documents matching a query, it can be seriously prohibitive.
This means that if you have a website and are paging search results, a real user does not go that far, but consider on the other hand what can happen if a spider or a scraper tries to read all of the website's pages.
Now we are talking of Deep Paging.
I suggest reading this amazing post:
https://lucidworks.com/post/coming-soon-to-solr-efficient-cursor-based-iteration-of-large-result-sets/
And take a look at this document page:
https://solr.apache.org/guide/pagination-of-results.html
And here is an example that tries to explain how to paginate using cursors.
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrQuery.ORDER;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;
import org.apache.solr.common.params.CursorMarkParams;

SolrQuery solrQuery = new SolrQuery();
solrQuery.setRows(500);
solrQuery.setQuery("*:*");
solrQuery.addSort("id", ORDER.asc); // Pay attention to this line: a cursor needs a sort that includes the uniqueKey field
String cursorMark = CursorMarkParams.CURSOR_MARK_START;
boolean done = false;
while (!done) {
    solrQuery.set(CursorMarkParams.CURSOR_MARK_PARAM, cursorMark);
    QueryResponse rsp = solrClient.query(solrQuery); // solrClient is an existing SolrClient instance
    String nextCursorMark = rsp.getNextCursorMark();
    for (SolrDocument d : rsp.getResults()) {
        ... // process each document
    }
    if (cursorMark.equals(nextCursorMark)) {
        done = true; // the cursor stops advancing once all results have been read
    }
    cursorMark = nextCursorMark;
}
Returning all the results is never a good option, as it would be very slow.
Can you mention your use case?
The Solr rows parameter helps you tune the number of results to be returned.
However, I don't think there is a way to tune rows to return all results; it doesn't take -1 as a value.
So you would need to set a high value for all the results to be returned.
What you should do is first create a SolrQuery as shown below and set the number of documents you want to fetch in a batch.
int lastResult = 0; // this is for processing the future batch
String query = "id:[" + lastResult + " TO *]"; // just considering id for the sake of simplicity
SolrQuery solrQuery = new SolrQuery(query).setRows(500); // setRows sets the batch size; change 500 to whatever size you want
solrQuery.setSort("id", SolrQuery.ORDER.asc); // sort by id so the last document in a batch really is the highest id seen so far
SolrDocumentList results = solrClient.query(solrQuery).getResults(); // execute this statement
Here I am considering an example of searching by id; you can replace it with whatever parameter you want to search on.
The "lastResult" variable is the one you change after executing the first batch of 500 records (500 is the batch size); set it to the last id from the results.
This will let you execute the next batch starting from the last result of the previous batch.
Hope this helps. Drop a comment below if you need any clarification.
For selecting all documents in dismax/edismax via the Solarium PHP client, the normal query syntax *:* does not work. To select all documents, set the default query value in the Solarium query to an empty string. This is required because the default query in Solarium is *:*. Also set the alternative query to *:*. The dismax/edismax normal query syntax does not support *:*, but the alternative query syntax does.
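A sketch with Solarium (the exact method names are an assumption to verify against your Solarium version; $client is an existing Solarium\Client):

$query = $client->createSelect();
$query->setQuery('');                  // empty default query instead of *:*
$dismax = $query->getDisMax();         // switch the select query to the dismax parser
$dismax->setQueryAlternative('*:*');   // q.alt matches all documents when q is empty
$resultset = $client->select($query);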
For more details, the following book can be referred to:
http://www.packtpub.com/apache-solr-php-integration/book
As the other answers pointed out, you can set rows to the maximum integer value to get back all the results for a query.
I would recommend, though, using Solr's pagination feature and building a function that returns all the results using the cursorMark API. The gist of it is that you set the cursorMark parameter to '*', set the page size (the rows parameter), and with each response you get a cursorMark for the next page, so you execute the same query, only with the cursorMark from the last response. This gives you more flexibility over how many of the results you want back, in a much more performant way.
The way I dealt with the problem is by running the query twice:
// Start with your (usually small) default page size
solrQuery.setRows(50);
QueryResponse response = solrResponse(query);
if (response.getResults().getNumFound() > 50) {
    // getNumFound() returns a long, so cast it before passing to setRows(int)
    solrQuery.setRows((int) response.getResults().getNumFound());
    response = solrResponse(query);
}
It makes two calls to Solr, but gets you all matching records... with a small performance penalty.
query.setRows(Integer.MAX_VALUE);
works for me!!