I have labels Person and Company with millions of nodes.
I am trying to create a relationship:
(person)-[:WORKS_AT]->(company) based on a unique company number property that exists in both labels.
I am trying to do that with the following query:
MATCH (company:Company), (person:Person)
WHERE company.companyNumber=person.comp_number
CREATE (person)-[:WORKS_AT]->(company)
but the query takes too long to execute and eventually fails.
I have indexes on companyNumber and comp_number.
So, my question is: is there a way to create the relationships in batches, for example 50000 at a time, then another 50000, and so on?
Use a temporary label to mark things as completed, and add a limit step before creating the relationship. When you are all done, just remove the label from everyone.
MATCH (company:Company)
WITH company
MATCH (p:Person {comp_number: company.companyNumber} )
WHERE NOT p:Processed
WITH company, p
LIMIT 50000
MERGE (p)-[:WORKS_AT]->(company)
SET p:Processed
RETURN COUNT(*) AS processed
That will return the number of rows that were processed (usually 50000); when it returns fewer than 50000 (or whatever you set the limit to), you are all done. Then run this:
MATCH (n:Processed)
WITH n LIMIT 50000
REMOVE n:Processed
RETURN COUNT(*) AS processed
until you get a result less than 50000. You can probably turn all of these numbers up to 100000 or maybe more, depending on your db setup.
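The batch loop above can be driven from a client script. Here is a minimal Python sketch of that control flow; `run_batch` is a hypothetical stand-in for executing the Cypher statement (e.g. through the neo4j Python driver) and reading back the `processed` count, simulated below so the loop can be shown end to end.

```python
# Repeatedly run one batch until it processes fewer rows than the limit,
# which signals that no unprocessed rows remain.
def drive_batches(run_batch, limit=50000):
    total = 0
    while True:
        processed = run_batch(limit)   # would execute the Cypher batch
        total += processed
        if processed < limit:          # short batch: backlog exhausted
            return total

# Simulate a backlog of 120,000 unprocessed Person nodes.
remaining = [120000]
def fake_batch(limit):
    n = min(limit, remaining[0])
    remaining[0] -= n
    return n

print(drive_batches(fake_batch))  # 120000
```

The same loop works for the label-removal pass: just point `run_batch` at the REMOVE query instead.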
Related
I need to build one MSSQL query that selects one row that is the best match.
Ideally, we have a match on street, zip code and house number.
Only if that does not deliver any results, a match on just street and zip code is sufficient
I have this query so far:
SELECT TOP 1 * FROM realestates
WHERE
(Address_Street = '[Street]'
AND Address_ZipCode = '1200'
AND Address_Number = '160')
OR
(Address_Street = '[Street]'
AND Address_ZipCode = '1200')
MSSQL currently gives me the result where the Address_Number is NOT 160, so it seems like the 2nd clause (where only street and zipcode have to match) is taking precedence over the 1st. If I switch around the two OR clauses, same result :)
How could I prioritize the first OR clause, so that MSSQL stops looking for other results if we found a match where the three fields are present?
The problem here isn't the WHERE (though it is a "problem"), it's the lack of an ORDER BY. You have a TOP (1), but you have nothing that tells the data engine which row is the "top" row, so an arbitrary row is returned. You need to provide logic, in the ORDER BY, to tell the data engine which is the "first" row. With the rudimentary logic you have in your question, this would likely be:
SELECT TOP (1)
{Explicit Column List}
FROM realestates
WHERE Address_Street = '[Street]'
AND Address_ZipCode = '1200'
ORDER BY CASE Address_Number WHEN '160' THEN 1 ELSE 2 END;
You can't prioritize anything in the WHERE clause. It always results in ALL the matching rows. What you can do is use TOP or FETCH to limit how many results you will see.
However, in order for this to be effective, you MUST have an ORDER BY clause. SQL tables are unordered sets by definition. This means without an ORDER BY clause the database is free to return rows in any order it finds convenient. Mostly this will be the order of the primary key, but there are plenty of things that can change this.
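The ORDER BY CASE approach can be demonstrated end to end. Below is a small Python sketch using sqlite3 as a stand-in engine (TOP (1) becomes LIMIT 1 in SQLite); the table and column names follow the question, and the sample rows are illustrative.

```python
import sqlite3

# Two candidate rows: one matching street + zip only, one matching all
# three fields. The CASE in ORDER BY ranks the exact house-number match
# first, so LIMIT 1 returns it whenever it exists.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE realestates (
    Address_Street TEXT, Address_ZipCode TEXT, Address_Number TEXT)""")
conn.executemany(
    "INSERT INTO realestates VALUES (?, ?, ?)",
    [("[Street]", "1200", "999"),    # street + zip only
     ("[Street]", "1200", "160")])   # all three fields match
row = conn.execute("""
    SELECT * FROM realestates
    WHERE Address_Street = '[Street]' AND Address_ZipCode = '1200'
    ORDER BY CASE Address_Number WHEN '160' THEN 1 ELSE 2 END
    LIMIT 1""").fetchone()
print(row)  # ('[Street]', '1200', '160')
```

If the exact-match row is deleted, the same query falls back to the street + zip row, which is exactly the prioritization asked for.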
I have a Postgres 10 database in my Flask app. I'm trying to paginate filtering results on a table with millions of rows. The problem is that the paginate method's counting of the total number of query results is totally inefficient.
Heres the example with dummy filter:
paginate = Buildings.query.filter(height>10).paginate(1,10)
Under the hood it performs 2 queries:
SELECT * FROM buildings where height > 10
SELECT count(*) FROM (
SELECT * FROM buildings where height > 10
)
--------
count returns 200,000 rows
The problem is that the count on the raw select without the subquery is quite fast (~30 ms), but the paginate method wraps it in a subquery that takes ~30 s.
The query plan on a cold database:
Is there an option of using default paginate method from flask-sqlalchemy in performant way?
EDIT:
To give a better understanding of my problem, here are the real filter operations used in my case, but with dummy field names:
paginate = Buildings.query.filter_by(owner_id=None).filter(Buildings.address.like('%A%')).paginate(1,10)
So the SQL the ORM produces is:
SELECT count(*) AS count_1
FROM (SELECT foo_column, [...]
FROM buildings
WHERE buildings.owner_id IS NULL AND buildings.address LIKE '%A%' ) AS anon_1
That query is already optimized by indices from:
CREATE INDEX ix_trgm_buildings_address ON public.buildings USING gin (address gin_trgm_ops);
CREATE INDEX ix_buildings_owner_id ON public.buildings USING btree (owner_id)
The problem is just this count function, which is very slow.
So it looks like a disk-reading problem. The solutions would be to get faster disks, to get more RAM if it all can be cached, or, if you already have enough RAM, to use pg_prewarm to get all the data into the cache ahead of need. Or try increasing effective_io_concurrency, so that the bitmap heap scan can have more than one IO request outstanding at a time.
Your actual query seems to be more complex than the one you show, based on the Filter: entry and based on the Row Removed by Index Recheck: entry in combination with the lack of Lossy blocks. There might be some other things to try, but we would need to see the real query and the index definition (which apparently is not just an ordinary btree index on "height").
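If the wrapped count stays slow, one workaround is to bypass paginate's automatic count and issue a plain count(*) with the same WHERE clause yourself. Here is a minimal sketch of that idea, with sqlite3 standing in for Postgres and illustrative table/column names; the flask-sqlalchemy specifics are deliberately left out.

```python
import sqlite3

# Hand-rolled pagination: one plain count(*) over the WHERE clause (no
# subquery wrapping), plus one LIMIT/OFFSET query for the page items.
def paginate(conn, where_sql, params, page, per_page):
    total = conn.execute(
        "SELECT count(*) FROM buildings WHERE " + where_sql, params
    ).fetchone()[0]
    items = conn.execute(
        "SELECT * FROM buildings WHERE " + where_sql + " LIMIT ? OFFSET ?",
        params + (per_page, (page - 1) * per_page),
    ).fetchall()
    return items, total

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE buildings (id INTEGER, height REAL)")
conn.executemany("INSERT INTO buildings VALUES (?, ?)",
                 [(i, float(i)) for i in range(100)])
items, total = paginate(conn, "height > ?", (10,), page=1, per_page=10)
print(total, len(items))  # 89 10
```

Whether the plain count is actually faster than the wrapped one depends on the planner; on Postgres it avoids materializing all the selected columns in the inner query, which is the cost the question is hitting.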
I have a question in regards to preparing my dataset for research.
I have a dataset in SPSS 20 in long format, as I am researching at the individual level over multiple years. However, some individuals were added twice to my dataset because there were differences in some variables matched to those individuals (5000 individuals with 25 variables per individual). I would like to merge those duplicates so that I can run my analysis over time. For the variables that differ between the duplicates, I would like SPSS to create additional variables once all the duplicates are merged.
Is this at all possible and if yes HOW?
I suggest the following steps:
Create an auxiliary variable "PrimaryLast" with the procedure Data -> Identify Duplicate Cases by..., setting "Define matching cases by" to your case ID.
Create 2 new auxiliary datasets with Data -> Select Cases, using the conditions "PrimaryLast = 0" and "PrimaryLast = 1" and the option "Copy selected cases to new dataset".
Merge both auxiliary datasets with the procedure Data -> Merge Files -> Add Variables; rename the duplicated variable names in the left box, move them to the right box, and select your case ID as the key.
Don't forget to check that you effectively made a "full outer join": if you lost the non-duplicated cases and have only duplicated cases in your dataset, just merge the datasets from step 2 in a different order in step 3.
Try this:
sort cases by caseID otherVar.
compute ind=1.
if $casenum>1 and caseID=lag(caseID) ind=lag(ind)+1.
casestovars /id=caseID /index=ind.
If a caseID is repeated more than once, after the restructure there will be only one line for that case, while all the variables will be repeated with indexes.
If the order of the repetitions within a caseID matters, replace otherVar in the sort command with the corresponding variable (e.g. date). This way your new variables will also be indexed accordingly.
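For readers unfamiliar with CASESTOVARS, the restructure above amounts to the following: sort the long-format rows, give each repeat of a caseID a running index, and spread the repeated values into indexed variables on a single record per case. A plain-Python sketch with illustrative field names:

```python
from collections import defaultdict

# Long-format rows: caseID 1 appears twice with different values of "var".
rows = [
    {"caseID": 1, "var": "a"},
    {"caseID": 2, "var": "b"},
    {"caseID": 1, "var": "c"},
]

# One wide record per caseID; repeats become var.1, var.2, ... as
# CASESTOVARS does with /index.
wide = defaultdict(dict)
for row in sorted(rows, key=lambda r: r["caseID"]):
    ind = len(wide[row["caseID"]]) + 1          # running index per caseID
    wide[row["caseID"]]["var.%d" % ind] = row["var"]

print(dict(wide))  # {1: {'var.1': 'a', 'var.2': 'c'}, 2: {'var.1': 'b'}}
```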
I'm using the lsqlite3 lua wrapper and I'm making queries into a database. My DB has ~5million rows and the code I'm using to retrieve rows is akin to:
db = lsqlite3.open('mydb')
local temp = {}
local sql = "SELECT A,B FROM tab where FOO=BAR ORDER BY A DESC LIMIT N"
for row in db:nrows(sql) do temp[row['key']] = row['col1'] end
As you can see, I'm trying to get the top N rows sorted in descending order by A (I want the sort applied first and then the LIMIT, not the other way around). I indexed column A, but it doesn't seem to make much of a difference. How can I make this faster?
You need to index the column on which you filter (i.e. the one in the WHERE clause). The reason is that ORDER BY comes into play after filtering, not the other way around.
So you probably should create an index on FOO.
Can you post your table schema?
UPDATE
Also you can increase the sqlite cache, e.g.:
PRAGMA cache_size=100000
You can adjust this depending on the memory available and the size of your database.
UPDATE 2
If you want to have a better understanding of how your query is handled by sqlite, you can ask it to provide you with the query plan:
http://www.sqlite.org/eqp.html
UPDATE 3
I did not understand your context properly in my initial answer. If you ORDER BY on some large data set, you probably want sqlite to use the index that supports the sort, not the one on the filter column, so you can tell sqlite not to use the index on FOO this way:
SELECT a, b FROM foo WHERE +a > 30 ORDER BY b
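Both points, reading the query plan and the unary `+` trick, can be checked quickly from Python's built-in sqlite3 module. The table and index names below are illustrative; note how prefixing the column with `+` in the WHERE clause makes the planner drop the index on that column.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b INTEGER)")
conn.execute("CREATE INDEX idx_a ON t (a)")

# EXPLAIN QUERY PLAN returns rows whose last column is a human-readable
# description of each plan step (e.g. "SEARCH t USING INDEX idx_a ...").
def plan(sql):
    return " ".join(r[-1] for r in conn.execute("EXPLAIN QUERY PLAN " + sql))

print(plan("SELECT a, b FROM t WHERE a > 30 ORDER BY b"))   # uses idx_a
print(plan("SELECT a, b FROM t WHERE +a > 30 ORDER BY b"))  # full table scan
```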
I am trying to retrieve the number of rows in a table, but no matter the actual number, I always get 1 as the result.
Here is the code:
UpdateData(TRUE);
CDatabase database;
CString connectionstring, sqlquery, Slno,size,portno,header,id;
connectionstring=TEXT("Driver={SQL NATIVE CLIENT};SERVER=CYBERTRON\\SQLEXPRESS;Database=packets;Trusted_Connection=Yes" );
database.Open(NULL, FALSE, FALSE, connectionstring);
CRecordset set(&database);
sqlquery.Format(TEXT("select * from allpacks;"));
set.Open(CRecordset::forwardOnly, sqlquery, NULL);
int x=set.GetRecordCount();
CString temp;
temp.Format(TEXT("%d"), x);
AfxMessageBox(temp);
Did you read the documentation for GetRecordCount()?
The record count is maintained as a "high water mark": the highest-numbered record yet seen as the user moves through the records. The total number of records is only known after the user has moved beyond the last record. For performance reasons, the count is not updated when you call MoveLast. To count the records yourself, call MoveNext repeatedly until IsEOF returns nonzero. Adding a record via CRecordset::AddNew and Update increases the count; deleting a record via CRecordset::Delete decreases the count.
You're not moving through the rows.
Now, if you actually tried to count rows in one of my tables that way, I'd hunt you down and poke you in the eye with a sharp stick. Instead, I'd usually expect you to use SQL like this:
select count(*) num_rows from allpacks;
That SQL statement will always return one row, having a single column named "num_rows".
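The two counting strategies, walking every row the way MoveNext/IsEOF would, versus a single count(*) query, can be contrasted in a few lines. This sketch uses Python's sqlite3 as a stand-in database with an illustrative table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE allpacks (id INTEGER)")
conn.executemany("INSERT INTO allpacks VALUES (?)",
                 [(i,) for i in range(5)])

# Client-side count: iterate the whole result set (what MoveNext amounts to).
iterated = sum(1 for _ in conn.execute("SELECT * FROM allpacks"))

# Server-side count: one row back, one column, no data transferred.
(num_rows,) = conn.execute(
    "SELECT count(*) AS num_rows FROM allpacks").fetchone()

print(iterated, num_rows)  # 5 5
```

Both agree, but the count(*) version does not drag every row across the connection, which matters once the table is large.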