How to retrieve the number of rows in a table - database

I am trying to retrieve the number of rows in a table, but no matter how many rows the table contains, I always get 1 as the result.
Here is the code:
UpdateData(TRUE);
CDatabase database;
CString connectionstring, sqlquery, Slno,size,portno,header,id;
connectionstring=TEXT("Driver={SQL NATIVE CLIENT};SERVER=CYBERTRON\\SQLEXPRESS;Database=packets;Trusted_Connection=Yes" );
database.Open(NULL, FALSE, FALSE, connectionstring);
CRecordset set(&database);
sqlquery.Format(TEXT("select * from allpacks;"));
set.Open(CRecordset::forwardOnly, sqlquery, NULL);
int x=set.GetRecordCount();
CString temp;
temp.Format(TEXT("%d"), x);
AfxMessageBox(temp);

Did you read the documentation for GetRecordCount()?
The record count is maintained as a "high water mark": the highest-numbered record yet seen as the user moves through the records. The total number of records is only known after the user has moved beyond the last record. For performance reasons, the count is not updated when you call MoveLast. To count the records yourself, call MoveNext repeatedly until IsEOF returns nonzero. Adding a record via CRecordset::AddNew and Update increases the count; deleting a record via CRecordset::Delete decreases the count.
You're not moving through the rows.
Now, if you actually tried to count rows in one of my tables that way, I'd hunt you down and poke you in the eye with a sharp stick. Instead, I'd usually expect you to use SQL like this:
select count(*) num_rows from allpacks;
That SQL statement will always return one row, having a single column named "num_rows".
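The question's code is MFC, but the two approaches are easy to see side by side in Python's built-in sqlite3 module, used here purely as a stand-in for the SQL Server table (table name kept, data invented):

```python
import sqlite3

# In-memory stand-in for the 'allpacks' table from the question.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE allpacks (id INTEGER)")
conn.executemany("INSERT INTO allpacks VALUES (?)", [(i,) for i in range(5)])

# Approach 1: walk the cursor and count rows yourself -- the
# MoveNext/IsEOF pattern; the total is only known after the last row.
cur = conn.execute("SELECT * FROM allpacks")
count = 0
for _ in cur:
    count += 1

# Approach 2: let the database count -- always returns exactly one row.
(num_rows,) = conn.execute(
    "SELECT COUNT(*) AS num_rows FROM allpacks"
).fetchone()

print(count, num_rows)  # both 5
```

Approach 2 is the one the answer recommends: the database does the counting, and the client reads back a single row.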

Related

DynamoDB query row number

Can I somehow get the index of a query response in DynamoDB?
[hashKey exists, sortKey exists]
query { KeyCondExp = "hashKey = smthin1", FilterExp = "nonPrimeKey = smthin2" }
I need the index of the row, according to the sortKey, for the selected item.
When a DynamoDB Query request returns an item - in your example chosen by a specific filter - it will return the full item, including the sort key. If that is what you call "the index of row according to sortKey", then you are done.
If, however, by "index" you mean the numeric index - i.e., if the item is the 100th sort key in this partition (hash key), you want to return the number 100 - well, that you can't do. DynamoDB keeps rows inside a partition sorted by the sort key, but not numbered. You can insert an item in the middle of a million-row partition, and it will be inserted in the right place but DynamoDB won't bother to renumber the million-row list just to maintain numeric indexes.
But there is something else you should know. In the query you described, you are using a FilterExpression to return only specific rows out of the entire partition. With such a request, Amazon will charge you for reading the entire partition, not just the specific rows returned after the filter. If you're charged for reading the entire partition, you might as well just read it all, without a filter, and then you can actually count the rows and get the numeric index of the match if that's what you want. Reading the entire partition will cause you more work at the client (and more network traffic), but will not increase your DynamoDB RCU bill.
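If you do read the whole partition, computing the numeric index client-side is trivial. A minimal sketch in plain Python, with the query response simulated as a list of item dicts already sorted by sort key, the way DynamoDB returns them (in real code the list would come from paginated query() calls, e.g. via boto3; key names are taken from the question):

```python
# Simulated partition contents, sorted by sortKey as a Query would
# return them. Real code would accumulate these from paginated
# DynamoDB query() responses.
items = [
    {"hashKey": "smthin1", "sortKey": "a", "nonPrimeKey": "x"},
    {"hashKey": "smthin1", "sortKey": "b", "nonPrimeKey": "smthin2"},
    {"hashKey": "smthin1", "sortKey": "c", "nonPrimeKey": "x"},
]

# Numeric position (0-based) of the first item matching the filter.
index = next(
    i for i, item in enumerate(items) if item["nonPrimeKey"] == "smthin2"
)
print(index)  # 1
```

Since the filtered request is billed as if the whole partition were read anyway, this client-side count costs no extra RCUs, only client CPU and network.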

Neo4j add huge number of relationships to already existing nodes

I have labels Person and Company with millions of nodes.
I am trying to create a relationship:
(person)-[:WORKS_AT]->(company) based on a unique company number property that exists in both labels.
I am trying to do that with the following query:
MATCH (company:Company), (person:Person)
WHERE company.companyNumber=person.comp_number
CREATE (person)-[:WORKS_AT]->(company)
but the query takes too long to execute and eventually fails.
I have indexes on companyNumber and comp_number.
So, my question is: is there a way to create the relationships in segments (50000, then another 50000, etc.)?
Use a temporary label to mark things as completed, and add a limit step before creating the relationship. When you are all done, just remove the label from everyone.
MATCH (company:Company)
WITH company
MATCH (p:Person {comp_number: company.companyNumber} )
WHERE NOT p:Processed
WITH company, p
LIMIT 50000
MERGE (p) - [:WORKS_AT] -> (company)
SET p:Processed
RETURN COUNT(*) AS processed
That will return the number (usually 50000) of rows that were processed; when it returns less than 50000 (or whatever you set the limit to), you are all done. Run this guy then:
MATCH (n:Processed)
WITH n LIMIT 50000
REMOVE n:Processed
RETURN COUNT(*) AS processed
until you get a result less than 50000. You can probably turn all of these numbers up to 100000 or maybe more, depending on your db setup.
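The driver loop around either of those queries is the same: run a batch, stop when it reports fewer rows than the limit. Sketched in Python with a stand-in function in place of a real Neo4j driver session executing the Cypher above (row counts are invented for illustration):

```python
BATCH = 50000

def make_runner(total_rows):
    """Stand-in for executing the batched Cypher and reading back the
    'processed' count. Simulates total_rows unprocessed rows."""
    state = {"left": total_rows}
    def run_batch():
        processed = min(state["left"], BATCH)
        state["left"] -= processed
        return processed
    return run_batch

run_batch = make_runner(120000)  # pretend 120k (person, company) pairs
batches = 0
while True:
    batches += 1
    if run_batch() < BATCH:  # fewer than the limit -> nothing left
        break
print(batches)  # 3 batches: 50000 + 50000 + 20000
```

A real version would replace run_batch with session.run(cypher).single()["processed"] (or the equivalent in your driver), once for the relationship-creation query and once for the label-removal query.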

Salesforce limit on SOQL?

Using the PHP library for salesforce I am running:
SELECT ... FROM Account LIMIT 100
But the LIMIT is always capped at 25 records. I am selecting many fields (60 of them). Is this a hard limit?
The skeleton code:
$client = new SforceEnterpriseClient();
$client->createConnection("EnterpriseSandboxWSDL.xml");
$client->login(USERNAME, PASSWORD.SECURITY_TOKEN);
$query = "SELECT ... FROM Account LIMIT 100";
$response = $client->query($query);
foreach ($response->records as $record) {
// ... there's only 25 records
}
Here is my check list
1) Make sure you have more than 25 records
2) after your first loop do queryMore to check if there are more records
3) make sure batchSize is not set to 25
I don't use the PHP library for Salesforce, but I would assume that before your
SELECT ... FROM Account LIMIT 100
runs, some other SELECT queries have been performed. If you didn't write them, the PHP library may be issuing them for you ;-)
The Salesforce soap API query method will only return a finite number of rows. There are a couple of reasons why it may be returning less than your defined limit.
The QueryOptions header batchSize has been set to 25. If this is the case, you could try adjusting it. If it hasn't been explicitly set, you could try setting it to a larger value.
When the SOQL statement selects a number of large fields (such as two or more custom fields of type long text), Salesforce may return fewer records than the defined batchSize. The same reduction in batch size occurs with base64-encoded fields, such as Attachment.Body. If this is the case, you can just use queryMore with the QueryLocator from the first response.
In both cases, check the done and size properties of the QueryResult to determine whether you need to call queryMore, and to see the total number of rows that match the SOQL query.
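The queryMore loop those checks describe looks like this; sketched in Python with a fake client standing in for the PHP SforceEnterpriseClient, returning 100 matching rows in pages of 25 the way a batchSize of 25 would (the field names records, done, and queryLocator mirror the SOAP API's QueryResult):

```python
class FakeClient:
    """Stand-in for the Salesforce client: serves `rows` in pages
    of `batch`, mimicking a QueryOptions batchSize of that value."""
    def __init__(self, rows, batch):
        self._rows, self._batch, self._pos = rows, batch, 0
    def _page(self):
        page = self._rows[self._pos:self._pos + self._batch]
        self._pos += self._batch
        return {"records": page,
                "done": self._pos >= len(self._rows),
                "queryLocator": self._pos}
    def query(self, soql):
        return self._page()
    def queryMore(self, locator):
        return self._page()

client = FakeClient(rows=list(range(100)), batch=25)
result = client.query("SELECT Id FROM Account LIMIT 100")
records = list(result["records"])
while not result["done"]:  # keep fetching until done is true
    result = client.queryMore(result["queryLocator"])
    records.extend(result["records"])
print(len(records))  # 100, not 25
```

The point is that a single query() call only ever returns the first batch; stopping there is exactly what makes the LIMIT appear "capped" at the batch size.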
To avoid governor limits it might be better to add all the records to a list, then do everything you need to the records in that list. After you are done, just update your database with: update listName;

lua and lsqlite3: speeding up select statement

I'm using the lsqlite3 lua wrapper and I'm making queries into a database. My DB has ~5million rows and the code I'm using to retrieve rows is akin to:
db = lsqlite3.open('mydb')
local temp = {}
local sql = "SELECT A,B FROM tab where FOO=BAR ORDER BY A DESC LIMIT N"
for row in db:nrows(sql) do temp[row['A']] = row['B'] end
As you can see, I'm filtering on FOO and trying to get the top N rows sorted in descending order by A (I want the ordering applied first and the LIMIT after, not the other way around). I indexed column A, but it doesn't seem to make much of a difference. How can I make this faster?
You need to index the column on which you filter (i.e. the one in the WHERE clause). The reason is that ORDER BY comes into play after filtering, not the other way around.
So you probably should create an index on FOO.
Can you post your table schema?
UPDATE
Also you can increase the sqlite cache, e.g.:
PRAGMA cache_size=100000
You can adjust this depending on the memory available and the size of your database.
UPDATE 2
If you want to have a better understanding of how your query is handled by sqlite, you can ask it for the query plan:
http://www.sqlite.org/eqp.html
UPDATE 3
I did not understand your context properly in my initial answer. If you ORDER BY over some large data set, you probably want sqlite to use the index that serves the ORDER BY rather than the one on the filter column; you can tell sqlite not to use the index on a column by prefixing it with a unary +, like this:
SELECT a, b FROM foo WHERE +a > 30 ORDER BY b
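Another option is an index that serves both the filter and the ordering at once, so sqlite can walk the index and stop after N rows without a separate sort. A sketch using Python's built-in sqlite3 as a stand-in for lsqlite3 (table shape and data are invented to match the question's SELECT A,B ... WHERE FOO=... ORDER BY A DESC LIMIT N):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tab (A INTEGER, B INTEGER, FOO INTEGER)")
conn.executemany("INSERT INTO tab VALUES (?, ?, ?)",
                 [(i, i * 2, i % 10) for i in range(1000)])

# Composite index: equality on FOO, then A descending -- covers both
# the WHERE clause and the ORDER BY.
conn.execute("CREATE INDEX idx_foo_a ON tab (FOO, A DESC)")

rows = conn.execute(
    "SELECT A, B FROM tab WHERE FOO = ? ORDER BY A DESC LIMIT 5", (3,)
).fetchall()
print(rows[0])  # (993, 1986) -- the highest A with FOO = 3

plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT A, B FROM tab WHERE FOO = ? ORDER BY A DESC LIMIT 5", (3,)
).fetchall()
print(plan)  # the plan should mention idx_foo_a, with no sorting step
```

Checking EXPLAIN QUERY PLAN before and after adding the index is the quickest way to confirm sqlite is actually using it.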

gridview row filter

I have a gridview named myGridView with 800k rows. One of the columns is named NAME and it can have values like "Alex (1)", where 1 is the number of the current record for Alex. When I insert a new record for Alex, I want its NAME value to be "Alex (n)", where n is the smallest number not yet taken. I think I should filter with something like:
var rows = (all objects in gridview).Select(rows where NAME.IndexOf("Alex (") > -1)
That would return all the records for "Alex (some number)", and then I suppose I have to filter by number... How do I write a filter that returns the smallest number not yet taken? Can it be faster?
First, I should mention that the code you have pasted won't work, because the grid does not expose a rows collection. And even if it did work, it would be very slow, because it would filter 800k rows on the web server. Don't you think it would be better to request the required information from the DB server, which is optimized for such queries and will process your request faster?
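Wherever the matching names come from (ideally a DB query restricted to names starting with "Alex ("), the number-picking step itself is cheap. A sketch in Python, with the helper name and the sample data invented for illustration:

```python
import re

def next_free_number(names, base):
    """Smallest positive n such that '<base> (n)' is not already taken."""
    pattern = re.compile(re.escape(base) + r" \((\d+)\)$")
    taken = {int(m.group(1)) for name in names
             if (m := pattern.match(name))}
    n = 1
    while n in taken:
        n += 1
    return n

existing = ["Alex (1)", "Alex (2)", "Alex (4)", "Bob (1)"]
print(next_free_number(existing, "Alex"))  # 3 -- the gap before 4
```

With the filtering pushed to the database, only the handful of "Alex (…)" rows ever reach this code, instead of all 800k.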
