NPGSQL CURSOR MOVE ALL getting affected rows (or row count) - npgsql

I am trying to get the "rows affected" count from the following query using npgsql:
DECLARE cursor SCROLL CURSOR FOR SELECT * FROM public."table";
MOVE ALL IN cursor;
Using PgAdmin's SQL Editor, running this query gives: "Query returned successfully: 5736 rows affected, 31 msec execution time."
Using npgsql:
var transaction = conn.BeginTransaction();
NpgsqlCommand command = new NpgsqlCommand("DECLARE cursor SCROLL CURSOR FOR SELECT * FROM public.\"PARTIJ\"; MOVE ALL IN cursor", conn);
var count = command.ExecuteNonQuery();
// I verified here that the cursor did move to the end of the result -- so the cursor is working.
transaction.Commit();
I was expecting 5736, but count equals -1. Can I get the same rows affected count as PgAdmin does using npgsql?

This is probably happening because you're trying to get the affected row count of a multi-statement command - your first statement creates the cursor, and the second one actually moves it (although I'm not sure what "rows affected" would mean when simply moving a cursor, as opposed to fetching). Try sending your statements as two different commands and getting the affected row count of the second.
All that aside, any particular reason for using cursors here and not just doing SELECT COUNT(*) FROM public."table"?
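A minimal sketch of that two-command approach, reusing the cursor and table names from the question (note that depending on the Npgsql version, ExecuteNonQuery may still report -1 for MOVE, in which case the SELECT COUNT(*) fallback at the end always works):
using (var transaction = conn.BeginTransaction())
{
    using (var declare = new NpgsqlCommand("DECLARE cursor SCROLL CURSOR FOR SELECT * FROM public.\"PARTIJ\"", conn))
    {
        declare.ExecuteNonQuery();
    }
    using (var move = new NpgsqlCommand("MOVE ALL IN cursor", conn))
    {
        var count = move.ExecuteNonQuery(); // rows the cursor moved over, or -1 on older Npgsql
    }
    transaction.Commit();
}
// Simpler alternative if all you need is the row count:
using (var countCmd = new NpgsqlCommand("SELECT COUNT(*) FROM public.\"PARTIJ\"", conn))
{
    var rows = (long)countCmd.ExecuteScalar(); // PostgreSQL returns COUNT(*) as bigint
}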

Related

Optimize SQL UPDATE statement/SqlCommand

I have the following statement:
UPDATE Table SET Column=Value WHERE TableID IN ({0})
I have a comma-delimited list of TableIDs that can be pretty lengthy (for replacing {0}). I've found that this is faster than using a SqlDataAdapter, however I also noticed that if the command text is too long, the SqlCommand might perform poorly.
Any ideas?
This is inside of a CLR trigger. Each SqlCommand execution incurs some overhead. I've determined that the above command is better than SqlDataAdapter.Update() because Update() updates individual records, executing one SQL statement per record.
...I ended up doing the following (trigger time went from 0.7 to 0.25 seconds):
UPDATE T SET Column=Value FROM Table T INNER JOIN INSERTED AS I ON (I.TableID=T.TableID)
When there is a long list, the execution plan is probably using an index scan instead of an index seek. In this case, you are probably better off limiting the list to several items, but call the update command repeatedly until all items in the list are accommodated.
Maybe split your list of IDs into batches. I assume you have the list of ID numbers in a collection and you're building up the {0} string, so maybe update 20 or 100 at a time.
Wrap it in a transaction and perform all the updates before calling Commit()
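Putting the batching and transaction suggestions together, a rough sketch (the tableIds list and connection variable are placeholders, and the table/column names are the ones from the question; requires System.Linq and System.Data.SqlClient):
const int batchSize = 100;
using (var tx = connection.BeginTransaction())
{
    for (int i = 0; i < tableIds.Count; i += batchSize)
    {
        // Build the IN (...) list for this batch only, keeping the command text short.
        var inList = string.Join(",", tableIds.Skip(i).Take(batchSize));
        using (var cmd = new SqlCommand(
            "UPDATE Table SET Column=Value WHERE TableID IN (" + inList + ")",
            connection, tx))
        {
            cmd.ExecuteNonQuery();
        }
    }
    tx.Commit();
}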
If this is a stored procedure I would use a Table-Valued Parameter. If this is an ad hoc batch then consider populating a temporary table and joining to it in your batch. Your IN clause is effectively expanded into a long chain of ORs, which can quite easily negate the use of an index. With a JOIN you may get a better plan from the optimizer.
DECLARE @Value VARCHAR(100) = 'Some value';
CREATE TABLE #Table (TableID INT PRIMARY KEY);
INSERT INTO #Table VALUES (1),(2),(3),(n)...;
MERGE INTO Schema.Table AS target
USING #Table AS source
ON target.TableID = source.TableID
WHEN MATCHED THEN UPDATE SET Column = @Value;
If you can use a stored procedure, you could use a MERGE statement instead.
MERGE INTO Table AS target
USING @TableIDList AS source
ON target.TableID = source.ID
WHEN MATCHED THEN UPDATE SET Column = source.Value
where @TableIDList is a table type sent from code as a table-valued parameter with the IDs (and possibly Values) you need.
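For the stored-procedure route, a sketch of how the table-valued parameter might be sent from C# (the table type, procedure name, and variable names are assumptions, not from the thread; the type would be created once on the server with something like CREATE TYPE dbo.TableIDList AS TABLE (ID INT PRIMARY KEY)):
var idTable = new DataTable();
idTable.Columns.Add("ID", typeof(int));
foreach (var id in tableIds)
    idTable.Rows.Add(id);
using (var cmd = new SqlCommand("dbo.UpdateByIdList", connection))
{
    cmd.CommandType = CommandType.StoredProcedure;
    var p = cmd.Parameters.AddWithValue("@TableIDList", idTable);
    p.SqlDbType = SqlDbType.Structured;  // marks the parameter as a TVP
    p.TypeName = "dbo.TableIDList";      // the server-side table type
    cmd.ExecuteNonQuery();
}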

Select Into trigger linked to Update command in MSSQL changes the return value of the affected rows - How to avoid this?

We are using Ado library in MS Visual C++ to use MS SQL database in the following way:
_CommandPtr pCmd;
...
pCmd->CommandText = "update …";
pCmd->Execute( &lRowsAffected, 0, adExecuteNoRecords );
After executing the update command, the lRowsAffected variable gives us the number of affected rows, which is exactly what we want. However, if a trigger that starts with a SELECT INTO command is defined for the update in MS SQL, we get the number of rows selected by the SELECT INTO command as the value of lRowsAffected. Instead, we would like to know how many rows were affected by the update command itself. How can we achieve this?
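No fix is attached to this thread, but the usual remedy is worth sketching: put SET NOCOUNT ON at the top of the trigger so its own statements don't emit row counts, leaving lRowsAffected to reflect only the UPDATE. The trigger, table, and column names below are illustrative, not from the question:
CREATE TRIGGER trg_MyTable_Update ON dbo.MyTable
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;  -- suppress "rows affected" counts from the trigger's own statements
    SELECT TableID INTO #changed FROM inserted;  -- placeholder for the trigger's SELECT INTO
    -- ... rest of the trigger's work ...
END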

Sql Server Ignore rowlock hint

This is a general question about how to lock a range of values (and nothing else!) when they do not exist in the table yet. What prompted the question is that I want to do "insert if not exists"; I don't want to use MERGE because I need to support SQL Server 2005.
In the first connection I:
begin transaction
select data from the table using (SERIALIZABLE, ROWLOCK) + a where clause to specify the range
wait...
In the second connection, I insert data to the table with values that do not match the where clause in the first connection
I would expect that the second connection won't be affected by the first one, but it finishes only after I commit (or rollback) the first connection's transaction.
What am I missing?
Here is my test code:
First create this table:
CREATE TABLE test
(
VALUE nvarchar(100)
)
Second, open a new query window in SQL Server Management Studio and execute the following:
BEGIN TRANSACTION;
SELECT *
FROM test WITH (SERIALIZABLE,ROWLOCK)
WHERE value = N'a';
Third, open another new query window and execute the following:
INSERT INTO test VALUES (N'b');
Notice that the second query doesn't end until the transaction in the first window ends.
You are missing an index on VALUE.
Without that SQL Server has nothing to take a key range lock on and will lock the whole table in order to lock the range.
Even when the index is added however you will still encounter blocking with the scenario in your question. The RangeS-S lock doesn't lock the specific range given in your query. Instead it locks the range between the keys either side of the selected range.
When there are no such keys on either side, the range lock extends to infinity. You would need to add a value between a and b (for example aa) to prevent this from happening in your test and the insert of b from being blocked.
See Bonus Appendix: Range Locks in this article for more about this.
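As a sketch, the two fixes applied to the test table above (the index name is arbitrary):
-- Give SQL Server a key to take range locks on.
CREATE INDEX IX_test_value ON test (VALUE);
-- Seed a key between 'a' and 'b' so the RangeS-S lock taken by the
-- first connection ends at 'aa' instead of extending to infinity.
INSERT INTO test VALUES (N'aa');
-- Re-run the two query windows: INSERT INTO test VALUES (N'b')
-- in the second window should now complete without waiting.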

SQL Server: Maximum number of records to return

SQL Server has SET ROWCOUNT to limit the number of records a statement returns or affects, but we need to set it each time before executing a SQL statement. I am looking for a way to set it once when my application starts so that it affects all queries executed via my application.
Update
I want to implement a trial-version strategy with a limited number of records! I don't want to mess with my actual code and wish the database could somehow restrict the number of records returned by any query! An alternative could be to pass a parameter to each stored procedure, but I really don't like that one and am looking for some other strategy!
You can parameterise TOP in your code for all commands
DECLARE @rows int
SET @rows = ISNULL(@rows, 2000000000)
SELECT TOP (@rows) ... FROM ..
You can have TOP defined globally for all your queries this way, say by wrapping or extending SQLCommand
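As a sketch, such a wrapper could centralize the limit by supplying @rows to every command (the helper name is hypothetical; requires System.Data.SqlClient):
static SqlCommand CreateLimitedCommand(SqlConnection conn, string sql, int? maxRows)
{
    // Callers write "SELECT TOP (ISNULL(@rows, 2000000000)) ..." so that a
    // NULL @rows means "effectively no limit", matching the pattern above.
    var cmd = new SqlCommand(sql, conn);
    cmd.Parameters.AddWithValue("@rows", (object)maxRows ?? DBNull.Value);
    return cmd;
}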
SET ROWCOUNT is not a safe option: it affects intermediate result sets and has other unpredictable behaviour. And it's deprecated: in a future release it will be ignored for INSERT/UPDATE/DELETE DML. See MSDN:
Using SET ROWCOUNT will not affect DELETE, INSERT, and UPDATE statements in the next release of SQL Server
You don't mention the application language you are using, so I can't give you the code, but write a wrapper function for the database connection. In this function, after you connect to the database, issue SET ROWCOUNT n; make n a parameter and add any logic you need to make it variable. I do a similar thing with CONTEXT_INFO, which I set in my connection function.
Remember, you'll need to connect to the database everywhere in your application (use search and replace) using this wrapper function, and as a result, you'll always have the rowcount set.
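A sketch of that wrapper in C# (function and parameter names are illustrative):
static SqlConnection OpenConnection(string connectionString, int maxRows)
{
    var conn = new SqlConnection(connectionString);
    conn.Open();
    // Applied once per connection; 0 means "no limit".
    using (var cmd = new SqlCommand("SET ROWCOUNT " + maxRows, conn))
    {
        cmd.ExecuteNonQuery();
    }
    return conn;
}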
On application start up, reset RowCount by executing the following command:
command.CommandText = "SET ROWCOUNT 0;";
When the specific mode is activated, set the RowCount to the needed value by executing the following command:
command.CommandText = "SET ROWCOUNT " + rowCount + ";";
When the specific mode is deactivated, reset RowCount again.
Refer to SET ROWCOUNT (Transact-SQL)

Sybase optimizer creates a cursor for a DELETE query which then fails

We are executing the following query using embedded SQL in C:
DELETE archive_table FROM archive_table arc, #arc_chunk loc WHERE arc.col = loc.col
Sybase's response is:
The DELETE WHERE CURRENT OF to the cursor 'C42' failed because the cursor is on a join.
The query is being constructed as a C string and then executed using EXECUTE IMMEDIATE in embedded SQL.
Is there a way to perform this DELETE without the Sybase optimizer creating a cursor (which fails) to execute it?
When using the delete target table in a FROM clause, don't put an alias on it:
DELETE archive_table
FROM archive_table, #arc_chunk loc
WHERE archive_table.col = loc.col
[EDIT]
Another possibility is to remove the need for a join and fold it all into the where clause.
DELETE archive_table
WHERE EXISTS ( SELECT 1 FROM #arc_chunk loc WHERE archive_table.col = loc.col )
I am assuming that 'col' is the unique key for the row.
