How can I get the number of deleted records? - sql-server

I have a stored procedure which deletes certain records. I need to get the number of deleted records.
I tried to do it like this:
DELETE FROM OperationsV1.dbo.Files WHERE FileID = @FileID
SELECT @@ROWCOUNT AS DELETED;
But DELETED is shown as 0, though the appropriate records are deleted. I tried SET NOCOUNT OFF; without success. Could you please help?
Thanks.

That should work fine. The setting of NOCOUNT is irrelevant: it only affects the "n rows affected" message sent back to the client and has no effect on the workings of @@ROWCOUNT.
Do you have any statements between the two that you have shown? @@ROWCOUNT is reset after every statement, so you must retrieve the value immediately, with no intervening statements.
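For example, a minimal pattern (using the table and parameter names from the question, and assuming this runs inside the procedure) is to capture @@ROWCOUNT into a variable on the very next statement:
DECLARE @Deleted int;
DELETE FROM OperationsV1.dbo.Files WHERE FileID = @FileID;
SET @Deleted = @@ROWCOUNT;  -- must immediately follow the DELETE
SELECT @Deleted AS DELETED;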

I use this code snippet when debugging stored procedures to verify counts after operations, such as a delete:
DECLARE @Msg varchar(30)
...
SELECT @Msg = CAST(@@ROWCOUNT AS VARCHAR(10)) + ' rows affected'
RAISERROR (@Msg, 0, 1) WITH NOWAIT

START TRANSACTION;
SELECT @before := (SELECT count(*) FROM OperationsV1.dbo.Files);
DELETE FROM OperationsV1.dbo.Files WHERE FileID = @FileID;
SELECT @after := (SELECT count(*) FROM OperationsV1.dbo.Files);
COMMIT;
SELECT @before - @after AS DELETED;
I don't know about SQL server, but in MySQL SELECT count(*) FROM ... is an extremely cheap operation.
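For SQL Server specifically, a sketch of an equivalent that avoids the two extra counts (reusing the Files table and @FileID parameter from the question) is to capture the deleted keys with an OUTPUT clause and count those:
DECLARE @DeletedRows TABLE (FileID int);
DELETE FROM OperationsV1.dbo.Files
OUTPUT DELETED.FileID INTO @DeletedRows (FileID)
WHERE FileID = @FileID;
-- count of rows actually deleted by this statement
SELECT COUNT(*) AS DELETED FROM @DeletedRows;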

Related

openquery appears to be rolled back when done

I'm using the following query.
select * from OPENQUERY(EXITWEB,N'SET NOCOUNT ON;
declare @result table (id int);
insert into [system_files] ([is_public], [file_name], [file_size], [content_type], [disk_name], [updated_at], [created_at])
output inserted.id into @result(id)
values (N''1'',N''7349.jpg'',N''146921'',N''image/jpeg'',N''5799dcc8a1eb1413195192.jpg'',N''2016-07-28 10:22:00.000'',N''2016-07-28 10:22:00.000'')
declare @id int = (select top 1 id from @result)
select * from system_files where id = @id
insert into linkToExternal (id, id_ext) values(@id, 47)
--select @id
')
When I perform a select from within the query, it works just fine.
But when I go to check my database when the call has finished, the record is no longer there.
So I'm suspecting a transaction is rolled back. My question is: why? And what can I do to prevent the transaction from being rolled back, if that's the case?
Well, as always, after days of struggling and after posting a question on Stack Overflow, I find the solution: http://www.sqlservercentral.com/Forums/Topic1128997-391-1.aspx#bm1288825
I was having the same problem as you and almost gave up on it but have
finally found an answer to the problem. Reading an article about
sharing data between stored procedures I discovered that OPENQUERY
issues an Implicit Transaction and that it was Rolling back my insert.
So I had to add an explicit Commit to my stored procedures; in addition I discovered
that if I use it in a query that has a Union it has to be Committed twice. Since I'm
doing my insert inside a BEGIN TRY I can always just commit twice and not worry about
whether it is being used in a UNION. I'm returning different values if there is an
error but that was just a part of my debugging.
SELECT TOP 5 *
FROM mm
JOIN OPENQUERY([LOCALSERVER], 'EXEC cms60.dbo.sp_RecordReportLastRun ''LPS'', ''Test''') RptStats ON 1=1
ALTER PROCEDURE [dbo].[sp_RecordReportLastRun]
-- Add the parameters for the stored procedure here
@LibraryName varchar(50),
@ReportName varchar(50)
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from interfering with SELECT statements.
SET NOCOUNT ON;
-- Insert statements for procedure here
BEGIN TRY
INSERT INTO cms60.dbo.ReportStatistics (LibraryName, ReportName, RunDate) VALUES (@LibraryName, @ReportName, GETDATE())
--
COMMIT; --Needed because OPENQUERY starts an Implicit Transaction but doesn't commit it.
COMMIT; --Need second Commit when used in a UNION and although it throws an error when not used in a UNION doesn't cause a problem.
END TRY
BEGIN CATCH
SELECT 2 Test
END CATCH
SELECT 1 Test
END
In my case, adding a ;COMMIT; after the inserts solved it, and made sure it got written into the database.
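To make the placement concrete, here is a sketch of the tail of the pass-through text from the question with the commit added (only the last lines are shown; everything else is unchanged):
insert into linkToExternal (id, id_ext) values(@id, 47)
;COMMIT;
--select @id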

How to prevent multi threaded application to read this same Sql Server record twice

I am working on a system that uses multiple threads to read, process and then update database records. Threads run in parallel and try to pick records by calling a SQL Server stored procedure.
They call this stored procedure multiple times per second, looking for unprocessed records, and sometimes pick up the same record more than once.
I try to prevent this happening this way:
UPDATE dbo.GameData
SET Exported = @Now,
ExportExpires = @Expire,
ExportSession = @ExportSession
OUTPUT Inserted.ID INTO #ExportedIDs
WHERE ID IN ( SELECT TOP(@ArraySize) GD.ID
FROM dbo.GameData GD
WHERE GD.Exported IS NULL
ORDER BY GD.ID ASC)
The idea here is to mark a record as exported first, using an UPDATE with OUTPUT (remembering the record ID), so no other thread can pick it up again. Once the record is marked as exported, I can do some extra calculations and pass the data to the external system, hoping that no other thread will pick the same record again in the meantime. Hence the UPDATE, which is meant to secure the record first.
Unfortunately it doesn't seem to be working, and the application sometimes picks the same record twice anyway.
How to prevent it?
Kind regards
Mariusz
I think you should be able to do this atomically using a common table expression. (I'm not 100% certain about this, and I haven't tested, so you'll need to verify that it works for you in your situation.)
;WITH cte AS
(
SELECT TOP(@ArrayCount)
ID, Exported, ExportExpires, ExportSession
FROM dbo.GameData WITH (READPAST)
WHERE Exported IS NULL
ORDER BY ID
)
UPDATE cte
SET Exported = @Now,
ExportExpires = @Expire,
ExportSession = @ExportSession
OUTPUT INSERTED.ID INTO #ExportedIDs
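A minimal sketch of how this might be wrapped by the caller, assuming #ExportedIDs is a temp table that each session creates up front (the column name is an assumption):
CREATE TABLE #ExportedIDs (ID int PRIMARY KEY);
-- run the CTE UPDATE above, then process only the rows this thread claimed:
SELECT GD.*
FROM dbo.GameData GD
JOIN #ExportedIDs E ON E.ID = GD.ID;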
I have a similar set up and I use sp_getapplock. My application runs many threads and they call a stored procedure to get the ID of the element that has to be processed. sp_getapplock guarantees that the same ID would not be chosen by two different threads.
I have a MyTable with a list of IDs that my application checks in an infinite loop using many threads. For each ID there are two datetime columns, LastCheckStarted and LastCheckCompleted, which are used to determine which ID to pick. The stored procedure picks the ID that hasn't been checked for the longest period. There is also a hard-coded period of 20 minutes - the same ID can't be checked more often than every 20 minutes.
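As a rough sketch, the table described above might look like this (the column types and defaults are assumptions based on the description; the real schema may differ):
CREATE TABLE dbo.MyTable
(
ID int NOT NULL PRIMARY KEY,
LastCheckStarted datetime NOT NULL DEFAULT ('1900-01-01'),   -- old default so new rows are eligible immediately
LastCheckCompleted datetime NOT NULL DEFAULT ('1900-01-01')
);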
CREATE PROCEDURE [dbo].[GetNextIDToCheck]
-- Add the parameters for the stored procedure here
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
BEGIN TRANSACTION;
BEGIN TRY
DECLARE @VarID int = NULL;
DECLARE @VarLockResult int;
EXEC @VarLockResult = sp_getapplock
@Resource = 'SomeUniqueName_app_lock',
@LockMode = 'Exclusive',
@LockOwner = 'Transaction',
@LockTimeout = 60000,
@DbPrincipal = 'public';
IF @VarLockResult >= 0
BEGIN
-- Acquired the lock
-- Find ID that wasn't checked for the longest period
SELECT TOP 1
@VarID = ID
FROM
dbo.MyTable
WHERE
LastCheckStarted <= LastCheckCompleted
-- this ID is not being checked right now
AND LastCheckCompleted < DATEADD(minute, -20, GETDATE())
-- last check was done more than 20 minutes ago
ORDER BY LastCheckCompleted;
-- Start checking
UPDATE dbo.MyTable
SET LastCheckStarted = GETDATE()
WHERE ID = @VarID;
-- There is no need to explicitly verify if we found anything.
-- If @VarID is null, no rows will be updated
END;
-- Return found ID, or no rows if nothing was found,
-- or failed to acquire the lock
SELECT
@VarID AS ID
WHERE
@VarID IS NOT NULL
;
COMMIT TRANSACTION;
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION;
END CATCH;
END
The second procedure is called by an application when it finishes checking the found ID.
CREATE PROCEDURE [dbo].[SetCheckComplete]
-- Add the parameters for the stored procedure here
@ParamID int
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
BEGIN TRANSACTION;
BEGIN TRY
DECLARE @VarLockResult int;
EXEC @VarLockResult = sp_getapplock
@Resource = 'SomeUniqueName_app_lock',
@LockMode = 'Exclusive',
@LockOwner = 'Transaction',
@LockTimeout = 60000,
@DbPrincipal = 'public';
IF @VarLockResult >= 0
BEGIN
-- Acquired the lock
-- Completed checking the given ID
UPDATE dbo.MyTable
SET LastCheckCompleted = GETDATE()
WHERE ID = @ParamID;
END;
COMMIT TRANSACTION;
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION;
END CATCH;
END
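A brief sketch of the calling sequence from a worker thread (the literal ID value below is just a placeholder for whatever the first call returned; in the application this is simply two commands executed one after the other):
-- 1. Claim the next ID; returns at most one row, or nothing if no work is available
EXEC dbo.GetNextIDToCheck;
-- 2. After processing the returned ID in the application, mark it complete
EXEC dbo.SetCheckComplete @ParamID = 42;  -- 42 is a placeholder for the ID from step 1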
It does not work because multiple transactions might first execute the IN clause and find the same set of rows, then update multiple times and overwrite each other.
LukeH's answer is best, accept it.
You can also fix it by adding AND Exported IS NULL to cancel double updates.
Or, make this SERIALIZABLE. This will lead to some blocking and deadlocking. This can safely be handled by timeouts and retry in case of deadlock. SERIALIZABLE is always safe for all workloads but it might block/deadlock more often.
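As a sketch, the extra predicate suggested above goes on the outer UPDATE; the rest of the statement is the one from the question:
UPDATE dbo.GameData
SET Exported = @Now,
ExportExpires = @Expire,
ExportSession = @ExportSession
OUTPUT Inserted.ID INTO #ExportedIDs
WHERE Exported IS NULL  -- re-check at update time so two sessions cannot both claim the row
AND ID IN ( SELECT TOP(@ArraySize) GD.ID
FROM dbo.GameData GD
WHERE GD.Exported IS NULL
ORDER BY GD.ID ASC)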

Deleting 1 million rows in SQL Server

I am working on a client's database and there are about 1 million rows that need to be deleted due to a bug in the software. Is there an efficient way to delete them besides:
DELETE FROM table_1 where condition1 = 'value' ?
Here is a structure for a batched delete as suggested above. Do not try 1M at once...
The size of the batch and the waitfor delay are obviously quite variable, and would depend on your server's capabilities, as well as your need to mitigate contention. You may need to manually delete some rows, measuring how long they take, and adjust your batch size to something your server can handle. As mentioned above, anything over 5000 rows can trigger lock escalation to a table lock (which I was not aware of).
This would be best done after hours... but 1M rows is really not a lot for SQL to handle. If you watch your messages in SSMS, it may take a while for the print output to show, but it will appear after several batches; just be aware it won't update in real time.
Edit: Added a stop time @MAXRUNTIME & @BSTOPATMAXTIME. If you set @BSTOPATMAXTIME to 1, the script will stop on its own at the desired time, say 8:00AM. This way you can schedule it nightly to start at, say, midnight, and it will stop before production at 8AM.
Edit: Answer is pretty popular, so I have added the RAISERROR in lieu of PRINT per comments.
DECLARE @BATCHSIZE INT, @WAITFORVAL VARCHAR(8), @ITERATION INT, @TOTALROWS INT, @MAXRUNTIME VARCHAR(8), @BSTOPATMAXTIME BIT, @MSG VARCHAR(500)
SET DEADLOCK_PRIORITY LOW;
SET @BATCHSIZE = 4000
SET @WAITFORVAL = '00:00:10'
SET @MAXRUNTIME = '08:00:00' -- 8AM
SET @BSTOPATMAXTIME = 1 -- ENFORCE 8AM STOP TIME
SET @ITERATION = 0 -- LEAVE THIS
SET @TOTALROWS = 0 -- LEAVE THIS
WHILE @BATCHSIZE>0
BEGIN
-- IF @BSTOPATMAXTIME = 1, THEN WE'LL STOP THE WHOLE JOB AT A SET TIME...
IF CONVERT(VARCHAR(8),GETDATE(),108) >= @MAXRUNTIME AND @BSTOPATMAXTIME=1
BEGIN
RETURN
END
DELETE TOP(@BATCHSIZE)
FROM SOMETABLE
WHERE 1=2 -- PLACEHOLDER CONDITION: REPLACE WITH YOUR ACTUAL DELETE CONDITION
SET @BATCHSIZE=@@ROWCOUNT
SET @ITERATION=@ITERATION+1
SET @TOTALROWS=@TOTALROWS+@BATCHSIZE
SET @MSG = 'Iteration: ' + CAST(@ITERATION AS VARCHAR) + ' Total deletes:' + CAST(@TOTALROWS AS VARCHAR)
RAISERROR (@MSG, 0, 1) WITH NOWAIT
WAITFOR DELAY @WAITFORVAL
END
BEGIN TRANSACTION
DoAgain:
DELETE TOP (1000)
FROM <YourTable>
IF @@ROWCOUNT > 0
GOTO DoAgain
COMMIT TRANSACTION
Maybe this solution from Uri Dimant will help:
WHILE 1 = 1
BEGIN
DELETE TOP(2000)
FROM Foo
WHERE <predicate>;
IF @@ROWCOUNT < 2000 BREAK;
END
(Link: https://social.msdn.microsoft.com/Forums/sqlserver/en-US/b5225ca7-f16a-4b80-b64f-3576c6aa4d1f/how-to-quickly-delete-millions-of-rows?forum=transactsql)
Here is something I have used:
If the bad data is mixed in with the good-
SELECT columns
INTO #table
FROM old_table
WHERE <statement to exclude bad rows>

TRUNCATE TABLE old_table

INSERT INTO old_table
SELECT columns FROM #table
Not sure how good this would be, but what if you do something like below (provided table_1 is a stand-alone table, i.e., not referenced by any other table):
create a duplicate of table_1, e.g. table_1_dup
insert into table_1_dup select * from table_1 where condition1 <> 'value';
drop table table_1;
EXEC sp_rename 'table_1_dup', 'table_1';
If you cannot afford to get the database out of production while repairing, do it in small batches. See also: How to efficiently delete rows while NOT using Truncate Table in a 500,000+ rows table
If you are in a hurry and need the fastest way possible:
take the database out of production
drop all non-clustered indexes and triggers (or disable and later rebuild/re-enable them; see the sketch after this list)
delete the records (or if the majority of records is bad, copy+drop+rename the table)
(if applicable) fix the inconsistencies caused by the fact that you dropped triggers
re-create the indexes and triggers
bring the database back in production
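For the index/trigger step, a rough sketch using disable/rebuild instead of scripting out drops (the index name is a placeholder; repeat the ALTER INDEX lines for each non-clustered index):
-- before the mass delete
ALTER INDEX IX_table_1_somecolumn ON dbo.table_1 DISABLE;
DISABLE TRIGGER ALL ON dbo.table_1;
-- ... run the delete here ...
-- after the mass delete
ALTER INDEX IX_table_1_somecolumn ON dbo.table_1 REBUILD;
ENABLE TRIGGER ALL ON dbo.table_1;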

Stored Procedure Does Not Fire Last Command

On our SQL Server (Version 10.0.1600), I have a stored procedure that I wrote.
It is not throwing any errors, and it is returning the correct values after making the insert in the database.
However, the last command spSendEventNotificationEmail (which sends out email notifications) is not being run.
I can run the spSendEventNotificationEmail script manually using the same data, and the notifications show up, so I know it works.
Is there something wrong with how I call it in my stored procedure?
[dbo].[spUpdateRequest](@packetID int, @statusID int output, @empID int, @mtf nVarChar(50)) AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
DECLARE @id int
SET @id=-1
-- Insert statements for procedure here
SELECT A.ID, PacketID, StatusID
INTO #act FROM Action A JOIN Request R ON (R.ID=A.RequestID)
WHERE (PacketID=@packetID) AND (StatusID=@statusID)
IF ((SELECT COUNT(ID) FROM #act)=0) BEGIN -- this statusID has not been entered. Continue
SELECT ID, MTF
INTO #req FROM Request
WHERE PacketID=@packetID
WHILE (0 < (SELECT COUNT(ID) FROM #req)) BEGIN
SELECT TOP 1 @id=ID FROM #req
INSERT INTO Action (RequestID, StatusID, EmpID, DateStamp)
VALUES (@id, @statusID, @empID, GETDATE())
IF ((@mtf IS NOT NULL) AND (0 < LEN(RTRIM(@mtf)))) BEGIN
UPDATE Request SET MTF=@mtf WHERE ID=@id
END
DELETE #req WHERE ID=@id
END
DROP TABLE #req
SELECT @id=@@IDENTITY, @statusID=StatusID FROM Action
SELECT TOP 1 @statusID=ID FROM Status
WHERE (@statusID<ID) AND (-1 < Sequence)
EXEC spSendEventNotificationEmail @packetID, @statusID, 'http:\\cpweb:8100\NextStep.aspx'
END ELSE BEGIN
SET @statusID = -1
END
DROP TABLE #act
END
Idea of how the data tables are connected:
From your comments I gather you do mainly C# development. A basic test is to make sure the sproc is called with the exact same arguments you expect:
PRINT '@packetID: ' + CAST(@packetID AS varchar(12))
PRINT '@statusID: ' + CAST(@statusID AS varchar(12))
EXEC spSendEventNotificationEmail @packetID, @statusID, 'http:\\cpweb:8100\NextStep.aspx'
This way you 1) know that the EXEC statement is reached and 2) see the exact values.
If this all works, then a very good candidate is that you have permission to run the sproc and your (C#?) code that calls it doesn't. I would expect an error to be thrown, though.
A quick test to see if the EXEC is executed fine is to do an insert in a dummy table after it.
Update 1
I suggested adding PRINT statements, but indeed, as you say, you cannot (easily) catch them from C#. What you could do is insert the two variables into a log table that you create for this purpose. This way you know the exact values that flow from the C# execution.
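A minimal sketch of that approach (the table name and columns are assumptions):
CREATE TABLE dbo.SprocDebugLog
(
LoggedAt datetime NOT NULL DEFAULT (GETDATE()),
PacketID int NULL,
StatusID int NULL
);
-- inside spUpdateRequest, just before the EXEC:
INSERT INTO dbo.SprocDebugLog (PacketID, StatusID) VALUES (@packetID, @statusID);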
As to why it now works when you add permissions, I can't give you a ready answer. SQL security is not transparent to me either, but it's good to research a bit further yourself. Do you have to add both guest and public?
It would also help to see what's going on inside spSendEventNotificationEmail. Chances are good that sproc is using a resource where it didn't have permission before. This could be an object like a table or maybe another sproc. Security is heavily dependent on context/settings and not an easy problem to tackle on a Q/A site like SO.

How to delete records in SQL 2005 keeping transaction logs in check

I am running the following stored procedure to delete a large number of records. I understand that the DELETE statement writes to the transaction log and deleting many rows will make the log grow.
I have looked into other options, such as creating a table with the records to keep and then truncating the source, but this method will not work for me.
How can I make my stored procedure below more efficient while making sure that I keep the transaction log from growing unnecessarily?
CREATE PROCEDURE [dbo].[ClearLog]
(
@Age int = 30
)
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
-- DELETE ERRORLOG
WHILE EXISTS ( SELECT [LogId] FROM [dbo].[Error_Log] WHERE DATEDIFF( dd, [TimeStamp], GETDATE() ) > @Age )
BEGIN
SET ROWCOUNT 10000
DELETE [dbo].[Error_Log] WHERE DATEDIFF( dd, [TimeStamp], GETDATE() ) > @Age
WAITFOR DELAY '00:00:01'
SET ROWCOUNT 0
END
END
Here is how I would do it:
CREATE PROCEDURE [dbo].[ClearLog] (
@Age int = 30)
AS
BEGIN
SET NOCOUNT ON;
DECLARE @d DATETIME
, @batch INT;
SET @batch = 10000;
SET @d = DATEADD( dd, -@Age, GETDATE() )
WHILE (1=1)
BEGIN
DELETE TOP (@batch) [dbo].[Error_Log]
WHERE [Timestamp] < @d;
IF (0 = @@ROWCOUNT)
BREAK
END
END
Make the Timestamp comparison SARGable
Separate the GETDATE() at the start of batch to produce a consistent run (otherwise it can block in an infinite loop as new records 'age' as the old ones are being deleted).
use TOP instead of SET ROWCOUNT (deprecated: Using SET ROWCOUNT will not affect DELETE, INSERT, and UPDATE statements in the next release of SQL Server.)
check @@ROWCOUNT to break the loop instead of redundant SELECT
Assuming you have the option of rebuilding the error log table on a partition scheme one option would be to partition the table on date and swap out the partitions. Do a google search for 'alter table switch partition' to dig a bit further.
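A rough sketch of the switch itself, assuming Error_Log is partitioned by date and an empty staging table Error_Log_Archive with an identical structure exists on the same filegroup (all names here are assumptions, and partitioning on SQL 2005 requires Enterprise Edition):
ALTER TABLE dbo.Error_Log
SWITCH PARTITION 1 TO dbo.Error_Log_Archive;   -- metadata-only move of the oldest partition
TRUNCATE TABLE dbo.Error_Log_Archive;          -- the old rows never hit the delete path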
How about you run it more often and delete fewer rows each time? Run this every 30 minutes:
CREATE PROCEDURE [dbo].[ClearLog]
(
@Age int = 30
)
AS
BEGIN
SET NOCOUNT ON;
SET ROWCOUNT 10000 --I assume you are on an old version of SQL Server and can't use TOP
DELETE dbo.Error_Log WHERE Timestamp < GETDATE()-@Age
WAITFOR DELAY '00:00:01' --why???
SET ROWCOUNT 0
END
This way the date comparison does not truncate the time portion, and you will only delete 30 minutes' worth of data each time.
If your database is in FULL recovery mode, the only way to minimize the impact of your delete statements is to "space them out" -- only delete so many during a "transaction interval". For example, if you do t-log backups every hour, only delete, say, 20,000 rows per hour. That may not drop all you need all at once, but will things even out after 24 hours, or after a week?
If your database is in SIMPLE or BULK_LOGGED mode, breaking the deletes into chunks should do it. But, since you're already doing that, I'd have to guess your database is in FULL recover mode. (That, or the connection calling the procedure may be part of a transaction.)
A solution I have used in the past was to temporarily set the recovery model to "Bulk Logged", then back to "Full" at the end of the stored procedure:
DECLARE @dbName NVARCHAR(128);
SELECT @dbName = DB_NAME();
EXEC('ALTER DATABASE ' + @dbName + ' SET RECOVERY BULK_LOGGED')
WHILE EXISTS (...)
BEGIN
-- Delete a batch of rows, then WAITFOR here
END
EXEC('ALTER DATABASE ' + @dbName + ' SET RECOVERY FULL')
This will significantly reduce the transaction log consumption for large batches.
I don't like that it sets the recovery model for the whole database (not just for this session), but it's the best solution I could find.
