Preventing blocking when using cursor over stored proc in transaction - sql-server

I'm trying to work out how I can prevent blocking while running my code. There are a few things I could swap out, and I'm not sure which would be best. I'm including some fake code below to illustrate my current layout; please look past any syntax errors.
SELECT ID
INTO #ID
FROM Table

DECLARE @TargetID int = 0

BEGIN TRAN

DECLARE ID_Cursor CURSOR FOR
    SELECT ID
    FROM Table

OPEN ID_Cursor
FETCH NEXT FROM ID_Cursor INTO @TargetID

WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC usp_ChangeID
        @ID = @TargetID,
        @NewValue = 100

    FETCH NEXT FROM ID_Cursor INTO @TargetID
END

CLOSE ID_Cursor
DEALLOCATE ID_Cursor

IF ((SELECT COUNT(*) FROM Table WHERE Value = 100) = 10000)
    COMMIT TRAN
ELSE
    ROLLBACK TRAN
The problem I'm encountering is that the usp_ChangeID call in my code updates about 15 tables each run, and other spids that want to work with any of those tables have to wait until the entire process is done running. The stored proc itself runs in about a second, but I need to run it repeatedly. I'm thinking these locks are because of the transaction rather than the cursor itself, though I'm not 100% sure. Ideally, my code would finish one run of the stored proc, let other users through to the tables, then run again once that other operation is complete. The rows I'm working with each time shouldn't be frequently used, so a row lock would be perfect, but the blocking I'm seeing implies that isn't happening.
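To confirm whether it is the transaction holding everything, one thing that can be checked from a second session while the loop runs is sys.dm_tran_locks (a rough diagnostic sketch only; substitute the actual spid of the session running the batch above):

-- Summarize the locks currently held by the session running the cursor loop.
-- 57 is a placeholder; use the real session_id (spid) of that session.
SELECT resource_type,
       request_mode,      -- e.g. X, IX, U
       request_status,
       COUNT(*) AS lock_count
FROM sys.dm_tran_locks
WHERE request_session_id = 57
GROUP BY resource_type, request_mode, request_status
ORDER BY lock_count DESC;

If the counts keep climbing as the loop runs, or the mode jumps to a table-level X lock, the open transaction is accumulating or escalating locks, which would explain the blocking described above.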
This is running on production data so I want to have as little impact as possible while it runs. Performance hits are fine if they mean less blocking. I don't want to break this into chunks, because I generally want this to be saved only if every record is updated as expected and rolled back in any other case. I can't modify the proc, go around the proc, or do this without a cursor involved either. I'm leaning towards breaking my initial large select into smaller chunks, but I'd rather not have to change parts manually.
Thanks in advance!

Related

Stored procedure to constantly check table for records and process them

We have a process that is causing dirty read errors. So, I was thinking of redesigning it as a queue with a single process to go through the queue.
My idea was to create a table that various processes could insert into. Then, one process actually processes the records in the table. But we need real-time results from the processing method. I was thinking of scheduling it to run every couple of seconds, but that may not be fast enough: I don't want a user waiting several seconds for a result.
Essentially, I was thinking of using an infinite loop so that one stored procedure is constantly running, and that stored procedure creates transactions to perform updates.
It could be something like:
WHILE 1=1
BEGIN
    --Check for new records
    IF NewRecordsExist
    BEGIN
        --Mark new records as "in process"
        BEGIN TRANSACTION
        --Process records
        --If errors, Rollback
        --Otherwise Commit transaction
    END
END
But I don't want SQL Server to get overburdened by this one method. Basically, I want this running in the background all the time without eating up processor power, unless there is a lot for it to do, in which case I want it doing its work. Is there a better design pattern for this? Are stored procedures with infinite loops thread-safe? I use this pattern all the time in Windows Processes, but this particular task is not appropriate for a Windows Process.
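One detail that might help with the CPU concern, shown only as a minimal sketch (the dbo.WorkQueue table name is made up for illustration): throttle the loop with WAITFOR DELAY whenever there is nothing to process.

WHILE 1 = 1
BEGIN
    IF EXISTS (SELECT 1 FROM dbo.WorkQueue)   -- hypothetical table of pending records
    BEGIN
        BEGIN TRANSACTION
        -- mark records as "in process" and process them here;
        -- COMMIT on success, ROLLBACK on error
        COMMIT TRANSACTION
    END
    ELSE
    BEGIN
        WAITFOR DELAY '00:00:01'   -- nothing to do: sleep so the loop doesn't spin the CPU
    END
END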
Update: We have transaction locking set. That is not the issue. We are trying to set aside items that are reserved for orders. So, we have a stored procedure start a transaction, check what is available, and then update the table to mark what is reserved.
The problem is that when two different users attempt to reserve the same product at the same time, the first process checks availability, finds the product available, then starts to reserve it. But the second process cannot see what the first process is doing (we have transaction locking set), so it has no idea that another process is trying to reserve the items. It sees the items as still available and also goes to reserve them.
We considered application locking, but we are worried about waits, delays, etc. So another solution we came up with is one process that handles reservations in a queue. It is first come, first served. Only one process will ever be reading the queue at a time. Different processes can add to the queue, but we no longer need to worry about two processes trying to reserve the same product at the same time. Only one process will be doing the reservations. I was hoping to do this all in SQL, but that may not be possible.
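A minimal sketch of what that queue could look like in T-SQL (all table and column names here are invented for illustration): producers insert rows, and the single worker claims the oldest request atomically with DELETE ... OUTPUT.

-- Hypothetical queue table; any process can INSERT into it.
CREATE TABLE dbo.ReservationQueue
(
    QueueId    INT IDENTITY(1,1) PRIMARY KEY,
    ProductId  INT NOT NULL,
    Quantity   INT NOT NULL,
    EnqueuedAt DATETIME NOT NULL DEFAULT GETDATE()
)

-- The single worker pulls the oldest item off the queue in one atomic statement.
DECLARE @Claimed TABLE (QueueId INT, ProductId INT, Quantity INT)

;WITH NextItem AS
(
    SELECT TOP (1) QueueId, ProductId, Quantity
    FROM dbo.ReservationQueue WITH (ROWLOCK, UPDLOCK, READPAST)
    ORDER BY QueueId
)
DELETE FROM NextItem
OUTPUT deleted.QueueId, deleted.ProductId, deleted.Quantity INTO @Claimed

-- @Claimed is empty when the queue is empty; otherwise do the reservation here.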
Disclaimer: This may be an option, but the recommendations for using a Service Broker to serialize requests are likely the better solutions.
If you can't use a transaction but need your stored procedure to return an immediate result, there are ways to safely update a record in a single statement.
DECLARE @ProductId INT = 123
DECLARE @Quantity INT = 5

UPDATE Inventory
SET Available = Available - @Quantity
WHERE ProductId = @ProductId
  AND Available >= @Quantity

IF @@ROWCOUNT > 0
BEGIN
    -- Success
END
Under the covers, there is still a transaction occurring accompanied by a lock, but it just covers this one statement.
If you need to update multiple records (reserve multiple product IDs) in one statement, you can use the OUTPUT clause to capture which records were successfully updated, which you can then compare with the original request.
DECLARE @Request TABLE (ProductId INT, Quantity INT)
DECLARE @Result TABLE (ProductId INT, Quantity INT)

INSERT @Request VALUES (123, 5), (456, 1)

UPDATE I
SET Available = Available - R.Quantity
OUTPUT R.ProductId, R.Quantity INTO @Result
FROM @Request R
JOIN Inventory I
    ON I.ProductId = R.ProductId
    AND I.Available >= R.Quantity

IF (SELECT COUNT(*) FROM @Request) = (SELECT COUNT(*) FROM @Result)
BEGIN
    -- Success
END
ELSE
BEGIN
    -- Partial or fail. Need to undo those that may have been updated.
END
In any case, you need to thoroughly think through your error handling and undo scenarios.
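For that undo path, a sketch of one option (assuming the rows captured in @Result are exactly what was subtracted) is simply to add the captured quantities back:

-- Reverse the rows that did get updated by adding their quantities back.
UPDATE I
SET Available = Available + R.Quantity
FROM @Result R
JOIN Inventory I
    ON I.ProductId = R.ProductId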
If you store reservations separate from inventory and define "Available" as "Inventory.OnHand - SUM(Reserved.Quantity)", this approach is not an option.
(Comments as to why this is a bad approach will likely follow.)

SQL: cancelling a query without reverting everything back

I have been running many 'insert into' statements in one go in SQL Server, sometimes inside a while loop.
While
begin
insert into
end
Usually it takes very long to finish. Sometimes I have to cancel the query before it finishes. When I do, it reverts everything it has inserted. Is there a way to cancel only the 'insert into' that is running now and keep what was inserted in the previous loops?
Many thanks.
You can just explicitly define where to start and end your transaction; then it should only roll back the 'current' transaction if it is cancelled halfway through:
While
begin
BEGIN TRANSACTION;
insert into
COMMIT TRANSACTION;
end
Ensure that you understand the impact of this on your data's integrity before you apply it, e.g. by checking each time you start the batch where you got to last time.
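As a rough sketch of that (table, column, and variable names are invented here), each batch gets its own transaction and the loop remembers the last key it committed, so a cancel only loses the batch in flight:

DECLARE @BatchSize INT = 10000
DECLARE @LastId    INT = 0   -- highest source Id already copied; persist this if the script may be restarted
DECLARE @Copied    INT

WHILE 1 = 1
BEGIN
    BEGIN TRANSACTION

    INSERT INTO dbo.TargetTable (Id, Col1, Col2)
    SELECT TOP (@BatchSize) Id, Col1, Col2
    FROM dbo.SourceTable
    WHERE Id > @LastId
    ORDER BY Id

    SET @Copied = @@ROWCOUNT

    IF @Copied = 0
    BEGIN
        ROLLBACK TRANSACTION   -- nothing left to copy
        BREAK
    END

    SELECT @LastId = MAX(Id) FROM dbo.TargetTable

    COMMIT TRANSACTION         -- everything committed so far survives a later cancel
END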

How can I set a lock inside a stored procedure?

I've got a long-running stored procedure on a SQL Server database. I don't want it to run more often than once every ten minutes.
Once the stored procedure has run, I want to store the latest result in a LatestResult table, against a time, and have all calls to the procedure return that result for the next ten minutes.
That much is relatively simple, but we've found that, because the procedure checks and updates the LatestResult table, large userbases are getting a number of deadlocks when two users call the procedure at the same time.
In a client-side/threading situation, I would solve this with a lock: the first user locks the function, the second user hits the lock and waits, the first user finishes their procedure call and updates the LatestResult table, and the second user is then unlocked and picks up the result from the LatestResult table.
Is there any way to accomplish this kind of locking in SQL Server?
EDIT:
This is basically how the code looks without its error checking calls:
DECLARE @LastChecked AS DATETIME
DECLARE @LastResult AS NUMERIC(18,2)

SELECT TOP 1 @LastChecked = LastRunTime, @LastResult = LastResult FROM LastResult

DECLARE @ReturnValue AS NUMERIC(18,2)

IF DATEDIFF(n, @LastChecked, GETDATE()) >= 10 OR NOT @LastResult = 0
BEGIN
    SELECT @ReturnValue = ABS(ISNULL(SUM(ISNULL(Amount,0)),0)) FROM Transactions WHERE ISNULL(DeletedFlag,0) = 0 GROUP BY GroupID ORDER BY ABS(ISNULL(SUM(ISNULL(Amount,0)),0))
    UPDATE LastResult SET LastRunTime = GETDATE(), LastResult = @ReturnValue
    SELECT @ReturnValue
END
ELSE
BEGIN
    SELECT @LastResult
END
I'm not really sure what's going on with the grouping, but I've found a test system where execution time is coming in around 4 seconds.
I think there's some work scheduled to archive some of these records and boil them down to running totals, which will probably help things, given that there are several million rows in that four-second table...
This is a valid opportunity to use an Application Lock (see sp_getapplock and sp_releaseapplock) as it is a lock taken out on a concept that you define, not on any particular rows in any given table. The idea is that you create a transaction, then create this arbitrary lock that has an identifier, and other processes will wait to enter that piece of code until the lock is released. This works just like lock() at the app layer. The @Resource parameter is the label of the arbitrary "concept". In more complex situations, you can even concatenate a CustomerID or something in there for more granular locking control.
DECLARE @LastChecked DATETIME,
        @LastResult NUMERIC(18,2);
DECLARE @ReturnValue NUMERIC(18,2);

BEGIN TRANSACTION;

EXEC sp_getapplock @Resource = 'check_timing', @LockMode = 'Exclusive';

SELECT TOP 1 -- not sure if this helps the optimizer on a 1 row table, but seems ok
       @LastChecked = LastRunTime,
       @LastResult = LastResult
FROM LastResult;

IF (DATEDIFF(MINUTE, @LastChecked, GETDATE()) >= 10 OR @LastResult <> 0)
BEGIN
    SELECT @ReturnValue = ABS(ISNULL(SUM(ISNULL(Amount, 0)), 0))
    FROM Transactions
    WHERE DeletedFlag = 0
       OR DeletedFlag IS NULL;

    UPDATE LastResult
    SET LastRunTime = GETDATE(),
        LastResult = @ReturnValue;
END
ELSE
BEGIN
    SET @ReturnValue = @LastResult; -- This is always 0 here
END;

SELECT @ReturnValue AS [ReturnValue];

EXEC sp_releaseapplock @Resource = 'check_timing';

COMMIT TRANSACTION;
You need to manage errors / ROLLBACK yourself (as stated in the linked MSDN documentation) so put in the usual TRY / CATCH. But, this does allow you to manage the situation.
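A rough outline of that wrapper (a sketch only, not part of the original answer; it relies on the fact that the app lock is owned by the transaction, so rolling back also releases it):

BEGIN TRY
    BEGIN TRANSACTION;

    EXEC sp_getapplock @Resource = 'check_timing', @LockMode = 'Exclusive';

    -- ... the SELECT / IF / UPDATE logic from above goes here ...

    EXEC sp_releaseapplock @Resource = 'check_timing';
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;   -- also releases the transaction-owned app lock
    THROW;                      -- re-raise (SQL Server 2012+); use RAISERROR on older versions
END CATCH;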
If there are any concerns regarding contention on this process, there shouldn't be much as the lookup done right after locking the resource is a SELECT from a single-row table and then an IF statement that (ideally) just returns the last known value if the 10-minute timer hasn't elapsed. Hence, most calls should process rather quickly.
Please note: sp_getapplock / sp_releaseapplock should be used sparingly; Application Locks can definitely be very handy (such as in cases like this one) but they should only be used when absolutely necessary.

How can I make a stored procedure commit immediately?

EDIT This question is no longer valid, as the issue was something else. Please see my explanation below in my answer.
I'm not sure of the etiquette, so I'll leave this question in its current state.
I have a stored procedure that writes some data to a table.
I'm using Microsoft Practices Enterprise library for making my stored procedure call.
I invoke the stored procedure using a call to ExecuteNonQuery.
After ExecuteNonQuery returns I invoke a 3rd party library. It calls back to me on a separate thread in about 100 ms.
I then invoke another stored procedure to pull the data I had just written.
In about 99% of cases the data is returned. Once in a while it returns no rows( ie it can't find the data). If I put a conditional break point to detect this condition in the debugger and manually rerun the stored procedure it always returns my data.
This makes me believe the writing stored procedure is working just not committing when its called.
I'm fairly novice when it comes to SQL, so it's entirely possible that I'm doing something wrong. I would have thought that the writing stored procedure would block until its contents were committed to the db.
Writing Stored Procedure
ALTER PROCEDURE [dbo].[spWrite]
    @guid varchar(50),
    @data varchar(50)
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    -- see if this guid has already been added to the table
    DECLARE @foundGuid varchar(50);
    SELECT @foundGuid = [guid] from [dbo].[Details] where [guid] = @guid;

    IF @foundGuid IS NULL
        -- first time we've seen this guid
        INSERT INTO [dbo].[Details] ( [guid], data ) VALUES (@guid, @data)
    ELSE
        -- updating or verifying order
        UPDATE [dbo].[Details] SET data = @data WHERE [guid] = @guid
END
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
Reading Stored Procedure
ALTER PROCEDURE [dbo].[spRead]
    @guid varchar(50)
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    SELECT * from [dbo].[Details] where [guid] = @guid;
END
To actually block other transactions and manually commit,
maybe adding
BEGIN TRANSACTION
--place your
--transactions you wish to do here
--if everything was okay
COMMIT TRANSACTION
--or
--ROLLBACK TRANSACTION if something went wrong
could help you?
I’m not familiar with the data access tools you mention, but from your description I would guess that either the process does not wait for the stored procedure to complete execution before proceeding to the next steps, or ye olde “something else” is messing with the data in between your write and read calls.
One way to tell what's going on is to use SQL Profiler. Fire it up, monitor all possible query execution events on the database (including stored procedure and stored procedure statement start/stop events), watch the Text and Started/Ended columns, correlate this with the times you are seeing while tracing the application, and that should help you figure out what's going on there. (SQL Profiler can be complex to use, but there are many sources on the web that explain it, and it is well worth learning how to use it.)
I'll leave my answer below as there are comments on it...
OK, I'm ashamed to say I had simplified my question too much. What was actually happening is two things:
1) the inserting procedure is actually running on a separate machine( distributed system).
2) the inserting procedure actually inserts data into two tables without a transaction.
This means the query can run at the same time and find the tables in a state where one has been written to and the second table hasn't yet had its write committed.
A simple transaction fixes this as the reading query can handle either case of no write or full write but couldn't handle the case of one table written to and the other having a pending commit.
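A sketch of what that fix looks like on the writing side (the second table name is made up here, since only [Details] appears in the question):

BEGIN TRY
    BEGIN TRANSACTION;

    INSERT INTO [dbo].[Details]      ([guid], data) VALUES (@guid, @data);
    INSERT INTO [dbo].[OtherDetails] ([guid], data) VALUES (@guid, @data);  -- hypothetical second table

    COMMIT TRANSACTION;  -- both writes become visible to readers together
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    THROW;
END CATCH;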
Well it turns out that when I created the stored procedure the MSSQLadmin tool added a line to it by default:
SET NOCOUNT ON;
If I turn that to:
SET NOCOUNT OFF;
then my procedure actually commits to the database properly. Strange that this default would actually end up causing problems.
An easy way using TRY...CATCH (like it if useful):
BEGIN TRAN
BEGIN TRY
    INSERT INTO meals
    (
        ...
    )
    VALUES (...)

    COMMIT TRAN
END TRY
BEGIN CATCH
    ROLLBACK TRAN
    SET @resp = CONVERT(varchar, ERROR_LINE()) + ': ' + ERROR_MESSAGE()
END CATCH

SQL Server: chunking deletes still fills up transaction log; on fail all deletes are rolled back - why?

Here is my scenario: we have a database, let's call it Logging, with a table that holds records from Log4Net (via MSMQ). The db's recovery mode is set to Simple: we don't care about the transaction logs -- they can roll over.
We have a job that uses data from sp_spaceused to determine if we've met a certain size threshold. If the threshold is exceeded, we determine how many rows need to be deleted to bring the size down to x percent of that threshold. (As an aside, I'm using exec sp_spaceused MyLogTable, TRUE to get the number of rows and a rough approximation of their average size, although I'm not convinced that's the best way to go about it. But that's a different issue.)
I then try to chunk deletes (say, 5000 at a time) by looping a call to a sproc that basically does this:
DELETE TOP (@RowsToDelete) FROM [dbo].[MyLogTable]
until I've deleted what needs to be deleted.
Here's the issue: If I have a lot of rows to delete, the transaction log file fills up. I can watch it grow by running
dbcc sqlperf (logspace)
What puzzles me is that, when the job fails, ALL deleted rows get rolled back. In other words, it appears all the chunks are getting wrapped (somehow) in an implicit transaction.
I've tried expressly setting implicit transactions off, wrapping each DELETE statement in a BEGIN and COMMIT TRAN, but to no avail: either all deleted chunks succeed, or none at all.
I know the simple answer is, Make your log file big enough to handle the largest possible number of records you'd ever delete, but still, why is this being treated as a single transaction?
Sorry if I missed something easy, but I've looked at a lot of posts regarding log file growth, recovery modes, etc., and I can't figure this out.
One other thing: Once the job has failed, the log file stays up at around 95 - 100 percent full for a while before it drops back. However, if I run
checkpoint
dbcc dropcleanbuffers
it drops right back down to about 5 percent utilization.
TIA.
Generally speaking, the log file in the simple recovery model is truncated automatically at every checkpoint. You can invoke a checkpoint manually, as you do at the end of the loop, but you can also do it every iteration. The frequency of automatic checkpoints is determined by SQL Server based on the recovery interval setting.
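For illustration (this is a sketch, not the poster's actual job code, and the chunk/total numbers are placeholders), the chunked delete with a checkpoint after every iteration might look like this:

DECLARE @ChunkSize     INT = 5000;
DECLARE @TotalToDelete INT = 100000;   -- whatever the sp_spaceused calculation decided
DECLARE @Deleted       INT = 0;
DECLARE @ThisChunk     INT;

WHILE @Deleted < @TotalToDelete
BEGIN
    ;WITH RemoveRows AS
    (
        SELECT ROW_NUMBER() OVER (ORDER BY [Date] ASC) AS RowNum
        FROM [dbo].[Log4Net]
    )
    DELETE FROM RemoveRows
    WHERE RowNum <= @ChunkSize;

    SET @ThisChunk = @@ROWCOUNT;
    IF @ThisChunk = 0 BREAK;

    SET @Deleted = @Deleted + @ThisChunk;

    CHECKPOINT;   -- under simple recovery, lets the log space used by the last chunk be reused
END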
As far as the 'all deletes are rolled back' goes, I don't see any explanation other than an external transaction. Can you post the entire code that cleans up the log? How do you invoke this code?
What is your setting of implicit transactions?
Hm.. if the log grows and doesn't truncate automatically, it may also indicate that there is a transaction running outside of the loop. Can you select @@TRANCOUNT before your loop and perhaps with each iteration to find out what's going on?
Well, I tried several things, but still all deletes get rolled back. I added printing of @@TRANCOUNT both before and after the delete and I get zero as the count. Yet, on failure, all deletes are rolled back ... I added SET IMPLICIT_TRANSACTIONS OFF in several places (including within my initial call from Query Analyzer), but that does not seem to help. This is the body of the stored procedure that is being called (I have set @RowsToDelete to 5000 and 8000):
SET NOCOUNT ON;

print N'@@TRANCOUNT PRIOR TO DELETE: ' + CAST(@@TRANCOUNT AS VARCHAR(20));

set implicit_transactions off;

WITH RemoveRows AS
(
    SELECT ROW_NUMBER() OVER(ORDER BY [Date] ASC) AS RowNum
    FROM [dbo].[Log4Net]
)
DELETE FROM RemoveRows
WHERE RowNum < @RowsToDelete + 1

print N'@@TRANCOUNT AFTER DELETE: ' + CAST(@@TRANCOUNT AS VARCHAR(20));
It is called from this t-sql:
WHILE @RowsDeleted < @RowsToDelete
BEGIN
    EXEC [dbo].[DeleteFromLog4Net] @RowsToDelete
    SET @RowsDeleted = @RowsDeleted + @RowsToDelete
    SET @loops = @loops + 1
    print 'Loop: ' + cast(@loops as varchar(10))
END
I have to admit I am puzzled. I am not a DB guru, but I thought I understood enough to figure this out....
