SQL: cancelling a query without reverting everything back - sql-server

I have been running many 'insert into' statements in one go with SQL Server, sometimes inside a while loop.
WHILE ...
BEGIN
    INSERT INTO ...
END
Usually it takes very long to finish. Sometimes I have to cancel the query before it finishes. When I do, it reverts everything it has inserted. Is there a way to cancel only the 'insert into' that is running now and keep what was inserted by the previous loop iterations?
Many thanks.

You can explicitly define where each transaction starts and ends; then only the 'current' transaction should be rolled back if you cancel halfway through:
WHILE ...
BEGIN
    BEGIN TRANSACTION;
    INSERT INTO ...
    COMMIT TRANSACTION;
END
Ensure that you understand the impact of this on your data's integrity before you apply it, e.g. checking, each time you start the batch, where you got to last time.
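For example, a minimal sketch of the pattern (the table names and the loop condition here are made up for illustration, not from the question):

-- Each iteration commits on its own, so cancelling the batch mid-run
-- only loses the INSERT that is currently executing.
DECLARE @BatchStart int = 1;

WHILE @BatchStart <= 100000
BEGIN
    BEGIN TRANSACTION;

    INSERT INTO dbo.TargetTable (Id, Payload)            -- hypothetical target table
    SELECT Id, Payload
    FROM dbo.SourceTable                                  -- hypothetical source table
    WHERE Id BETWEEN @BatchStart AND @BatchStart + 4999;

    COMMIT TRANSACTION;

    SET @BatchStart = @BatchStart + 5000;
END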

Related

Preventing blocking when using cursor over stored proc in transaction

I'm trying to work out how I can prevent blocking while running my code. There are a few things I could swap out, and I'm not sure which would be best. I'm including below some fake code to illustrate my current layout; please look past any syntax errors:
SELECT ID
INTO #ID
FROM Table

DECLARE @TargetID int = 0

BEGIN TRAN
DECLARE ID_Cursor CURSOR FOR
    SELECT ID
    FROM Table
OPEN ID_Cursor
FETCH NEXT FROM ID_Cursor INTO @TargetID
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC usp_ChangeID
        @ID = @TargetID,
        @NewValue = 100
    FETCH NEXT FROM ID_Cursor INTO @TargetID
END
CLOSE ID_Cursor
DEALLOCATE ID_Cursor
IF ((SELECT COUNT(*) FROM Table WHERE Value = 100) = 10000)
    COMMIT TRAN
ELSE
    ROLLBACK TRAN
The problem I'm encountering is that usp_ChangeID updates about 15 tables each run, and other spids that want to work with any of those tables have to wait until the entire process is done running. The stored proc itself runs in about a second, but I need to run it repeatedly. I'm thinking these locks come from the transaction rather than the cursor itself, though I'm not 100% sure. Ideally, my code would finish one run of the stored proc, let other users through to the tables, then run again once that other operation is complete. The rows I'm working with each time shouldn't be frequently used, so a row lock would be perfect, but the blocking I'm seeing implies that isn't happening.
This is running on production data, so I want to leave as little impact as possible while this runs. Performance hits are fine if it means less blocking. I don't want to break this into chunks, because I generally want this to be saved only if every record is updated as expected and rolled back in any other case. I can't modify the proc, go around the proc, or do this without a cursor involved either. I'm leaning towards breaking my initial large select into smaller chunks, but I'd rather not have to change parts manually.
Thanks in advance!

Why is my SQL transaction not executing a delete at the end?

I've got a simple SQL command that is supposed to read all of the records in from a table and then delete them all. Because there's a chance someone else could be writing to this table at the exact moment, I want to lock the table so that I'm sure that everything I delete is also everything I read.
BEGIN TRAN T1;
SELECT LotID FROM fsScannerIOInvalidCachedLots WITH (TABLOCK, HOLDLOCK);
DELETE FROM fsInvalidCachedLots;
COMMIT TRAN T1;
The really strange thing is, this USED to work. It worked for a while through testing, but now I guess something has changed, because it's reading everything in but not deleting any of the records. Consequently, SQL Server spins up high CPU usage, because the next time this runs it takes significantly longer to execute, which I assume has something to do with the lock.
Any idea what could be going on here? I've tried both TABLOCK and TABLOCKX.
Update: Oh yeah, something I forgot to mention: I can't query that table until after the next read the program does. What I mean is, after that statement is executed in code (and the command and connection are disposed of), if I try to query that table from within Management Studio it just hangs, which I assume means it's still locked. But then if I step through the calling program until I hit the next database connection, the moment after the first read the table is no longer locked.
Your SELECT retrieves data from a table named fsScannerIOInvalidCachedLots, but the delete is from a different table named fsInvalidCachedLots.
If you run this query with set xact_abort off, the transaction will not be aborted by the error from the invalid table name. In fact, select @@trancount will show you that there is an active transaction, and select xact_state() will return 1, meaning that it is active and no error has occurred.
On the other hand, with set xact_abort on, the transaction is aborted. select @@trancount will return 0, and select xact_state() will return 0 (no active transaction).
See @@TRANCOUNT and XACT_STATE() on MSDN for more information about them.
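To see from the same session whether the failed batch has left a transaction open (and to release its locks), a quick check along these lines works:

SELECT @@TRANCOUNT AS open_tran_count,   -- 1 if the transaction survived the error
       XACT_STATE() AS tran_state;       -- 1 = active/committable, 0 = none, -1 = doomed

IF @@TRANCOUNT > 0
    ROLLBACK TRAN;                       -- releases the TABLOCK/HOLDLOCK still being held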

Does Begin Tran require a new name each time?

I used the following code:
--begin tran redist1
/*--FIRST Update
update db..tablename set column=value
where complexthing = othercomplexthing
*/
/*--SECOND Update
update db..tablename set column=replace(column,'A','1')
*/
select * from db..tablename
--rollback tran redist1
--commit tran redist1
I highlighted "begin tran redist1", ran it, highlighted the FIRST update statement and ran it, then did the same with the select statement. It worked, so I highlighted "commit tran redist1".
Next I highlighted "begin tran redist1", ran it, highlighted the SECOND update statement and ran it, then did the same with the select statement. It did not work, so I highlighted "rollback tran redist1".
Next I highlighted "begin tran redist1", ran it, highlighted the SECOND update statement and ran it, then did the same with the select statement. It worked this time, so I highlighted "commit tran redist1".
I used several more update statements, repeating this process each time. I then opened an "edit" window to change values directly after my last "commit", but SQL Server kept timing out for that window alone, saying my "commit tran redist1" was the blocking transaction, despite having completed. I ran the commit again, and the edit window opened, showing the data as I had changed it.
This morning, I opened up the edit window again, and the table was somehow back to just after I ran the FIRST query + commit. All later queries + commits were lost. The record I edited manually in the edit window was still edited, however.
Note that each time, I used the name "redist1", ending with a commit or a rollback as appropriate for each transaction. My question is, was the reuse of the name the cause of my problem? Does the reuse of the name create a conflict of some type?
BEGIN TRAN doesn't require any name - the name is optional.
Transactions, however, can be nested and if you did not complete the first one, it will still be in effect even after the others have completed - if you roll it back, they will all roll back.
What probably happened is that you did not commit a transaction, so the following transactions were nested. When the transaction rolled back (possibly due to a timeout), they all rolled back.
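A short sketch of that behaviour (the counts in the comments are what @@TRANCOUNT reports at each step):

BEGIN TRAN redist1;        -- @@TRANCOUNT = 1 (outer transaction starts)
    -- ... first set of updates ...
    BEGIN TRAN redist1;    -- @@TRANCOUNT = 2 (no new transaction; the name is ignored here)
        -- ... second set of updates ...
    COMMIT TRAN redist1;   -- @@TRANCOUNT = 1, but nothing is persisted yet
ROLLBACK TRAN;             -- @@TRANCOUNT = 0: BOTH sets of updates are rolled back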

How can I ensure that nested transactions are committed independently of each other?

If I have a stored procedure that executes another stored procedure several times with different arguments, is it possible to have each of these calls commit independently of the others?
In other words, if the first two executions of the nested procedure succeed, but the third one fails, is it possible to preserve the results of the first two executions (and not roll them back)?
I have a stored procedure defined something like this in SQL Server 2000:
CREATE PROCEDURE toplevel_proc ..
AS
BEGIN
    ...
    while @row_count <= @max_rows
    begin
        select @parameter ... where rownum = @row_count
        exec nested_proc @parameter
        select @row_count = @row_count + 1
    end
END
First off, there is no such thing as a nested transaction in SQL Server.
However, you can use SAVEPOINTs, as per this example (too long to reproduce here, sorry) from fellow SO user Remus Rusanu.
Edit: AlexKuznetsov mentioned (he deleted his answer though) that this won't work if a transaction is doomed. This can happen with SET XACT_ABORT ON or some trigger errors.
From BOL:
ROLLBACK TRANSACTION without a savepoint_name or transaction_name rolls back to the beginning of the transaction. When nesting transactions, this same statement rolls back all inner transactions to the outermost BEGIN TRANSACTION statement.
I also found the following from another thread here:
Be aware that SQL Server transactions aren't really nested in the way you might think. Once an explicit transaction is started, a subsequent BEGIN TRAN increments @@TRANCOUNT while a COMMIT decrements the value. The entire outermost transaction is committed when a COMMIT results in a zero @@TRANCOUNT. But a ROLLBACK without a savepoint rolls back all work, including the outermost transaction.
If you need nested transaction behavior, you'll need to use SAVE TRANSACTION instead of BEGIN TRAN and use ROLLBACK TRAN [savepoint_name] instead of ROLLBACK TRAN.
So it would appear possible.
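A rough sketch of the savepoint approach, reusing the names from the question plus a hypothetical work_queue lookup table (TRY/CATCH needs SQL Server 2005 or later; on SQL Server 2000 you would test @@ERROR after the EXEC instead). As noted above, it will not help if the error dooms the transaction:

BEGIN TRAN;

WHILE @row_count <= @max_rows
BEGIN
    SELECT @parameter = parameter_value        -- hypothetical lookup table/column
    FROM dbo.work_queue
    WHERE rownum = @row_count;

    SAVE TRANSACTION before_call;              -- a savepoint, not a new transaction
    BEGIN TRY
        EXEC nested_proc @parameter;
    END TRY
    BEGIN CATCH
        ROLLBACK TRANSACTION before_call;      -- undo only this call's work
    END CATCH;

    SELECT @row_count = @row_count + 1;
END

COMMIT TRAN;   -- persists every call that was not rolled back to its savepoint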

SQL Server: chunking deletes still fills up transaction log; on fail all deletes are rolled back - why?

Here is my scenario: we have a database, let's call it Logging, with a table that holds records from Log4Net (via MSMQ). The db's recovery mode is set to Simple: we don't care about the transaction logs -- they can roll over.
We have a job that uses data from sp_spaceused to determine if we've met a certain size threshold. If the threshold is exceeded, we determine how many rows need to be deleted to bring the size down to x percent of that threshold. (As an aside, I'm using exec sp_spaceused MyLogTable, TRUE to get the number of rows and a rough approximation of their average size, although I'm not convinced that's the best way to go about it. But that's a different issue.)
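For reference, capturing that output so the row count can drive the loop looks roughly like this (a sketch; the temp-table columns just mirror the single result set sp_spaceused returns for one object):

CREATE TABLE #space
(
    name        nvarchar(256),
    [rows]      varchar(50),
    reserved    varchar(50),
    data        varchar(50),
    index_size  varchar(50),
    unused      varchar(50)
);

INSERT INTO #space
EXEC sp_spaceused 'MyLogTable', 'true';

SELECT CAST([rows] AS bigint)                        AS row_count,
       CAST(REPLACE(reserved, ' KB', '') AS bigint)  AS reserved_kb
FROM #space;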
I then try to chunk deletes (say, 5000 at a time) by looping a call to a sproc that basically does this:
DELETE TOP (@RowsToDelete) FROM [dbo].[MyLogTable]
until I've deleted what needs to be deleted.
Here's the issue: If I have a lot of rows to delete, the transaction log file fills up. I can watch it grow by running
dbcc sqlperf (logspace)
What puzzles me is that, when the job fails, ALL deleted rows get rolled back. In other words, it appears all the chunks are getting wrapped (somehow) in an implicit transaction.
I've tried explicitly setting implicit transactions off and wrapping each DELETE statement in a BEGIN and COMMIT TRAN, but to no avail: either all deleted chunks succeed, or none at all.
I know the simple answer is: make your log file big enough to handle the largest possible number of records you'd ever delete. But still, why is this being treated as a single transaction?
Sorry if I missed something easy, but I've looked at a lot of posts regarding log file growth, recovery modes, etc., and I can't figure this out.
One other thing: Once the job has failed, the log file stays up at around 95 - 100 percent full for a while before it drops back. However, if I run
checkpoint
dbcc dropcleanbuffers
it drops right back down to about 5 percent utilization.
TIA.
Generally speaking, the log file in the simple recovery model is truncated automatically at every checkpoint. You can invoke a checkpoint manually, as you do at the end of the loop, but you can also do it every iteration. The frequency of automatic checkpoints is determined by SQL Server based on the recovery interval setting.
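For instance, the per-iteration checkpoint could look like this (a sketch of the idea, not your exact job logic; table name as in the question):

DECLARE @RowsToDelete int = 5000,
        @Deleted      int = 1;

WHILE @Deleted > 0
BEGIN
    DELETE TOP (@RowsToDelete) FROM [dbo].[MyLogTable];
    SET @Deleted = @@ROWCOUNT;

    CHECKPOINT;   -- in SIMPLE recovery, lets the log space used by this chunk be reused
END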
As for the 'all deletes are rolled back' part, I don't see any explanation other than an external transaction. Can you post the entire code that cleans up the log? How do you invoke this code?
What is your setting of implicit transactions?
Hm... if the log grows and doesn't truncate automatically, it may also indicate that there is a transaction running outside of the loop. Can you select @@trancount before your loop, and perhaps with each iteration, to find out what's going on?
Well, I tried several things, but still all deletes get rolled back. I added printing of @@TRANCOUNT both before and after the delete, and I get zero as the count. Yet, on failure, all deletes are rolled back. I added SET IMPLICIT_TRANSACTIONS OFF in several places (including within my initial call from Query Analyzer), but that does not seem to help. This is the body of the stored procedure that is being called (I have set @RowsToDelete to 5000 and 8000):
SET NOCOUNT ON;

print N'@@TRANCOUNT PRIOR TO DELETE: ' + CAST(@@TRANCOUNT AS VARCHAR(20));

set implicit_transactions off;

WITH RemoveRows AS
(
    SELECT ROW_NUMBER() OVER(ORDER BY [Date] ASC) AS RowNum
    FROM [dbo].[Log4Net]
)
DELETE FROM RemoveRows
WHERE RowNum < @RowsToDelete + 1;

print N'@@TRANCOUNT AFTER DELETE: ' + CAST(@@TRANCOUNT AS VARCHAR(20));
It is called from this T-SQL:
WHILE @RowsDeleted < @RowsToDelete
BEGIN
    EXEC [dbo].[DeleteFromLog4Net] @RowsToDelete
    SET @RowsDeleted = @RowsDeleted + @RowsToDelete
    SET @loops = @loops + 1
    print 'Loop: ' + cast(@loops as varchar(10))
END
I have to admit I am puzzled. I am not a DB guru, but I thought I understood enough to figure this out....
