I am trying to delete from a table which has only around 39K records, but for some reason it is taking a long time (around 1.5 minutes) even to delete a single record. How can I improve the performance of my delete operation, and how can I ensure that log activity is not taking much time? Can I put the DELETE statement within a while loop, opening a transaction and committing it each time it successfully completes? Is any other effective method available?
[PrimaryKey] here has a "Clustered Index"
DECLARE @BatchCount INT;
SELECT @BatchCount = COUNT(1) FROM #DHDID;

DECLARE @Counter INT = 1;
WHILE (@Counter <= @BatchCount)
BEGIN
    BEGIN TRANSACTION;

    DECLARE @ID INT;
    SELECT @ID = DHDID FROM #DHDID WHERE ID = @Counter;
    DELETE FROM <MYTABLE> WHERE [PrimaryKey] = @ID;

    COMMIT TRANSACTION;
    SET @Counter = @Counter + 1;
END
Based on your answer, you should do a set-based delete via a join. Try something like this:
BEGIN TRANSACTION;
BEGIN TRY
    DELETE m
    FROM <MyTable> m
    INNER JOIN #DHDID d
        ON d.DHDID = m.[PrimaryKey];

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- an error occurred: undo the whole delete
    ROLLBACK TRANSACTION;
END CATCH
I would try creating an index on the #DHDID table:
CREATE NONCLUSTERED INDEX [idx] ON [#DHDID] ([ID] ASC) INCLUDE ([DHDID]);
I have the following query:
update largeTable
set largeTable_id = 'NA';
I would like to know the best practices for performing that kind of update on a 45M-record table. Should I consider a cascade update, or is that done automatically?
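For context: a cascade update is not automatic; it happens only if the foreign key itself was declared with a cascade action. A hypothetical example (childTable and the constraint name are made up):

-- Cascading updates apply only when the FK is created with ON UPDATE CASCADE
ALTER TABLE childTable
ADD CONSTRAINT FK_childTable_largeTable
    FOREIGN KEY (largeTable_id)
    REFERENCES largeTable (largeTable_id)
    ON UPDATE CASCADE;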
I have the query below as an example of performing the updates in separate batches to avoid tlog space issues:
DECLARE @i INT = 1;
WHILE (@i <= 10)
BEGIN
    UPDATE TOP (20000) largeTable
    SET largeTable_id = 'NA';

    SET @i = @i + 1;
END
So that's pretty much the idea; any comment or suggestion will be appreciated.
Thanks in advance :).
Adding a new idea:
-- T-SQL using the ROWCOUNT setting to control update size
SET ROWCOUNT 1000;

WHILE (1 = 1)
BEGIN
    BEGIN TRANSACTION;

    UPDATE tableB
    SET TableB_TableA_id = 'NA'
    WHERE TableB_TableA_id <> 'NA';   -- without this filter @@ROWCOUNT never reaches 0 and the loop never ends

    IF @@ROWCOUNT = 0
    BEGIN
        COMMIT TRANSACTION;
        BREAK;
    END

    COMMIT TRANSACTION;
END

SET ROWCOUNT 0;
The main goal is to perform the update in multiple batches, avoiding problems in the transaction log file, and to perform the cascade updates with no performance issues. Note that SET ROWCOUNT is deprecated for INSERT/UPDATE/DELETE statements; the TOP-based variant below is the recommended replacement:
DECLARE @AffectedRows INT, @BatchSize INT;
SET @BatchSize = 5000;
SET @AffectedRows = @BatchSize;   -- prime the loop so it runs at least once

WHILE (@AffectedRows = @BatchSize)
BEGIN
    UPDATE TOP (@BatchSize) tableB
    SET TableB_TableA_id = 'NA'
    WHERE TableB_TableA_id <> 'NA';   -- skip rows already updated so every batch makes progress

    SET @AffectedRows = @@ROWCOUNT;
END;
How to use BEGIN TRANSACTION with while loop in SQL Server?
This query never finishes, perhaps because it stops and looks for COMMIT TRANSACTION after inserting one row (when @cnt = 1), but I don't want to COMMIT TRANSACTION yet because I want to see the results before committing.
BEGIN TRANSACTION
DECLARE @cnt INT = 0;

WHILE @cnt <= 100
BEGIN
    DECLARE @offset INT = 1;

    INSERT INTO totalSales (col1, col2)
    SELECT 'Col1', ROW_NUMBER() OVER (ORDER BY col2) + @offset
    FROM sales;

    SET @cnt = @cnt + 1;
END;
So how can I check the results before committing inside the while loop?
You should create an outer BEGIN TRAN (general), and inside the while loop create an inner BEGIN TRAN (with a transaction name).
Inside the loop, if there are conditions that require rolling back only the current iteration, use a SAVE TRAN savepoint so you don't lose the previous work.
I've created an example that tests a while loop with conditional inserts and a rollback to a savepoint:
DECLARE @num INT;
SET @num = 0;

--DROP TABLE #test
CREATE TABLE #test (
    valor VARCHAR(100)
);

BEGIN TRAN;

WHILE @num <= 5
BEGIN
    BEGIN TRANSACTION tran_inner;

    INSERT INTO #test (valor) VALUES ('INSERT 1 INNER -> ' + CONVERT(VARCHAR(10), @num));

    SAVE TRANSACTION sv_inner;

    INSERT INTO #test (valor) VALUES ('INSERT 2 EVEN - SAVEPOINT -> ' + CONVERT(VARCHAR(10), @num));

    IF @num % 2 = 0
        COMMIT TRANSACTION sv_inner;
    ELSE
        ROLLBACK TRANSACTION sv_inner;

    INSERT INTO #test (valor) VALUES ('INSERT 3 INNER -> ' + CONVERT(VARCHAR(10), @num));

    SET @num = @num + 1;

    IF @@TRANCOUNT > 0
        COMMIT TRANSACTION tran_inner;
END

SELECT valor FROM #test;

IF @@TRANCOUNT > 0
    COMMIT TRAN;
Rows returned per iteration: INSERT 1, INSERT 2 (only when the iteration number is even), and INSERT 3.
In the same batch (within the same transaction) you can simply issue a SELECT command to see the updated content of the table. Changes will be persisted when the COMMIT TRANSACTION statement is executed or reverted on ROLLBACK.
CREATE TABLE test (id INT IDENTITY(1,1), x VARCHAR(32));
GO
BEGIN TRANSACTION;
INSERT INTO test (x) VALUES ('a');
INSERT INTO test (x) VALUES ('b');
SELECT * FROM test;
ROLLBACK TRANSACTION;
Example: http://sqlfiddle.com/#!6/e4910/2
Alternatively you can use the INSERT INTO .. OUTPUT construct to output the result of the INSERT statement.
Docs: https://learn.microsoft.com/en-us/sql/t-sql/queries/output-clause-transact-sql
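A minimal sketch against the test table defined above, echoing the inserted rows back to the client:

INSERT INTO test (x)
OUTPUT inserted.id, inserted.x   -- returns the rows produced by this INSERT
VALUES ('c');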
Outside the batch (using a second connection), you can use READ UNCOMMITTED isolation level to be able to read records not committed yet.
Docs: https://technet.microsoft.com/en-us/library/ms189122(v=sql.105).aspx
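For example, from a second connection while the transaction above is still open (keep in mind that such dirty reads can return data that is later rolled back):

-- second connection
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT * FROM test;   -- sees rows inserted by the still-open transaction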
If you are saying it never finishes, it sounds to me like you have some blocking going on, because that loop runs just fine.
https://www.mssqltips.com/sqlservertip/2429/how-to-identify-blocking-in-sql-server/
I HIGHLY recommend using Adam Machanic's sp_WhoIsActive for this as well: http://whoisactive.com
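For what it's worth, a minimal invocation (assuming the procedure has been installed on the instance):

-- List active sessions; @find_block_leaders flags sessions at the head of blocking chains
EXEC sp_WhoIsActive @find_block_leaders = 1;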
Is there a better way to DELETE 80 million+ rows from a table?
WHILE EXISTS (SELECT TOP 1 * FROM large_table)
BEGIN
    ;WITH LT AS
    (
        SELECT TOP 60000 *
        FROM large_table
    )
    DELETE FROM LT;
END
This does the job of keeping my transaction logs from becoming too large, but is there a way to make this process go faster? I've had my computer running this script for 5+ days now, and I haven't gotten very far, very fast.
You can simply truncate the table:
TRUNCATE TABLE large_table
GO
You can also delete using a WHERE condition. The time taken by a delete depends on various factors. You can reduce the cost by eliminating the SELECT query in the condition of the WHILE loop:
DECLARE @rows INT = 1;

WHILE (@rows > 0)
BEGIN
    DELETE TOP (1000)
    FROM large_table;

    SET @rows = @@ROWCOUNT;
END
Bulk deletion will create a lot of log records, and a rollback will happen if the log file fills up. You can do the delete in batches and ensure every transaction is committed:
DECLARE @IDCollection TABLE (ID INT);
DECLARE @Batch INT = 1000;
DECLARE @ROWCOUNT INT;

WHILE (1 = 1)
BEGIN
    BEGIN TRANSACTION;

    DELETE FROM @IDCollection;   -- clear the IDs collected in the previous batch

    INSERT INTO @IDCollection
    SELECT TOP (@Batch) ID
    FROM table
    ORDER BY id;

    DELETE FROM table
    WHERE id IN (SELECT ID FROM @IDCollection);

    SET @ROWCOUNT = @@ROWCOUNT;

    COMMIT TRANSACTION;   -- commit before testing the exit condition so no transaction is left open

    IF (@ROWCOUNT = 0)
        BREAK;
END
This is a continuation of my previous question:
sql update for dynamic row number
This time I have an updated requirement.
I have two tables, CraftTypes and EmployeeCraftTypes.
I need to update multiple rows in the CraftTypes table, and I was able to update them as per the answer provided by TheGameiswar.
Now there is a modification in the requirement.
In the table CraftTypes, there is a foreign key reference on the column CraftTypeKey to the table EmployeeCraftTypes.
If an entry for a CraftTypeKey exists in the EmployeeCraftTypes table, then that row should not be updated.
Also, the CraftTypeKeys whose rows were not updated must be returned, so the FK-restriction status of those rows can be reported.
This is the SQL query I am using:
CREATE TYPE [DBO].[DEPARTMENTTABLETYPE] AS TABLE
( DepartmentTypeKey SMALLINT, DepartmentTypeName VARCHAR(50), DepartmentTypeCode VARCHAR(10), DepartmentTypeDescription VARCHAR(128) );
GO

ALTER PROCEDURE [dbo].[usp_UpdateDepartmentType]
    @DEPARTMENTDETAILS [DBO].[DEPARTMENTTABLETYPE] READONLY
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @rowcount1 INT;

    BEGIN TRY
        BEGIN TRANSACTION;

        UPDATE D1
        SET D1.[DepartmentTypeName] = D2.DepartmentTypeName
          , D1.[DepartmentTypeCode] = D2.DepartmentTypeCode
          , D1.[DepartmentTypeDescription] = D2.DepartmentTypeDescription
        FROM [dbo].[DepartmentTypes] D1
        INNER JOIN @DEPARTMENTDETAILS D2
            ON D1.DepartmentTypeKey = D2.DepartmentTypeKey
        -- the subquery must select the key column, not the literal 1
        WHERE D2.[DepartmentTypeKey] NOT IN (SELECT [DepartmentTypeKey]
                                             FROM [dbo].[EmployeeDepartment]
                                             WHERE [DepartmentTypeKey] = D2.DepartmentTypeKey);

        SET @rowcount1 = @@ROWCOUNT;
        COMMIT;
    END TRY
    BEGIN CATCH
        SET @rowcount1 = 0;
        ROLLBACK TRAN;
    END CATCH

    IF @rowcount1 = 0
        SELECT -174;
    ELSE
        SELECT 100;
END
Please help, and thanks in advance.
OK, I think I figured out a way to do it this time. I am not sure this is the right way, but it is enough to meet the requirements.
I select the distinct rows that have an FK reference in the EmployeeCraftTypes table as a second select query.
Now I can get the rows that are not updated due to the FK constraint.
This is the SQL query I have used:
ALTER PROCEDURE [dbo].[usp_UpdateCraftType]
    @CRAFTDETAILS [DBO].[CRAFTTABLETYPE] READONLY
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @STATUSKEY TINYINT = (SELECT DBO.GETSTATUSKEY('ACTIVE'));
    DECLARE @ROWCOUNT1 INT;

    BEGIN TRY
        BEGIN TRANSACTION;

        UPDATE C1
        SET [C1].[CraftTypeName] = C2.CRAFTTYPENAME
          , [C1].[CRAFTTYPEDESCRIPTION] = C2.CRAFTTYPEDESCRIPTION
          , [C1].[StatusKey] = C2.[StatusKey]
        FROM [dbo].[CRAFTTYPES] C1
        INNER JOIN @CRAFTDETAILS C2
            ON C1.CRAFTTYPEKEY = C2.CRAFTTYPEKEY
        WHERE C2.[CRAFTTYPEKEY] NOT IN (SELECT EC.[CRAFTTYPEKEY]
                                        FROM [dbo].[EmployeeCrafts] EC
                                        WHERE EC.[CRAFTTYPEKEY] = C2.[CRAFTTYPEKEY]);

        SET @ROWCOUNT1 = @@ROWCOUNT;
        COMMIT;
    END TRY
    BEGIN CATCH
        SET @ROWCOUNT1 = 0;
        ROLLBACK TRAN;
    END CATCH

    IF @ROWCOUNT1 = 0
        SELECT -172;
    ELSE
    BEGIN
        SELECT 100;

        -- second result set: the keys that were skipped because they are already assigned
        SELECT DISTINCT EC.[CRAFTTYPEKEY], 'Value Already Assigned' AS Reason
        FROM [DBO].[EmployeeCrafts] EC
        JOIN @CRAFTDETAILS C3
            ON C3.[CRAFTTYPEKEY] = EC.[CRAFTTYPEKEY];
    END
END
Now, on the Web API side, I can check whether there was an update failure by checking the row count of the second result set.
If the row count is more than 0, an update error message can be generated.
Hope it will be helpful to someone.
I'm experiencing some problems that look a LOT like a transaction in a stored procedure has been rolled back, even though I'm fairly certain it was committed: the output variable isn't set until after the commit, and the user gets the value of the output variable (I know, because they print it out, and I also set up a log table where I record the value of the output variable).
In theory someone COULD manually delete and update the data so that it would look like a rollback, but it is extremely unlikely.
So I'm hoping someone can spot some kind of structural mistake in my stored procedure. Meet BOB:
CREATE PROCEDURE [dbo].[BOB] (@output_id INT OUTPUT, @output_msg VARCHAR(255) OUTPUT)
AS
BEGIN
    SET NOCOUNT ON

    DECLARE @id INT
    DECLARE @record_id INT
    SET @output_id = 1

    -- some preliminary if-statements that don't alter any data, but might do a RETURN

    SET XACT_ABORT ON
    BEGIN TRANSACTION
    BEGIN TRY
        --insert into table A
        SET @id = SCOPE_IDENTITY()

        --update table B

        DECLARE csr CURSOR LOCAL FOR
            SELECT [some stuff] and record_id
            FROM temp_table_that_is_not_actually_a_temporary_table

        OPEN csr
        FETCH NEXT FROM csr INTO [some variables], @record_id
        WHILE @@FETCH_STATUS = 0
        BEGIN
            --check type of item + if valid
            IF (something)
            BEGIN
                SET SOME VARIABLE
            END
            ELSE
            BEGIN
                ROLLBACK TRANSACTION
                SET @output_msg = 'item does not exist'
                SET @output_id = 0
                RETURN
            END

            --update table C
            --update table D
            --insert into table E
            --execute some other stored procedure (without transactions)

            IF (something)
            BEGIN
                --insert into table F
                --update table C again
            END

            DELETE FROM temp_table_that_is_not_actually_a_temporary_table WHERE record_id = @record_id

            FETCH NEXT FROM csr INTO [some variables], @record_id
        END
        CLOSE csr
        DEALLOCATE csr

        COMMIT TRANSACTION
        SET @output_msg = 'ok'
        SET @output_id = @id
    END TRY
    BEGIN CATCH
        ROLLBACK TRANSACTION
        SET @output_msg = 'transaction failed !'
        SET @output_id = 0

        INSERT INTO errors (record_time, sp_name, sp_msg, error_msg)
        VALUES (GETDATE(), 'BOB', @output_msg, ERROR_MESSAGE())
    END CATCH

    RETURN
END
I know my user gets an @output_id that is the SCOPE_IDENTITY(), and he also gets an @output_msg that says 'ok'. Is there ANY way he can get those outputs without the transaction getting committed?
Thank you.
The problem is that transactions do NOT support rolling back variables, because setting a variable is not a data change inside the database. A commit or rollback of a transaction ONLY makes a difference to database objects (tables, temp tables, etc.), NOT to variables (including table variables).
--EDIT
DECLARE @v1 INT = 0, @v2 INT = 0, @v3 INT = 0;

SET @v2 = 1;

BEGIN TRAN;
SET @v1 = 1;
COMMIT TRAN;

BEGIN TRAN;
SET @v3 = 1;
ROLLBACK TRAN;

SELECT @v1 AS v1, @v2 AS v2, @v3 AS v3;
The result is v1 = 1, v2 = 1, v3 = 1: @v3 keeps its value even though the transaction that set it was rolled back.
Personally I never use transactions in stored procedures, especially when they are used simultaneously by many people. I seriously avoid cursors as well.
I think I would go with copying the involved rows of temp_table_that_is_not_actually_a_temporary_table into a real temp table and then using one if-statement for all rows together. That's simple in T-SQL:
SELECT (data) INTO #temp FROM (normal_table) WHERE (conditions)
What's the point of checking each row, doing the job, and then rolling back the whole thing if, say, the last row doesn't meet the condition? Do the check for all of them at once, and do the job for all of them at once. That's what SQL is all about.
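As a rough set-based sketch (hedged: the items table, the item_id column, and the staging logic are hypothetical stand-ins for the parts elided from BOB):

-- Stage the candidate rows once
SELECT record_id, item_id
INTO #batch
FROM temp_table_that_is_not_actually_a_temporary_table;

-- Validate the whole set up front instead of row by row inside a cursor
IF EXISTS (SELECT 1
           FROM #batch b
           WHERE NOT EXISTS (SELECT 1 FROM items i WHERE i.item_id = b.item_id))
BEGIN
    SELECT 'item does not exist' AS error_msg;
    RETURN;
END

-- All rows are valid: the updates to tables C, D, and E can now be single
-- set-based statements joining to #batch inside one short transaction.
DELETE t
FROM temp_table_that_is_not_actually_a_temporary_table t
JOIN #batch b ON b.record_id = t.record_id;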