Update the same table in the trigger - SQL Server

I have a table which calculates the total deduction and the remaining deduction carried over from the previous month.
Any update to the remaining deduction of an old month should affect the following months in order, so I've created an AFTER UPDATE trigger for this table, but I've noticed that the trigger is not invoked by the UPDATE for the next month.
Here is the trigger code:
CREATE TRIGGER [dbo].[tgUpdateRemainingBalance]
ON [dbo].[GAT_MONTHLY_DEDUCTION]
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @OldTotalDeduction INT
    DECLARE @NewTotalDeduction INT
    DECLARE @OldRemainingToNextMonth INT
    DECLARE @NewRemainingToNextMonth INT

    SELECT @OldRemainingToNextMonth = RemainingToNextMonth,
           @OldTotalDeduction = TotalDeduction
    FROM DELETED

    SELECT @NewRemainingToNextMonth = RemainingToNextMonth,
           @NewTotalDeduction = TotalDeduction
    FROM INSERTED

    DECLARE @EmpNo BIGINT
    DECLARE @RegDate DATETIME

    SELECT @EmpNo = PersonnelNo, @RegDate = RegDate FROM INSERTED

    IF (@OldRemainingToNextMonth <> @NewRemainingToNextMonth) OR (@OldTotalDeduction <> @NewTotalDeduction)
    BEGIN
        UPDATE GAT_MONTHLY_DEDUCTION
        SET TotalDeduction = TotalDeduction - @OldRemainingToNextMonth + @NewRemainingToNextMonth,
            Visited = 1
        WHERE RegDate = DATEADD(month, 1, @RegDate) AND PersonnelNo = @EmpNo
    END
END
I am updating the same table that has this trigger. Firing the trigger the first time should fire it again for the UPDATE it performs on the next month's row, but that is not happening.
It is urgent for me to know how to solve this.
Thanks

One thing to watch out for: a SQL Server trigger is fired once per statement (and NOT once per row, as many developers think/assume).
If that statement updates more than one row, then the Inserted and Deleted pseudo tables will contain multiple rows - so a select like
SELECT
    @NewRemainingToNextMonth = RemainingToNextMonth,
    @NewTotalDeduction = TotalDeduction
FROM INSERTED
will not work reliably - which row should be selected if there are fifty of them in Inserted?
You will need to rewrite your trigger to be able to deal with multiple rows being handled at once, in the Inserted and Deleted pseudo tables...
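As a rough sketch of what a set-based rewrite could look like (an assumption here: (PersonnelNo, RegDate) uniquely identifies a row - adjust the join keys to your schema). Note also that a trigger which updates its own table only fires itself again if the database option RECURSIVE_TRIGGERS is ON (it is OFF by default), which would explain why the cascade stops after the first month:
CREATE TRIGGER [dbo].[tgUpdateRemainingBalance]
ON [dbo].[GAT_MONTHLY_DEDUCTION]
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- Pair each row's old and new values by joining Inserted to Deleted,
    -- then adjust all the affected "next month" rows in one statement.
    UPDATE nxt
    SET nxt.TotalDeduction = nxt.TotalDeduction
                           - d.RemainingToNextMonth
                           + i.RemainingToNextMonth,
        nxt.Visited = 1
    FROM dbo.GAT_MONTHLY_DEDUCTION nxt
    JOIN Inserted i ON  nxt.PersonnelNo = i.PersonnelNo
                    AND nxt.RegDate = DATEADD(month, 1, i.RegDate)
    JOIN Deleted d  ON  d.PersonnelNo = i.PersonnelNo
                    AND d.RegDate = i.RegDate
    WHERE d.RemainingToNextMonth <> i.RemainingToNextMonth
       OR d.TotalDeduction <> i.TotalDeduction;
END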

Related

How to log all rows affected by an AFTER DELETE trigger on multiple rows

I have a table QuoteDetail with multiple rows sharing the same quotation number.
The records can be deleted using a bulk delete: DELETE FROM QuoteDetail WHERE QuoteNo = '12345'
How can I log all deleted rows using an AFTER DELETE trigger?
Below is my trigger script, but it only logs 1 row and the rest are discarded.
CREATE TRIGGER [dbo].[trgcsms_QuoteDetail_DeleteLogs] ON [CSMS].[dbo].[QuoteDetail]
AFTER DELETE
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @QuoteNo varchar(50)
    DECLARE @SeqNo int
    DECLARE @ItemCode varchar(50)
    DECLARE @Qty float
    DECLARE @NetPrice float
    DECLARE @TotAmt float

    SELECT @QuoteNo = DELETED.[QuoteNo]
          ,@SeqNo = DELETED.[SeqNo]
          ,@ItemCode = DELETED.[ItemCode]
          ,@Qty = DELETED.[Qty]
          ,@NetPrice = DELETED.[NetPrice]
          ,@TotAmt = DELETED.[TotAmt]
    FROM DELETED

    INSERT INTO [dbo].[QuoteDetailDeleteLogs]
    (
        [QuoteNo]
       ,[SeqNo]
       ,[ItemCode]
       ,[Qty]
       ,[NetPrice]
       ,[TotAmt]
    )
    VALUES
    (
        @QuoteNo
       ,@SeqNo
       ,@ItemCode
       ,@Qty
       ,@NetPrice
       ,@TotAmt
    )
END
You almost got it right. Just stop using "@variables" and use tables instead. Because after all, you want multiple rows, right? That's also known as "a table".
Also worth noting - I took the liberty of upgrading your trigger to include UPDATE as well, because an update is effectively a DELETE followed immediately by an INSERT. So I imagine that if you update a line and QuoteNo (the quote number) changes from '12345' to '54321', you would want that in your audit trail (your QuoteDetailDeleteLogs).
[updated 30 Aug 2018]
CREATE TRIGGER [dbo].[trgcsms_QuoteDetail_DeleteLogs] ON [CSMS].[dbo].[QuoteDetail]
AFTER DELETE, UPDATE
AS
BEGIN
    INSERT INTO [dbo].[QuoteDetailDeleteLogs]
    (
        [QuoteNo]
       ,[SeqNo]
       ,[ItemCode]
       ,[Qty]
       ,[TotAmt]
    )
    SELECT [QuoteNo]
          ,[SeqNo]
          ,[ItemCode]
          ,[Qty]
          ,[TotAmt]
    FROM DELETED
END
Keeping it short and simple.
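For a quick sanity check (assuming the tables from the question), a multi-row delete should now produce one log row per deleted row:
-- Delete several rows in one statement; the trigger's single
-- INSERT...SELECT logs every one of them, not just the first.
DELETE FROM QuoteDetail WHERE QuoteNo = '12345';

-- Inspect the audit trail for that quote.
SELECT * FROM QuoteDetailDeleteLogs WHERE QuoteNo = '12345';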

SQL Trigger Inconsistently firing

I have a SQL trigger on a table that works... most of the time. And I cannot figure out why the fields are sometimes NULL.
The trigger works by updating LastUpdateDateTime whenever something in the row is modified, and setting InsertDatetime when the row is first created.
For some reason this only seems to work some of the time.
ALTER TRIGGER [dbo].[DateTriggerTheatreListHeaders]
ON [dbo].[TheatreListHeaders]
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    IF NOT EXISTS (SELECT * FROM DELETED)
    BEGIN
        UPDATE ES
        SET InsertDatetime = GETDATE()
           ,LastUpdateDateTime = GETDATE()
        FROM TheatreListHeaders es
        JOIN Inserted I ON es.UNIQUETHEATRELISTNUMBER = I.UNIQUETHEATRELISTNUMBER
    END

    IF UPDATE(LastUpdateDateTime) OR UPDATE(InsertDatetime)
        RETURN;

    IF EXISTS (
        SELECT *
        FROM INSERTED I
        JOIN DELETED D
            -- make sure to compare inserted with (same) deleted person
            ON D.UNIQUETHEATRELISTNUMBER = I.UNIQUETHEATRELISTNUMBER
    )
    BEGIN
        UPDATE ES
        SET InsertDatetime = ISNULL(es.Insertdatetime, GETDATE())
           ,LastUpdateDateTime = GETDATE()
        FROM TheatreListHeaders es
        JOIN Inserted I ON es.UNIQUETHEATRELISTNUMBER = I.UNIQUETHEATRELISTNUMBER
    END
END
A much simpler and more efficient approach to do what you are trying to do would be something like...
ALTER TRIGGER [dbo].[DateTriggerTheatreListHeaders]
ON [dbo].[TheatreListHeaders]
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- Determine if this is an INSERT or UPDATE action.
    DECLARE @Action as char(1);
    SET @Action = (CASE WHEN EXISTS (SELECT * FROM INSERTED)
                         AND EXISTS (SELECT * FROM DELETED)
                        THEN 'U' -- Set Action to Update.
                        WHEN EXISTS (SELECT * FROM INSERTED)
                        THEN 'I' -- Set Action to Insert.
                   END);

    UPDATE ES
    SET InsertDatetime = CASE WHEN @Action = 'U'
                              THEN ISNULL(es.Insertdatetime, GETDATE())
                              ELSE GETDATE()
                         END
       ,LastUpdateDateTime = GETDATE()
    FROM TheatreListHeaders es
    JOIN Inserted I ON es.UNIQUETHEATRELISTNUMBER = I.UNIQUETHEATRELISTNUMBER;
END
"If update()" is poorly defined/implemented in sql server IMO. It does not do what is implied. The function only determines if the column was set by a value in the triggering statement. For an insert, every column is implicitly (if not explicitly) assigned a value. Therefore it is not useful in an insert trigger and difficult to use in a single trigger that supports both inserts and updates. Sometimes it is better to write separate triggers.
Are you aware of recursive triggers? An insert statement will execute your trigger, which updates the same table. This causes the trigger to execute again, and so on. Is the (database) recursive trigger option off (which is typical), or do you need to adjust your logic to support that?
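For reference, this is how you could inspect and, if appropriate, enable that option (the database name below is a placeholder):
-- Check whether recursive trigger firing is enabled per database.
SELECT name, is_recursive_triggers_on FROM sys.databases;

-- Enable direct trigger recursion for one database (OFF by default).
ALTER DATABASE [YourDatabase] SET RECURSIVE_TRIGGERS ON;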
What are your expectations for the insert/update/merge statements against this table? This goes back to your requirements. Is the trigger supposed to ignore any attempt to set the datetime columns directly and always set them within the trigger?
And lastly, what exactly does "works sometimes" actually mean? Do you have a test case that reproduces the issue? If you don't, then you can't really "fix" the logic without a specific failure case. But the above comments should give you sufficient clues. To be honest, your logic seems overly complicated. I'll add that it is also logically flawed in the way it sets InsertDatetime to GETDATE() if the existing value is NULL during an update. IMO, it should reject any update that attempts to set the value to NULL, because that would overwrite a fact that should never change. M.Ali has provided an example that is usable but includes the created-timestamp problem. Below is an example that demonstrates a different path (assuming the recursive trigger option is off). It does not include the rejection logic - which you should consider. Notice the output of the merge execution carefully.
use tempdb;
set nocount on;
go
create table zork (id integer identity(1, 1) not null primary key,
    descr varchar(20) not null default('zippy'),
    created datetime null, modified datetime null);
go
create trigger zorktgr on zork for insert, update as
begin
    declare @rc int = @@rowcount;
    if @rc = 0 return;
    set nocount on;
    if update(created)
        select 'created column updated', @rc as rc;
    else
        select 'created column NOT updated', @rc as rc;
    if exists (select * from deleted) -- update :: do not rely on @@rowcount
        update zork set modified = getdate()
        where exists (select * from inserted as ins where ins.id = zork.id);
    else
        update zork set created = getdate(), modified = getdate()
        where exists (select * from inserted as ins where ins.id = zork.id);
end;
go
insert zork default values;
select * from zork;
insert zork (descr) values ('bonk');
select * from zork;
update zork set created = null, descr = 'upd #1' where id = 1;
select * from zork;
update zork set descr = 'upd #2' where id = 1;
select * from zork;
waitfor delay '00:00:02';
merge zork as tgt
using (select 1 as id, 'zippity' as descr union all select 5, 'who me?') as src
on tgt.id = src.id
when matched then update set descr = src.descr
when not matched then insert (descr) values (src.descr)
;
select * from zork;
go
drop table zork;

SQL Server INSERT in an AFTER UPDATE Trigger

I'm new to SQL Server, and I'm trying to build a simple update trigger that writes a row to a staging table whenever the column ceu_amount is updated from zero to any number greater than zero.
From using PRINT statements, I know that the variables contain the correct values to execute the INSERT statement, but no rows are being inserted.
Can you help?
CREATE TRIGGER [dbo].[TRG_Product_Function_Modified] ON [dbo].[Product_Function]
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    --
    -- Variable definitions
    --
    DECLARE @product_code_new as varchar(31)
    DECLARE @product_code_old as varchar(31)
    --
    -- Check if the staging table needs to be updated.
    --
    SELECT @product_code_new = product_code FROM Inserted WHERE ISNULL(ceu_amount, 0) > 0;
    SELECT @product_code_old = product_code FROM Deleted WHERE ISNULL(ceu_amount, 0) = 0;

    IF @product_code_new IS NOT NULL
       AND @product_code_old IS NOT NULL
        INSERT INTO Product_Function_Staging VALUES (@product_code_new, CURRENT_TIMESTAMP);
END;
This part of the code looks suspicious to me:
SELECT @product_code_new = product_code FROM Inserted WHERE ISNULL(ceu_amount, 0) > 0;
SELECT @product_code_old = product_code FROM Deleted WHERE ISNULL(ceu_amount, 0) = 0;
IF @product_code_new IS NOT NULL
   AND @product_code_old IS NOT NULL
    INSERT INTO Product_Function_Staging VALUES (@product_code_new, CURRENT_TIMESTAMP);
The above will work fine if only one row is updated, but what if more than one row is updated? The variables will simply end up holding values from an arbitrary (effectively the last) row. You can change that part of the code to the following:
Insert into Product_Function_Staging
select product_code, CURRENT_TIMESTAMP from inserted where product_code is not null
You will get undetermined values for @product_code_new if more than one row is updated with ceu_amount > 0, and similarly for @product_code_old if more than one row is updated with ceu_amount NULL or equal to 0.
Can you post some sample data?
I would not use variables like that in a trigger, since whatever fires the trigger could be an update to more than one row, at which point you would have multiple rows in your inserted and deleted tables.
I think we can more safely and efficiently make this insert with one simple query, though I'm assuming you have a unique key to use:
CREATE TRIGGER [dbo].[TRG_Product_Function_Modified] ON [dbo].[Product_Function]
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO Product_Function_Staging
    SELECT i.product_code, CURRENT_TIMESTAMP
    FROM inserted i
    JOIN deleted d ON i.product_code = d.product_code -- assuming product_code is unique
    WHERE i.ceu_amount > 0             -- new value > 0
      AND ISNULL(d.ceu_amount, 0) = 0; -- old value null or 0
END;
I'm not sure where you need to check for nulls in your data, so I've made a best guess in the where clause.
Try using this:
CREATE TRIGGER [dbo].[Customer_UPDATE]
ON [dbo].[Customers]
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @CustomerId INT
    DECLARE @Action VARCHAR(50)

    SELECT @CustomerId = INSERTED.CustomerId
    FROM INSERTED

    IF UPDATE(Name)
    BEGIN
        SET @Action = 'Updated Name'
    END

    IF UPDATE(Country)
    BEGIN
        SET @Action = 'Updated Country'
    END

    INSERT INTO CustomerLogs
    VALUES (@CustomerId, @Action)
END
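Note that this snippet shares the single-row assumption discussed in the other answers: if one statement updates several customers, only one log row is written. A set-based sketch (assuming CustomerLogs has CustomerId and Action columns) could be:
CREATE TRIGGER [dbo].[Customer_UPDATE]
ON [dbo].[Customers]
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- UPDATE() is statement-level, so the same action label
    -- applies to every row in Inserted.
    DECLARE @Action VARCHAR(50)
    IF UPDATE(Name)    SET @Action = 'Updated Name'
    IF UPDATE(Country) SET @Action = 'Updated Country'

    IF @Action IS NOT NULL
        INSERT INTO CustomerLogs (CustomerId, Action)
        SELECT CustomerId, @Action
        FROM INSERTED
END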

How to update large table with millions of rows in SQL Server?

I have an UPDATE statement which can update more than a million records. I want to update them in batches of 1000 or 10000. I tried with @@ROWCOUNT but I am unable to get the desired result.
Just for testing purposes, I selected a table with 14 records and set a row count of 5. This query is supposed to update the records in batches of 5, 5, and 4, but it only updates the first 5 records.
Query - 1:
SET ROWCOUNT 5

UPDATE TableName
SET Value = 'abc1'
WHERE Parameter1 = 'abc' AND Parameter2 = 123

WHILE @@ROWCOUNT > 0
BEGIN
    SET ROWCOUNT 5

    UPDATE TableName
    SET Value = 'abc1'
    WHERE Parameter1 = 'abc' AND Parameter2 = 123

    PRINT (@@ROWCOUNT)
END

SET ROWCOUNT 0
Query - 2:
SET ROWCOUNT 5

WHILE (@@ROWCOUNT > 0)
BEGIN
    BEGIN TRANSACTION

    UPDATE TableName
    SET Value = 'abc1'
    WHERE Parameter1 = 'abc' AND Parameter2 = 123

    PRINT (@@ROWCOUNT)

    IF @@ROWCOUNT = 0
    BEGIN
        COMMIT TRANSACTION
        BREAK
    END

    COMMIT TRANSACTION
END

SET ROWCOUNT 0
What am I missing here?
You should not be updating 10k rows in a set unless you are certain that the operation is getting Page Locks (due to multiple rows per page being part of the UPDATE operation). The issue is that Lock Escalation (from either Row or Page to Table locks) occurs at 5000 locks. So it is safest to keep it just below 5000, just in case the operation is using Row Locks.
You should not be using SET ROWCOUNT to limit the number of rows that will be modified. There are two issues here:
It has been deprecated since SQL Server 2005 was released (11 years ago):
Using SET ROWCOUNT will not affect DELETE, INSERT, and UPDATE statements in a future release of SQL Server. Avoid using SET ROWCOUNT with DELETE, INSERT, and UPDATE statements in new development work, and plan to modify applications that currently use it. For a similar behavior, use the TOP syntax
It can affect more than just the statement you are dealing with:
Setting the SET ROWCOUNT option causes most Transact-SQL statements to stop processing when they have been affected by the specified number of rows. This includes triggers. The ROWCOUNT option does not affect dynamic cursors, but it does limit the rowset of keyset and insensitive cursors. This option should be used with caution.
Instead, use the TOP () clause.
There is no purpose in having an explicit transaction here. It complicates the code and you have no handling for a ROLLBACK, which isn't even needed since each statement is its own transaction (i.e. auto-commit).
Assuming you find a reason to keep the explicit transaction, then you do not have a TRY / CATCH structure. Please see my answer on DBA.StackExchange for a TRY / CATCH template that handles transactions:
Are we required to handle Transaction in C# Code as well as in Store procedure
I suspect that the real WHERE clause is not being shown in the example code in the Question, so simply relying upon what has been shown, a better model (please see note below regarding performance) would be:
DECLARE @Rows INT,
        @BatchSize INT; -- keep below 5000 to be safe

SET @BatchSize = 2000;
SET @Rows = @BatchSize; -- initialize just to enter the loop

BEGIN TRY
    WHILE (@Rows = @BatchSize)
    BEGIN
        UPDATE TOP (@BatchSize) tab
        SET tab.Value = 'abc1'
        FROM TableName tab
        WHERE tab.Parameter1 = 'abc'
          AND tab.Parameter2 = 123
          AND tab.Value <> 'abc1' COLLATE Latin1_General_100_BIN2;
          -- Use a binary Collation (ending in _BIN2, not _BIN) to make sure
          -- that you don't skip differences that compare the same due to
          -- insensitivity of case, accent, etc, or linguistic equivalence.

        SET @Rows = @@ROWCOUNT;
    END;
END TRY
BEGIN CATCH
    RAISERROR(stuff);
    RETURN;
END CATCH;
By testing @Rows against @BatchSize, you can avoid that final UPDATE query (in most cases) because the final set is typically some number of rows less than @BatchSize, in which case we know that there are no more to process (which is what you see in the output shown in your answer). Only in those cases where the final set of rows is equal to @BatchSize will this code run a final UPDATE affecting 0 rows.
I also added a condition to the WHERE clause to prevent rows that have already been updated from being updated again.
NOTE REGARDING PERFORMANCE
I emphasized "better" above (as in, "this is a better model") because this has several improvements over the O.P.'s original code, and works fine in many cases, but is not perfect for all cases. For tables of at least a certain size (which varies due to several factors so I can't be more specific), performance will degrade as there are fewer rows to fix if either:
there is no index to support the query, or
there is an index, but at least one column in the WHERE clause is a string data type that does not use a binary collation, hence a COLLATE clause is added to the query here to force the binary collation, and doing so invalidates the index (for this particular query).
This is the situation that @mikesigs encountered, thus requiring a different approach. The updated method copies the IDs for all rows to be updated into a temporary table, then uses that temp table to INNER JOIN to the table being updated on the clustered index key column(s). (It's important to capture and join on the clustered index columns, whether or not those are the primary key columns!)
Please see @mikesigs' answer below for details. The approach shown in that answer is a very effective pattern that I have used myself on many occasions. The only changes I would make are:
Explicitly create the #targetIds table rather than using SELECT INTO...
For the #targetIds table, declare a clustered primary key on the column(s).
For the #batchIds table, declare a clustered primary key on the column(s).
For inserting into #targetIds, use INSERT INTO #targetIds (column_name(s)) SELECT and remove the ORDER BY as it's unnecessary.
So, if you don't have an index that can be used for this operation, and can't temporarily create one that will actually work (a filtered index might work, depending on your WHERE clause for the UPDATE query), then try the approach shown in @mikesigs' answer (and if you use that solution, please up-vote it).
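As a sketch, those changes applied to the setup portion of that answer might look like this (the Id column name and UNIQUEIDENTIFIER type are taken from the answer below):
-- Explicit temp tables with clustered primary keys, instead of SELECT INTO.
CREATE TABLE #targetIds (Id UNIQUEIDENTIFIER NOT NULL PRIMARY KEY CLUSTERED);
CREATE TABLE #batchIds  (Id UNIQUEIDENTIFIER NOT NULL PRIMARY KEY CLUSTERED);

-- Plain INSERT...SELECT; no ORDER BY needed.
INSERT INTO #targetIds (Id)
SELECT Id
FROM TheTable
WHERE Foo IS NULL;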
WHILE EXISTS (SELECT * FROM TableName WHERE Value <> 'abc1' AND Parameter1 = 'abc' AND Parameter2 = 123)
BEGIN
    UPDATE TOP (1000) TableName
    SET Value = 'abc1'
    WHERE Parameter1 = 'abc' AND Parameter2 = 123 AND Value <> 'abc1'
END
I encountered this thread yesterday and wrote a script based on the accepted answer. It turned out to perform very slowly, taking 12 hours to process 25M of 33M rows. I wound up cancelling it this morning and working with a DBA to improve it.
The DBA pointed out that the IS NULL check in my UPDATE query was using a Clustered Index Scan on the PK, and it was the scan that was slowing the query down. Basically, the longer the query runs, the further it needs to look through the index for the right rows.
The approach he came up with was obvious in hindsight. Essentially, you load the IDs of the rows you want to update into a temp table, then join that onto the target table in the update statement. This uses an Index Seek instead of a Scan. And ho boy does it speed things up! It took 2 minutes to update the last 8M records.
Batching Using a Temp Table
SET NOCOUNT ON

DECLARE @Rows INT,
        @BatchSize INT,
        @Completed INT,
        @Total INT,
        @Message nvarchar(max)

SET @BatchSize = 4000
SET @Rows = @BatchSize
SET @Completed = 0

-- #targetIds table holds the IDs of ALL the rows you want to update
SELECT Id INTO #targetIds
FROM TheTable
WHERE Foo IS NULL
ORDER BY Id

-- Used for printing out the progress
SELECT @Total = @@ROWCOUNT

-- #batchIds table holds just the records updated in the current batch
CREATE TABLE #batchIds (Id UNIQUEIDENTIFIER);

-- Loop until #targetIds is empty
WHILE EXISTS (SELECT 1 FROM #targetIds)
BEGIN
    -- Remove a batch of rows from the top of #targetIds and put them into #batchIds
    DELETE TOP (@BatchSize)
    FROM #targetIds
    OUTPUT deleted.Id INTO #batchIds

    -- Update TheTable data
    UPDATE t
    SET Foo = 'bar'
    FROM TheTable t
    JOIN #batchIds tmp ON t.Id = tmp.Id
    WHERE t.Foo IS NULL

    -- Get the # of rows updated
    SET @Rows = @@ROWCOUNT

    -- Increment our @Completed counter, for progress display purposes
    SET @Completed = @Completed + @Rows

    -- Print progress using RAISERROR to avoid SQL buffering issue
    SELECT @Message = 'Completed ' + cast(@Completed as varchar(10)) + '/' + cast(@Total as varchar(10))
    RAISERROR(@Message, 0, 1) WITH NOWAIT

    -- Quick operation to delete all the rows from our batch table
    TRUNCATE TABLE #batchIds;
END

-- Clean up
DROP TABLE IF EXISTS #batchIds;
DROP TABLE IF EXISTS #targetIds;
Batching the slow way, do not use!
For reference, here is the original, slower-performing query:
SET NOCOUNT ON

DECLARE @Rows INT,
        @BatchSize INT,
        @Completed INT,
        @Total INT

SET @BatchSize = 4000
SET @Rows = @BatchSize
SET @Completed = 0

SELECT @Total = COUNT(*) FROM TheTable WHERE Foo IS NULL

WHILE (@Rows = @BatchSize)
BEGIN
    UPDATE TOP (@BatchSize) t
    SET Foo = 'bar'
    FROM TheTable t
    WHERE t.Foo IS NULL

    SET @Rows = @@ROWCOUNT
    SET @Completed = @Completed + @Rows

    PRINT 'Completed ' + cast(@Completed as varchar(10)) + '/' + cast(@Total as varchar(10))
END
This is a more efficient version of the solution from @Kramb. The existence check is redundant, as the update's WHERE clause already handles this; instead you just grab the row count and compare it to the batch size.
Also note that @Kramb's solution didn't filter out already-updated rows from the next iteration, hence it would have been an infinite loop.
It also uses the modern TOP (n) batch-size syntax instead of SET ROWCOUNT.
DECLARE @batchSize INT, @rowsUpdated INT
SET @batchSize = 1000;
SET @rowsUpdated = @batchSize; -- Initialise for the while loop entry

WHILE (@batchSize = @rowsUpdated)
BEGIN
    UPDATE TOP (@batchSize) TableName
    SET Value = 'abc1'
    WHERE Parameter1 = 'abc' AND Parameter2 = 123 AND Value <> 'abc1';

    SET @rowsUpdated = @@ROWCOUNT;
END
I want to share my experience. A few days ago I had to update 21 million records in a table with 76 million records. My colleague suggested the following approach.
For example, suppose we have the following table, 'Persons':
Id | FirstName | LastName | Email        | JobTitle
1  | John      | Doe      | abc1@abc.com | Software Developer
2  | John1     | Doe1     | abc2@abc.com | Software Developer
3  | John2     | Doe2     | abc3@abc.com | Web Designer
Task: Update persons to the new Job Title: 'Software Developer' -> 'Web Developer'.
1. Create a temporary table 'Persons_SoftwareDeveloper_To_WebDeveloper (Id INT PRIMARY KEY)'.
2. Insert into the temporary table the persons you want to update with the new job title:
INSERT INTO Persons_SoftwareDeveloper_To_WebDeveloper
SELECT Id FROM Persons WITH(NOLOCK) -- avoid locks
WHERE JobTitle = 'Software Developer'
OPTION(MAXDOP 1) -- use only one core
Depending on the row count, this statement will take some time to fill your temporary table, but it avoids locks. In my situation it took about 5 minutes (21 million rows).
3. The main idea is to generate micro SQL statements to update the database. So, let's print them:
DECLARE @i INT, @pagesize INT, @totalPersons INT
SET @i = 0
SET @pagesize = 2000
SELECT @totalPersons = MAX(Id) FROM Persons

WHILE @i <= @totalPersons
BEGIN
    PRINT '
UPDATE persons
SET persons.JobTitle = ''Web Developer''
FROM Persons_SoftwareDeveloper_To_WebDeveloper tmp
JOIN Persons persons ON tmp.Id = persons.Id
WHERE persons.Id BETWEEN ' + cast(@i as varchar(20)) + ' AND ' + cast(@i + @pagesize as varchar(20)) + '
PRINT ''Page ' + cast((@i / @pagesize) as varchar(20)) + ' of ' + cast(@totalPersons / @pagesize as varchar(20)) + '''
GO
'
    SET @i = @i + @pagesize
END
After executing this script you will receive hundreds of batches which you can execute in one tab of MS SQL Management Studio.
4. Run the printed SQL statements and check for locks on the table. You can always stop the process and play with @pagesize to speed up or slow down the updating (don't forget to change @i after you pause the script).
5. Drop Persons_SoftwareDeveloper_To_WebDeveloper. Remove the temporary table.
Minor note: this migration can take some time, and new rows with stale data could be inserted during the migration. So, first fix the places where your rows are added. In my situation I fixed the UI: 'Software Developer' -> 'Web Developer'.
More about this method on my blog https://yarkul.com/how-smoothly-insert-millions-of-rows-in-sql-server/
Your print is messing things up, because it resets @@ROWCOUNT. Whenever you use @@ROWCOUNT, my advice is to always set it immediately to a variable. So:
DECLARE @RC int;

WHILE @RC > 0 or @RC IS NULL
BEGIN
    SET ROWCOUNT 5;

    UPDATE TableName
    SET Value = 'abc1'
    WHERE Parameter1 = 'abc' AND Parameter2 = 123 AND Value <> 'abc1';

    SET @RC = @@ROWCOUNT;
    PRINT (@RC);
END;

SET ROWCOUNT 0;
And, another nice feature is that you don't need to repeat the update code.
First of all, thank you all for your input. I tweaked my Query 1 and got my desired result. Gordon Linoff is right: PRINT was messing up my query, so I modified it as follows:
Modified Query - 1:
SET ROWCOUNT 5

WHILE (1 = 1)
BEGIN
    BEGIN TRANSACTION

    UPDATE TableName
    SET Value = 'abc1'
    WHERE Parameter1 = 'abc' AND Parameter2 = 123

    IF @@ROWCOUNT = 0
    BEGIN
        COMMIT TRANSACTION
        BREAK
    END

    COMMIT TRANSACTION
END

SET ROWCOUNT 0
Output:
(5 row(s) affected)
(5 row(s) affected)
(4 row(s) affected)
(0 row(s) affected)

IF statement in SQL Server trigger with condition

I created a table student, and I want to create a trigger that stops a teacher id from being assigned to a 6th student, so that every 5 students have only 1 teacher.
Teacher id in the student table is a foreign key, so it can repeat in the student table 5 times, and when I insert a 6th it should be rejected. I tried the code below, but it does not work.
The student table looks like this:
CREATE TABLE student
(
    s_id int PRIMARY KEY,
    s_name varchar(50),
    birthday date,
    t_id int
)
And the trigger:
CREATE TRIGGER tr_student
ON student
AFTER INSERT
AS
DECLARE
    @count int,
    @s_id int,
    @s_name nvarchar(50),
    @birthdate date,
    @t_id int
BEGIN
    SELECT @s_id = i.s_id FROM inserted i;
    SELECT @s_name = i.s_name FROM inserted i;
    SELECT @birthdate = i.birthday FROM inserted i;
    SELECT @t_id = i.t_id FROM inserted i;

    SELECT @count = count(*)
    FROM student
    WHERE t_id = @t_id

    IF @count < 6
        INSERT INTO student (s_id, s_name, birthday, t_id)
        VALUES (@s_id, @s_name, @birthdate, @count)

    PRINT 'AFTER INSERT trigger fired'
END
Your fundamental flaw is that you seem to expect the trigger to be fired once per row - this is NOT the case in SQL Server. Instead, the trigger fires once per statement, and the pseudo table Inserted might contain multiple rows.
Given that that table might contain multiple rows - which one do you expect will be selected here?
select @s_id = i.s_id from inserted i;
select @s_name = i.s_name from inserted i;
select @birthdate = i.birthday from inserted i;
select @t_id = i.t_id from inserted i;
It's undefined - you'll get the values from arbitrary rows in Inserted, and you'll ignore all other rows.
You need to rewrite your entire trigger with the knowledge that Inserted WILL contain multiple rows! You need to work with set-based operations - don't expect just a single row in Inserted!
Furthermore: the trigger fires AFTER the insert has already happened, so your code here:
If @count < 6
    Insert into student(s_id, s_name, birthday, t_id)
    values(@s_id, @s_name, @birthdate, @count)
basically inserts the same data again - certainly not what you want.
So you need to:
rewrite your trigger to properly deal with multiple rows in Inserted, and
either switch to an INSTEAD OF INSERT trigger, or change your trigger logic.
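For illustration, a set-based INSTEAD OF INSERT version could look like this sketch (assuming the rule is at most 5 students per teacher):
CREATE TRIGGER tr_student
ON student
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- Reject the whole statement if any teacher would end up
    -- with more than 5 students (existing rows + incoming rows).
    IF EXISTS (
        SELECT i.t_id
        FROM inserted i
        LEFT JOIN student s ON s.t_id = i.t_id
        GROUP BY i.t_id
        HAVING COUNT(DISTINCT i.s_id) + COUNT(DISTINCT s.s_id) > 5
    )
    BEGIN
        RAISERROR('A teacher cannot have more than 5 students.', 16, 1);
        ROLLBACK TRANSACTION;
        RETURN;
    END;

    -- Otherwise perform the insert the statement asked for.
    INSERT INTO student (s_id, s_name, birthday, t_id)
    SELECT s_id, s_name, birthday, t_id
    FROM inserted;
END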
