SQL Server Trigger loop

I would like to know if there is any way I can add a trigger on each of two tables that replicates the data to the other.
For example:
I have two users tables, users_V1 and users_V2. When a user is updated by the V1 app, a trigger fires and updates the row in users_V2 as well.
If I add the same kind of trigger on the V2 table, to update the data in V1 when a user is updated in V2, will it go into an infinite loop? Is there any way to avoid that?

I don't recommend explicitly disabling the trigger during processing - this can cause strange side-effects.
The most reliable way to detect (and prevent) cycles in a trigger is to use CONTEXT_INFO().
Example:
CREATE TRIGGER tr_Table1_Update
ON Table1
FOR UPDATE AS
DECLARE @ctx VARBINARY(128)
SELECT @ctx = CONTEXT_INFO()
IF @ctx = 0xFF
    RETURN
SET CONTEXT_INFO 0xFF
-- Trigger logic goes here
Note on CONTEXT_INFO() in SQL Server 2000:
Context info is supported, but the CONTEXT_INFO() function is not. You have to use this instead:
SELECT @ctx = context_info
FROM master.dbo.sysprocesses
WHERE spid = @@SPID

Either use TRIGGER_NESTLEVEL() to restrict trigger recursion, or
check the target table to see whether an UPDATE is necessary at all:
IF EXISTS (SELECT 1
           FROM users_V1
           INNER JOIN inserted ON users_V1.ID = inserted.ID
           WHERE users_V1.field1 <> inserted.field1
              OR users_V1.field2 <> inserted.field2)
BEGIN
    UPDATE users_V1 SET ...
END

I had the exact same problem. I tried using CONTEXT_INFO(), but that is a session-scoped value, so it only worked the first time: the next time a trigger fired during the same session, the flag was still set and the trigger exited when it shouldn't have. So I ended up checking the nesting level in each of the affected triggers to decide when to exit.
Example:
CREATE TRIGGER tr_Table1_Update
ON Table1
FOR UPDATE AS
BEGIN
--Prevents second nested call
IF @@NESTLEVEL > 1 RETURN
--Trigger logic goes here
END
Note: Or use @@NESTLEVEL > 0 if you want to stop all nested calls
One other note: there seems to be much confusion in this thread about nested calls versus recursive calls. The original poster was referring to a nested trigger, where one trigger causes another trigger to fire, which causes the first trigger to fire again, and so on. This is nested but, according to SQL Server, not recursive, because the trigger is not calling/triggering itself directly. Recursion is NOT where "one trigger [is] calling another"; that is nested, but not necessarily recursive. You can test this by enabling/disabling recursion and nesting with the settings mentioned here: blog post on nesting
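The two behaviors are controlled by separate settings, so you can experiment with them directly; a quick sketch (the database name is a placeholder):

```sql
-- Nested triggers (trigger A's DML fires trigger B) are a server-level option:
EXEC sp_configure 'nested triggers', 0;  -- 0 = disable, 1 = enable (the default)
RECONFIGURE;

-- Direct recursion (a trigger's DML firing the same trigger again)
-- is a database-level option:
ALTER DATABASE MyDatabase SET RECURSIVE_TRIGGERS OFF;  -- OFF is the default
```

Note that disabling nested triggers also suppresses the indirect-recursion scenario the original poster describes, since trigger A can no longer fire trigger B at all.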

I'm with the no triggers camp for this particular design scenario. Having said that, with the limited knowledge I have about what your app does and why it does it, here's my overall analysis:
Using a trigger on a table has an advantage of being able to act on all actions on the table. That's it, your main benefit in this case. But that would mean you have users with direct access to the table or multiple access points to the table. I tend to avoid that. Triggers have their place (I use them a lot), but it's one of the last database design tools I use because they tend to not know a lot about their context (generally, a strength) and when used in a place where they do need to know about different contexts and overall use cases, their benefits are weakened.
If both app versions need to trigger the same action, they should both call the same stored proc. The stored proc can ensure that all the appropriate work is done, and when your app no longer needs to support V1, then that part of the stored proc can be removed.
Calling two stored procs in your client code is a bad idea, because this is an abstraction layer of data services which the database can provide easily and consistently, without your application having to worry about it.
I prefer to control the interface to the underlying tables more tightly, with views, UDFs, or stored procedures; users never get direct access to a table. Another point here is that you could present a single "users" VIEW or UDF that coalesces the appropriate underlying tables without the user even knowing about it. You might even get to the point where no "synchronization" is necessary at all, because new attributes live in an EAV system (if you need that kind of pathological flexibility) or in some other structure that can still be joined in, say via OUTER APPLY of a UDF.
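A minimal sketch of such a coalescing view, assuming both tables share a UserID key and the V2 table carries an attribute the V1 structure lacks (all table and column names here are hypothetical):

```sql
CREATE VIEW dbo.users AS
SELECT v1.UserID,
       v1.LastName,
       v1.FirstName,
       v2.NewAttribute   -- column that only exists in the V2 structure
FROM dbo.users_V1 v1
LEFT JOIN dbo.users_V2 v2 ON v2.UserID = v1.UserID;
```

Both app versions then read from dbo.users, and the question of keeping two tables in sync via triggers goes away for the read path.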

You're going to have to build some sort of loopback detection into your trigger, perhaps an "if exists" check to see whether the target row already has the new values before writing it to the other table. It does sound like it will go into an infinite loop the way it's currently set up.

Avoid triggers like the plague... use a stored procedure to add the user. If this requires some design changes, then make them. Triggers are EVIL.

Try something like this (I didn't bother with the CREATE TRIGGER boilerplate, as you clearly already know how to write that part):
UPDATE t
SET field1 = i.field1,
    field2 = i.field2
FROM inserted i
JOIN table1 t ON i.id = t.id
WHERE t.field1 <> i.field1
   OR t.field2 <> i.field2

Trigger nesting, that is, one trigger causing another to fire, is limited to 32 levels.
In each trigger, just check if the row you wish to insert already exists.
Example
CREATE TRIGGER Table1_Synchronize_Update ON [Table1] FOR UPDATE AS
BEGIN
    UPDATE t2
    SET LastName = i.LastName
      , FirstName = i.FirstName
      , ... -- Every relevant field that needs to stay in sync
    FROM Table2 t2
    INNER JOIN Inserted i ON i.UserID = t2.UserID
    WHERE i.LastName <> t2.LastName
       OR i.FirstName <> t2.FirstName
       OR ... -- Every relevant field that needs to stay in sync
END

CREATE TRIGGER Table1_Synchronize_Insert ON [Table1] FOR INSERT AS
BEGIN
    INSERT INTO Table2
    SELECT i.*
    FROM Inserted i
    LEFT OUTER JOIN Table2 t2 ON t2.UserID = i.UserID
    WHERE t2.UserID IS NULL
END

CREATE TRIGGER Table2_Synchronize_Update ON [Table2] FOR UPDATE AS
BEGIN
    UPDATE t1
    SET LastName = i.LastName
      , FirstName = i.FirstName
      , ... -- Every relevant field that needs to stay in sync
    FROM Table1 t1
    INNER JOIN Inserted i ON i.UserID = t1.UserID
    WHERE i.LastName <> t1.LastName
       OR i.FirstName <> t1.FirstName
       OR ... -- Every relevant field that needs to stay in sync
END

CREATE TRIGGER Table2_Synchronize_Insert ON [Table2] FOR INSERT AS
BEGIN
    INSERT INTO Table1
    SELECT i.*
    FROM Inserted i
    LEFT OUTER JOIN Table1 t1 ON t1.UserID = i.UserID
    WHERE t1.UserID IS NULL
END

Related

Write protect some rows in a table

I have a situation where I need to write protect certain rows of a table based on some condition (actually a flag in a foreign table). Consider this schema:
CREATE TABLE Batch (
Id INT NOT NULL PRIMARY KEY,
DateCreated DATETIME NOT NULL,
Locked BIT NOT NULL
)
CREATE UNIQUE INDEX U_Batch_Locked ON Batch (Locked) WHERE Locked=0
CREATE TABLE ProtectedTable (
Id INT NOT NULL IDENTITY PRIMARY KEY,
Quantity DECIMAL(10,3) NOT NULL,
Price Money NOT NULL,
BatchId INT NULL)
ALTER TABLE ProtectedTable ADD CONSTRAINT FK_ProtectedTable_Batch FOREIGN KEY (BatchId) REFERENCES Batch(id)
I want to prevent changes to Quantity and Price if the row is linked to a Batch that has Locked=1. I also need to prevent the row from being deleted.
Note: U_Batch_Locked ensures that at most one batch can be unlocked at any time.
I have tried using triggers (yikes) but that caused more issues because the trigger rolls back the transaction. The update typically happens in a C# client that performs multiple updates (on multiple tables) in a single transaction. The client continues with the updates regardless of errors and at the end of the transaction rolls it back if any errors occurred. This way it can collect all/most of the constraint violations and let the user fix them before trying to save the changes again. However, since the trigger rolls back the transaction when the constraint is not satisfied, subsequent updates start their own auto transactions and are in fact committed. The final rollback issued by the client at the end of the batch simply fails.
I have also found this blog post: Using ROWVERSION to enforce business rules, which although seems to be what I need it requires the foreign key to have the opposite direction (i.e the protected table is the parent in the parent-child relationship while in my case the protected table is the child)
Has anyone done something like this before? It seems like a not-so-uncommon business requirement, yet I have not seen a proper solution. I know I can implement this on the client side, but that leaves room for error: what if someone changes these values using direct SQL (by mistake), or what if an upgrade/migration script or the client itself has a bug and fails to enforce the constraint?
After lots of searching and trial and error, I ended up using INSTEAD OF triggers to check for the constraint and if necessary raise the error and skip the operation. Referring to the schema in the original question, here are the required triggers:
Update trigger:
CREATE TRIGGER ProtectedTable_LockU ON ProtectedTable
INSTEAD OF UPDATE
AS
SET NOCOUNT ON;
IF (UPDATE(Quantity) OR UPDATE(Price))
AND EXISTS(
    SELECT *
    FROM inserted i
    INNER JOIN deleted d ON d.Id = i.Id
    LEFT JOIN Batch b ON b.Id = d.BatchId
    WHERE b.Locked <> 0
      AND (i.Quantity <> d.Quantity OR i.Price <> d.Price))
BEGIN
    RAISERROR('[CK_ProtectedTable_DataLocked]: Attempted to update locked data', 16, 1)
    RETURN
END
UPDATE pt
SET Quantity = i.Quantity,
    Price = i.Price,
    BatchId = i.BatchId
FROM ProtectedTable pt
INNER JOIN deleted d ON d.Id = pt.Id
INNER JOIN inserted i ON i.Id = d.Id
Delete trigger:
CREATE TRIGGER ProtectedTable_LockD ON ProtectedTable
INSTEAD OF DELETE
AS
SET NOCOUNT ON;
IF EXISTS(
    SELECT *
    FROM deleted d
    LEFT JOIN Batch b ON b.Id = d.BatchId
    WHERE b.Locked <> 0)
BEGIN
    RAISERROR('[CK_ProtectedTable_DataLocked]: Attempted to delete locked data', 16, 1)
    RETURN
END
DELETE pt
FROM ProtectedTable pt
INNER JOIN deleted d ON d.Id = pt.Id
Of course one could create a view and put the triggers on the view. That way you could also avoid the conflict between INSTEAD OF triggers and FOREIGN KEY ON DELETE/ON UPDATE rules.
It's ugly, I don't like it at all, but it's the only solution that actually works and behaves more or less like a regular database constraint (database constraints are checked before modifying the data). I have not yet tested this extensively and I am not sure there are no issues with it. I am also worried about race conditions (e.g. what if Batch is modified during the execution of the trigger? should I / can I include lock hints to ensure integrity?)
PS: Regardless of the bad application design, this is still a reasonable requirement. In certain cases it is also a legal one (you have to prove there is no way certain records can be altered after some conditions are met). That is why the question pops up now and then in SO, and there is no definite solution. I find this solution better than using AFTER triggers as this behaves more like a normal database constraint.
Hopefully this may help others in similar situations.
Rather than granting users access to the table directly, create a simple VIEW using the CHECK OPTION option and only grant users permissions to apply modifications via that view. Something like:
CREATE VIEW BatchData
WITH SCHEMABINDING
AS
SELECT pt.Id, pt.Quantity, pt.Price, pt.BatchId
FROM dbo.ProtectedTable pt
INNER JOIN dbo.Batch b ON pt.BatchId = b.Id
WHERE b.Locked = 0
WITH CHECK OPTION
As long as all inserts, updates, and deletes are channelled through this view (which, as I say above, you constrain via permissions), they can only be applied to the open batch.
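To actually channel everything through the view, the permission setup might look along these lines (app_user is a placeholder principal):

```sql
-- Users touch only the view; direct table access is denied:
GRANT SELECT, INSERT, UPDATE ON BatchData TO app_user;
DENY  SELECT, INSERT, UPDATE, DELETE ON dbo.ProtectedTable TO app_user;

-- With CHECK OPTION in place, an update that would move a row outside the
-- view's WHERE clause (e.g. into a locked batch) is rejected outright:
-- UPDATE BatchData SET BatchId = @lockedBatchId WHERE Id = @rowId;  -- fails
```

One caveat worth noting: DELETE through a view that joins two tables is not allowed by SQL Server, so the delete path would still need separate handling (a stored procedure or the INSTEAD OF trigger from the accepted answer).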

What are the advantages of MERGE over simple IF EXISTS?

I want to know what the advantages of MERGE are over simply using IF EXISTS. Which is the suggested approach? Does MERGE perform the UPDATE and INSERT row by row while matching conditions? If yes, is it similar to cursors?
MERGE combines INSERT, UPDATE and DELETE logic into one DML statement, and therefore is atomic. If you are doing single row UPSERTS then the advantages are less obvious. For example, a naive implementation of an UPSERT may look like the following:
IF EXISTS (SELECT * FROM t1 WHERE id = @id)
    UPDATE t1 SET ... WHERE id = @id
ELSE
    INSERT INTO t1 (...) VALUES (...)
However, without wrapping this in a transaction, it is possible that the row we're going to update will be deleted between the SELECT and the UPDATE. Adding minimal logic to address that issue give us this:
BEGIN TRAN
IF EXISTS (SELECT * FROM t1 WITH (HOLDLOCK, UPDLOCK) WHERE id = @id)
    UPDATE t1 SET ... WHERE id = @id
ELSE
    INSERT INTO t1 (...) VALUES (...)
COMMIT
This logic isn't necessary with the MERGE statement.
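As a sketch, the same upsert written as a single MERGE (t1, id, and col1 are the hypothetical table and columns from above; the HOLDLOCK hint is still advisable so the match-and-modify is atomic under concurrency):

```sql
MERGE t1 WITH (HOLDLOCK) AS target
USING (SELECT @id AS id, @col1 AS col1) AS source
    ON target.id = source.id
WHEN MATCHED THEN
    UPDATE SET col1 = source.col1
WHEN NOT MATCHED THEN
    INSERT (id, col1) VALUES (source.id, source.col1);
```

The branching logic, the explicit transaction, and the two passes over the table collapse into one statement.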
There are no comparisons that should be drawn between CURSORS and the MERGE statement.
Merge gives you the option of updating, inserting, and deleting data in a target table where it is matched (or not matched) against a source table. It is a set-based operation, so it is not like a cursor (row by row).
I am not sure what you mean by advantages over 'IF EXISTS', but MERGE is a useful and flexible way of synchronizing two tables.
This is a useful resource for MERGE: https://www.simple-talk.com/sql/learn-sql-server/the-merge-statement-in-sql-server-2008/

Can I Select and Update at the same time?

This is an over-simplified explanation of what I'm working on.
I have a table with status column. Multiple instances of the application will pull the contents of the first row with a status of NEW, update the status to WORKING and then go to work on the contents.
It's easy enough to do this with two database calls; first the SELECT then the UPDATE. But I want to do it all in one call so that another instance of the application doesn't pull the same row. Sort of like a SELECT_AND_UPDATE thing.
Is a stored procedure the best way to go?
You could use the OUTPUT clause.
DECLARE @Table TABLE (ID INTEGER, Status VARCHAR(32))

INSERT INTO @Table VALUES (1, 'New')
INSERT INTO @Table VALUES (2, 'New')
INSERT INTO @Table VALUES (3, 'Working')

UPDATE t1
SET Status = 'Working'
OUTPUT Inserted.*
FROM @Table t1
INNER JOIN (
    SELECT TOP 1 ID
    FROM @Table
    WHERE Status = 'New'
) t2 ON t2.ID = t1.ID
Sounds like a queue processing scenario, whereby you want one process only to pick up a given record.
If that is the case, have a look at the answer I provided earlier today which describes how to implement this logic using a transaction in conjunction with UPDLOCK and READPAST table hints:
Row locks - manually using them
Best wrapped up in sproc.
I'm not sure this is what you are wanting to do, hence I haven't voted to close as duplicate.
Not quite, but you can SELECT ... WITH (UPDLOCK), then UPDATE subsequently. This is as close as you get to an atomic operation: it tells the database that you are about to update what you previously selected, so it can lock those rows, preventing collisions with other clients. In Oracle and some other databases (MySQL, I think) the syntax is SELECT ... FOR UPDATE.
Note: I think you'll need to ensure the two statements happen within a transaction for it to work.
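A sketch of that lock-then-update pattern, assuming a hypothetical mytable(id, status); the explicit transaction is what keeps the UPDLOCK held until the UPDATE commits:

```sql
DECLARE @id INT;

BEGIN TRAN;

-- Claim the row: UPDLOCK blocks other writers, READPAST skips rows
-- already claimed by a concurrent instance instead of waiting on them.
SELECT TOP 1 @id = id
FROM mytable WITH (UPDLOCK, READPAST)
WHERE status = 'NEW';

UPDATE mytable
SET status = 'WORKING'
WHERE id = @id;

COMMIT;
```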
You should do three things here:
Lock the row you're working on
Make sure that this and only this row is locked
Do not wait for the locked records: skip to the next ones instead.
To do this, you just issue this:
SELECT TOP 1 *
FROM mytable WITH (ROWLOCK, UPDLOCK, READPAST)
WHERE status = 'NEW'
ORDER BY date

UPDATE …
within a transaction.
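The SELECT and UPDATE can also be collapsed into one atomic statement (SQL Server 2005+) by updating through a CTE and returning the claimed row with OUTPUT; mytable(id, status, date) is again a hypothetical table:

```sql
WITH next_row AS (
    SELECT TOP 1 *
    FROM mytable WITH (ROWLOCK, UPDLOCK, READPAST)
    WHERE status = 'NEW'
    ORDER BY date
)
UPDATE next_row
SET status = 'WORKING'
OUTPUT inserted.*;   -- hands the claimed row back to the caller
```

Because it is a single statement, no explicit transaction is needed to make the claim atomic.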
A stored procedure is the way to go. You need to look at transactions. Sql server was born for this kind of thing.
Yes, and maybe use the rowlock hint to keep it isolated from the other threads, eg.
UPDATE
Jobs WITH (ROWLOCK, UPDLOCK, READPAST)
SET Status = 'WORKING'
WHERE JobID =
(SELECT Top 1 JobId FROM Jobs WHERE Status = 'NEW')
EDIT: Rowlock would be better as suggested by Quassnoi, but the same idea applies to do the update in one query.

is there anyway to cache data that can be used in a SQL server db trigger

i have an orders table that has a userID column
i have a user table that has id, name,
i would like to have a database trigger that shows the insert, update or delete by name.
so I wind up having to do this join between these two tables in every single db trigger. I would think it would be better if I could run one query upfront to map users to IDs and then reuse that "lookup" in my triggers. Is this possible?
DECLARE @oldId int
DECLARE @newId int
DECLARE @oldName VARCHAR(100)
DECLARE @newName VARCHAR(100)
SELECT @oldId = (SELECT user_id FROM Deleted)
SELECT @newId = (SELECT user_id FROM Inserted)
SELECT @oldName = (SELECT name FROM users WHERE id = @oldId)
SELECT @newName = (SELECT name FROM users WHERE id = @newId)
INSERT INTO History(id, . . .
Good news: you are already using a cache! Your SELECT name FROM users WHERE id = @id is going to fetch the name from pages cached in the buffer pool. Believe you me, you won't be able to construct a better tuned, higher-scale, faster cache than that.
Result caching may make sense in the client, where one can avoid the roundtrip to the database altogether. Or it may be valuable to cache some complex and long running query result. But inside a stored proc/trigger there is absolutely no value in caching a simple index lookup result.
How about you turn on Change Data Capture, and then get rid of all this code?
Edited to add the rest:
Actually, if you're considering a scalar function to fetch the username, don't. That's really bad, because scalar functions execute procedurally, row by row. You'd be better off with something like:
INSERT dbo.History (id, ...)
SELECT i.id, ...
FROM inserted i
JOIN deleted d ON d.id = i.id
JOIN dbo.users u ON u.user_id = i.user_id;
As user_id is unique, and you have a FK whenever it's used, it shouldn't be a major problem. But yes, you need to repeat this logic in every trigger. If you don't want to repeat the logic, then use Change Data Capture in SQL 2008.
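If you do go the Change Data Capture route (SQL Server 2008, Enterprise edition), enabling it looks roughly like this; the schema and table names are placeholders:

```sql
-- Enable CDC for the database, then for each audited table:
EXEC sys.sp_cdc_enable_db;

EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'orders',
    @role_name     = NULL;   -- NULL = no gating role required to query changes
```

SQL Server then populates change tables asynchronously from the transaction log, so the history capture drops out of your triggers entirely.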

How to test for multiple row actions in a SQL Server trigger?

My kindergarten SQL Server training taught me that a trigger may be fired with multiple rows in the inserted and deleted pseudo tables. I mostly write my trigger code with this in mind, often resulting in some cursor-based kludge. Now I'm really only able to test them firing for a single row at a time. How can I make a trigger fire with multiple rows, and will SQL Server actually ever send multiple rows to a trigger? Can I set a flag so that SQL Server will only fire single-row triggers?
Trigger definitions should always handle multiple rows.
Taken from SQLTeam:
-- BAD Trigger code following:
CREATE TRIGGER trg_Table1
ON Table1
For UPDATE
AS
DECLARE @var1 int, @var2 varchar(50)
SELECT @var1 = Table1_ID, @var2 = Column2
FROM inserted
UPDATE Table2
SET SomeColumn = @var2
WHERE Table1_ID = @var1
The above trigger will only work for the last row in the inserted table.
This is how you should implement it:
CREATE TRIGGER trg_Table1
ON Table1
FOR UPDATE
AS
UPDATE t2
SET SomeColumn = i.SomeColumn
FROM Table2 t2
INNER JOIN inserted i
ON t2.Table1_ID = i.Table1_ID
Yes, if a statement affects more than one row, it should be handled by a single trigger call, as you might want to revert the whole transaction. It is not possible to split it to separate trigger calls logically and I don't think SQL Server provides such a flag. You can make SQL Server call your trigger with multiple rows by issuing an UPDATE or DELETE statement that affects multiple rows.
First, it concerns me that you are making the triggers handle multiple rows by using a cursor. Do not do that! Use a set-based statement instead, joining to the inserted or deleted pseudo tables. Someone put one of those cursor-based triggers on our database before I came to work here. It took over forty minutes to handle a 400,000-record insert (and I often have to do inserts of over 100,000 records to this table for one client). Changing it to a set-based solution cut the time to less than a minute. While all triggers must be capable of handling multiple rows, you must not do so by creating a performance nightmare.
If you can write a select statement for the cursor, you can write an insert, update, or delete based on the same select statement, which is set-based.
I've always written my triggers to handle multiple rows, it was my understanding that if a single query inserted/updated/deleted multiple rows then only one trigger would fire and as such you would have to use a cursor to move through the records one by one.
One SQL statement always invokes one trigger execution; that's part of the definition of a trigger. (It's also a circumstance that seems to trip up, at least once, everyone who writes a trigger.) I believe you can discover how many records are being affected by inspecting @@ROWCOUNT.
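To exercise a trigger with multiple rows, just issue one statement that touches several rows at once; using the trg_Table1 example above (the column names are assumptions):

```sql
-- A single multi-row UPDATE fires the trigger once, with all three
-- affected rows present together in the inserted/deleted pseudo tables:
UPDATE Table1
SET Column2 = Column2          -- no-op change, but still fires the FOR UPDATE trigger
WHERE Table1_ID IN (1, 2, 3);
```

If the trigger was written with single-row assumptions (scalar variables assigned from inserted), this test will expose the bug immediately, since only one of the three rows will be processed.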
