Committing Transaction at the Beginning of Trigger Logic - sql-server

I have a trigger on a table (AFTER INSERT, UPDATE) which exists to compute values based on what has been inserted into or updated in the table, and store them in a separate table.
I've read quite a bit about error handling inside triggers, and the fact that errors in my trigger logic will ROLLBACK the original transaction which fired the trigger in the first place.
In order to preserve the original transaction at all costs (assuming that at some point the trigger could fail), I began my code by doing something like this:
-- Grab newly inserted data
SELECT *
INTO #Temp
FROM INSERTED
-- Force transaction to finish, making sure following statements don't roll it back
COMMIT TRAN
-- Continue using data stored in #Temp
UPDATE .....
SET .....
FROM #Temp
This worked, even when I put intentional errors in the trigger logic. The question is: is this safe?

I don't know whether it is safe or not. But as an alternative, depending on what exactly you are doing, you could perform your insert or update and capture the affected rows using the OUTPUT clause, then use that data to do your trigger logic. That way you wouldn't need a trigger at all, although it really depends on your environment (it might be a little slower than the trigger approach):
-- update row status (in progress)
update Staging.GFTIntradayLivePricesStaging
set Status = 1 -- in progress
output INSERTED.GFTIntradayLivePricesStagingID into #RowsTransfered
where GFTIntradayLivePricesStaging.Status = 0 -- not processed
and exists
(
select top 1 1
from Portfolio.Objects.Objects
where GFTInstrumentID = GFTIntradayLivePricesStaging.GFTInstrumentID
)
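For completeness, here is a minimal sketch of how the whole pattern might hang together. The temp table definition and the follow-up processing step (dbo.ComputedValues) are assumptions added for illustration, not part of the original answer.
-- Sketch only: capture the keys of the rows touched by the UPDATE, then process them.
-- #RowsTransfered must exist before UPDATE ... OUTPUT ... INTO can write to it.
CREATE TABLE #RowsTransfered (GFTIntradayLivePricesStagingID int NOT NULL);

UPDATE Staging.GFTIntradayLivePricesStaging
SET Status = 1 -- in progress
OUTPUT INSERTED.GFTIntradayLivePricesStagingID INTO #RowsTransfered
WHERE Status = 0; -- not processed (plus any extra conditions, e.g. the EXISTS check above)

-- Do whatever the trigger used to do, driven by the captured keys.
INSERT INTO dbo.ComputedValues (SourceID) -- hypothetical target table
SELECT GFTIntradayLivePricesStagingID
FROM #RowsTransfered;

DROP TABLE #RowsTransfered;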
About output:
https://msdn.microsoft.com/en-us/library/ms177564.aspx

Related

MSSQL Trigger - Issue

I have a SQL trigger on a table, which will fire after insert, update and delete.
I insert all the affected records into a separate physical table, with codes defining the type of change. The following code snippet is the trigger definition.
CREATE TRIGGER [dbo].[DATA_CACHE]
ON [dbo].[DATA_USAGE]
FOR INSERT, UPDATE, DELETE
AS
BEGIN
    IF (SELECT COUNT(*) FROM inserted) > 0
    BEGIN
        IF (SELECT COUNT(*) FROM deleted) > 0
        BEGIN
            -- update
            INSERT INTO CACHE_UPDATE_TABLE (CODE, ID, DATE, COUNT)
            SELECT 2, ins.ID, ins.DATE, ins.COUNT
            FROM inserted ins
        END
        ELSE
        BEGIN
            -- insert
            INSERT INTO CACHE_UPDATE_TABLE (CODE, ID, DATE, COUNT)
            SELECT 1, ins.ID, ins.DATE, ins.COUNT
            FROM inserted ins
        END
    END
    ELSE
    BEGIN
        -- delete
        INSERT INTO CACHE_UPDATE_TABLE (CODE, ID, DATE, COUNT)
        SELECT 3, del.ID, del.DATE, del.COUNT
        FROM deleted del
    END
END
SELECT * FROM CACHE_UPDATE_TABLE
As you can see in the above trigger, I added an additional statement after the trigger by MISTAKE, selecting all values from the target table. This statement was after the trigger definition; however, when I tried to alter the trigger by right-clicking on it and selecting Modify, it also showed me the select statement after the END block of the trigger.
Does this mean that every time the trigger fires, this select statement executes? This is my first question (Question A) - maybe a silly one, but I am a little confused about this.
My second question (Question B): I encounter a locking issue on the CACHE_UPDATE_TABLE - could this be the reason for the locking? There is also a SQL job which runs every minute to check the CACHE_UPDATE_TABLE, performs some operation (linked server related), and then deletes those records from CACHE_UPDATE_TABLE when done. Could the locking issue be because of this? And if so, how do I counter it?
My third question (Question C): Is using triggers the best way to do this operation, or can I do it some other way? Is the trigger defined properly?
Any help will be appreciated... Thanks.
You've got a lot of different questions in there which is probably why you've not received any answers, but I'll cover what I can.
A) That's quite an interesting question actually. I would have assumed that it would do nothing - It'd be executed when you create the trigger but then wouldn't be part of the trigger - however I've noticed odd behaviour with this before so I tested with a simple stored procedure:
CREATE PROCEDURE dbo.test ( @i INT ) AS
BEGIN
    SELECT @i
END;
SELECT 'hi'
GO
Executing the stored procedure causes the SELECT 'hi' to fire as well as the SELECT @i. I still don't have an answer for your question, but I would definitely make sure not to have any stray SQL outside the trigger when you create it, for this reason alone.
I've just investigated this a little more, and apparently the end of the stored procedure is wherever the first GO after the procedure is (which effectively means the end of the batch if you don't use one). So you could define your whole procedure after the END - you can still use the parameters too.
This seems to be because the BEGIN and END aren't a required part of the stored procedure definition - they aren't actually indicating the beginning and end of the stored procedure, they're just an unrelated BEGIN...END block like you might put after an IF statement. You can have as many BEGIN...END blocks as you like in the procedure definition, or none at all.
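If you want to see exactly what got stored as the module body (a quick check, using the example procedure above), you can read the definition back out of the catalog; any statements that slipped in before the first GO will show up there:
-- Inspect what SQL Server actually stored as the module body.
SELECT OBJECT_DEFINITION(OBJECT_ID('dbo.test'));
-- or: EXEC sp_helptext 'dbo.test';
The same check works for the trigger in the question - just swap in the trigger's name.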
C) I would definitely change your trigger. You've massively complicated it by combining the 3 triggers without reusing any code. The only reason to combine INSERT, UPDATE and DELETE triggers is so that you don't have to duplicate code. You should either:
Have 3 separate triggers, each containing only the relevant INSERT - that way you remove all of the conditional logic.
Keep them together, but work out only the CODE using some conditional logic and have only 1 INSERT statement.
I'd be tempted to go with the 3 separate triggers, or at least separate out the delete trigger, and then use CASE WHEN del.ID IS NULL THEN 1 ELSE 2 END for the CODE on the INSERT/UPDATE trigger. But you could combine them with (untested):
INSERT INTO CACHE_UPDATE_TABLE (CODE, ID, DATE, COUNT)
SELECT CASE WHEN del.ID IS NULL THEN 1
WHEN ins.ID IS NULL THEN 3
ELSE 2 END
,ISNULL(ins.ID, del.ID)
,ISNULL(ins.DATE, del.DATE)
,ISNULL(ins.COUNT, del.COUNT)
FROM deleted del
FULL OUTER JOIN inserted ins ON del.ID = ins.ID
Just remove that
SELECT * FROM CACHE_UPDATE_TABLE

Using transaction on a single update statement

I am debugging some SPs at work and I have discovered that whoever wrote the code used a transaction on a single update statement, like this:
begin transaction
*single update statement:* update table whatever with whatever
commit transaction
I understand that this is wrong, because a transaction is used when you want to update multiple tables.
I want to understand, from a theoretical point of view, what the implications of using the code as above are.
Is there any difference in updating the whatever table with and without the transaction? Are there any extra locks or something?
Perhaps the transaction was included due to prior or possible future code which may involve other data. Perhaps that developer simply makes a habit of wrapping code in transactions, to be 'safe'?
But if the statement literally involves only a single update to a single row, there really is no benefit to that code being there in this case. A transaction does not necessarily 'lock' anything, though the actions performed inside it may, of course. It just makes sure that all the actions contained therein are performed all-or-nothing.
Note that a transaction is not about multiple tables, it's about multiple updates. It assures multiple updates happen all-or-none.
So if you were updating the same table twice, there would be a difference with or without the transaction. But your example shows only a single update statement, presumably updating only a single record.
In fact, it's probably pretty common that transactions encapsulate multiple updates to the same table. Imagine the following:
INSERT INTO Transactions (AccountNum, Amount) VALUES (1, 200)
INSERT INTO Transactions (AccountNum, Amount) values (2, -200)
That should be wrapped into a transaction, to assure that the money is transferred correctly. If one fails, so must the other.
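As a sketch (not part of the original answer), the usual way to make that pair truly all-or-nothing in T-SQL is an explicit transaction with error handling; the table and column names are just the ones from the example above:
BEGIN TRY
    BEGIN TRANSACTION;

    INSERT INTO Transactions (AccountNum, Amount) VALUES (1, 200);
    INSERT INTO Transactions (AccountNum, Amount) VALUES (2, -200);

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- If either insert fails, undo both so the money never "half moves".
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    THROW; -- re-raise the original error (SQL Server 2012+)
END CATCH;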
I understand that this is wrong because transaction is used when you want to update multiple tables.
Not necessarily. This involves one table only - and just 2 rows:
--- transaction begin
BEGIN TRANSACTION ;
UPDATE tableX
SET Balance = Balance + 100
WHERE id = 42 ;
UPDATE tableX
SET Balance = Balance - 100
WHERE id = 73 ;
COMMIT TRANSACTION ;
--- transaction end
Hopefully your colleague's code looks more like this, otherwise SQL will issue a syntax error.
As per Ypercube's comment, there is no real purpose in placing one statement inside a transaction, but possibly this is a coding standard or similar.
begin transaction -- increments @@TRANCOUNT to 1
update table whatever with whatever
commit transaction -- decrements @@TRANCOUNT to 0
Often, when issuing adhoc statements directly against SQL, it is a good idea to wrap your statements in a transaction, just in case something goes wrong and you need to rollback, i.e.
begin transaction -- Just in case my query goofs up
update table whatever with whatever
select ... from table ... -- check that the correct updates / deletes / inserts happened
-- commit transaction -- Only commit if the above check succeeds.

Does a rollback inside an AFTER INSERT or AFTER UPDATE trigger roll back the entire transaction

Does a rollback inside an AFTER INSERT or AFTER UPDATE trigger roll back the entire transaction, or just the current row that is the reason for the trigger? And is it the same with COMMIT?
I tried to check it through my current project's code, which uses MSDTC for transactions, and it appears as though the complete transaction is aborted.
If a ROLLBACK in the trigger does roll back the entire transaction, is there a workaround to restrict it to just the current rows?
I found a link for Sybase on this, but nothing on SQL Server.
Yes it will rollback the entire transaction.
It's all in the docs (see Remarks). Note the comment I've emphasised - that's pretty important I would say!!
If a ROLLBACK TRANSACTION is issued in a trigger:
All data modifications made to that point in the current transaction are rolled back, including any made by the trigger.
The trigger continues executing any remaining statements after the ROLLBACK statement. If any of these statements modify data, the modifications are not rolled back. No nested triggers are fired by the execution of these remaining statements.
The statements in the batch after the statement that fired the trigger are not executed.
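A minimal sketch of what that documented behaviour looks like in practice (the table and trigger names here are invented for illustration):
CREATE TABLE dbo.Demo (Id int);
GO
CREATE TRIGGER trDemo ON dbo.Demo AFTER INSERT AS
BEGIN
    ROLLBACK TRANSACTION;                   -- undoes the INSERT that fired the trigger
    PRINT 'still running after ROLLBACK';   -- remaining trigger statements still execute
END;
GO
INSERT INTO dbo.Demo VALUES (1);
-- Msg 3609: The transaction ended in the trigger. The batch has been aborted.
GO
SELECT COUNT(*) FROM dbo.Demo;              -- returns 0: the insert was rolled back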
As you've already been made aware, the ROLLBACK command can't be modified or tuned so that it only rolls back the statements issued by the trigger.
If you do need a way to "roll back" actions performed by the trigger only, you could, as a workaround, consider modifying your trigger in such a way that, before performing its actions, the trigger makes sure those actions do not produce exceptional situations that would cause the entire transaction to roll back.
For instance, if your trigger inserts rows, add a check to make sure the new rows do not violate e.g. unique constraints (or foreign key constraints), something like this:
IF NOT EXISTS (
SELECT *
FROM TableA
WHERE … /* a condition to test if a row or rows you are about
to insert aren't going to violate any constraint */
)
BEGIN
INSERT INTO TableA …
END;
Or, if your trigger deletes rows, check if it doesn't attempt to delete rows referenced by other tables (in which case you typically need to know beforehand which tables might reference the rows):
IF NOT EXISTS (
SELECT * FROM TableB WHERE …
)
AND NOT EXISTS (
SELECT * FROM TableC WHERE …
)
AND …
BEGIN
DELETE FROM TableA WHERE …
END
Similarly, you'd need to make checks for update statements, if any.
Any ROLLBACK command will roll back everything until @@TRANCOUNT is 0, unless you specify a savepoint, and it doesn't matter where you put the ROLLBACK TRAN command.
The best approach is to look at the code once again, confirm the business requirement, and ask why you need a rollback in the trigger at all.
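For reference, here is a sketch of the savepoint idea mentioned above, assuming a local (non-distributed) transaction - savepoints are not supported in distributed MSDTC transactions - and with invented object names. It lets the trigger undo only its own work without touching the outer transaction:
CREATE TRIGGER trAudit ON dbo.SomeTable AFTER INSERT AS
BEGIN
    SAVE TRANSACTION TriggerWork;             -- mark the state before the trigger's own changes
    BEGIN TRY
        INSERT INTO dbo.AuditTable (Id)       -- hypothetical work the trigger performs
        SELECT Id FROM inserted;
    END TRY
    BEGIN CATCH
        IF XACT_STATE() = 1                   -- only possible if the transaction isn't doomed
            ROLLBACK TRANSACTION TriggerWork; -- undo the trigger's changes, keep the caller's
    END CATCH
END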

SQL Server - after insert trigger - update another column in the same table

I've got this database trigger:
CREATE TRIGGER setDescToUpper
ON part_numbers
AFTER INSERT,UPDATE
AS
DECLARE @PnumPkid int, @PDesc nvarchar(128)
SET @PnumPkid = (SELECT pnum_pkid FROM inserted)
SET @PDesc = (SELECT UPPER(part_description) FROM inserted)
UPDATE part_numbers SET part_description_upper = @PDesc WHERE pnum_pkid = @PnumPkid
GO
Is this a bad idea? That is to update a column on the same table. I want it to fire for both insert and update.
It works, I'm just afraid of a cyclical situation. The update, inside the trigger, fires the trigger, and again and again. Will that happen?
Please, don't nitpick at the upper case thing. Crazy situation.
It depends on the trigger recursion settings currently configured for the server and database.
If you do this:
SP_CONFIGURE 'nested triggers', 0
GO
RECONFIGURE
GO
Or this:
ALTER DATABASE db_name
SET RECURSIVE_TRIGGERS OFF
That trigger above won't be called again, and you would be safe (unless you get into some kind of deadlock; that could be possible but maybe I'm wrong).
Still, I do not think this is a good idea. A better option would be using an INSTEAD OF trigger. That way you would avoid executing the first (manual) update over the DB. Only the one defined inside the trigger would be executed.
An INSTEAD OF INSERT trigger would be like this:
CREATE TRIGGER setDescToUpper ON part_numbers
INSTEAD OF INSERT
AS
BEGIN
    INSERT INTO part_numbers (
        colA,
        colB,
        part_description
    )
    SELECT
        colA,
        colB,
        UPPER(part_description)
    FROM
        INSERTED
END
GO
This would automagically "replace" the original INSERT statement by this one, with an explicit UPPER call applied to the part_description field.
An INSTEAD OF UPDATE trigger would be similar (and I don't advise you to create a single trigger, keep them separated).
Also, this addresses @Martin's comment: it works for multirow inserts/updates (your original trigger does not).
Another option would be to enclose the update statement in an IF statement and call TRIGGER_NESTLEVEL() to restrict the update being run a second time.
CREATE TRIGGER Table_A_Update ON Table_A AFTER UPDATE
AS
IF ((SELECT TRIGGER_NESTLEVEL()) < 2)
BEGIN
UPDATE a
SET Date_Column = GETDATE()
FROM Table_A a
JOIN inserted i ON a.ID = i.ID
END
When the trigger initially runs the TRIGGER_NESTLEVEL is set to 1 so the update statement will be executed. That update statement will in turn fire that same trigger except this time the TRIGGER_NESTLEVEL is set to 2 and the update statement will not be executed.
You could also check the TRIGGER_NESTLEVEL first and, if it's greater than 1, call RETURN to exit the trigger.
IF ((SELECT TRIGGER_NESTLEVEL()) > 1) RETURN;
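Applied to the setDescToUpper trigger from the question, a multirow-safe version using that guard might look like this (a sketch only, not tested against the original schema):
CREATE TRIGGER setDescToUpper ON part_numbers
AFTER INSERT, UPDATE
AS
BEGIN
    IF TRIGGER_NESTLEVEL() > 1 RETURN;   -- skip the firing caused by our own UPDATE below

    UPDATE p
    SET part_description_upper = UPPER(i.part_description)
    FROM part_numbers p
    JOIN inserted i ON p.pnum_pkid = i.pnum_pkid;   -- joins on the key, so any number of rows works
END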
Use a computed column instead. It is almost always a better idea to use a computed column than a trigger.
See Example below of a computed column using the UPPER function:
create table #temp (test varchar (10), test2 AS upper(test))
insert #temp (test)
values ('test')
select * from #temp
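Translated to the question's table, the computed column could be added like this (a sketch - if part_description_upper already exists as a regular, trigger-maintained column, it would have to be dropped first):
-- Replace the trigger-maintained column with a computed column.
-- ALTER TABLE part_numbers DROP COLUMN part_description_upper;   -- only if it already exists
ALTER TABLE part_numbers
    ADD part_description_upper AS UPPER(part_description) PERSISTED;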
And not to sound like a broken record or anything, but this is critically important: never write a trigger that will not work correctly on multi-record inserts/updates/deletes. This is an extremely poor practice, as sooner or later one of these will happen, and your trigger will cause data integrity problems - it won't fail outright, it will just run the process on only one of the records. This can go on for a long time until someone discovers the mess, and by then it is often impossible to correctly fix the data.
It might be safer to exit the trigger when there is nothing to do. Checking the nesting level or altering the database by switching RECURSIVE_TRIGGERS off can be prone to issues.
SQL Server provides a simple way, in a trigger, to see if specific columns have been updated. Use the UPDATE() function to see if certain columns were updated, such as UPDATE(part_description_upper).
IF UPDATE(part_description_upper)
return
Yes, it will recursively call your trigger unless you turn the recursive triggers setting off:
ALTER DATABASE db_name SET RECURSIVE_TRIGGERS OFF
MSDN has a good explanation of the behavior at http://msdn.microsoft.com/en-us/library/aa258254(SQL.80).aspx under the Recursive Triggers heading.
Yeah... having an additional step to update a table when you could set the value in the initial insert is probably an extra, avoidable process.
Do you have access to the original insert statement, where you could just insert the part_description into the part_description_upper column using the UPPER(part_description) value?
On reflection, you probably don't have access, or you would already have done that, so here are some other options as well...
1) Depends on the need for this part_description_upper column; if it's just for "viewing" then you can use the returned part_description value and "ToUpper()" it (depending on your programming language).
2) If you want to avoid "realtime" processing, you can just create a SQL job that runs once a day during low-traffic periods and updates that column to the UPPER part_description value for any rows where it is not currently set (see the sketch after this list).
3) Go with your trigger (and watch for recursion as others have mentioned)...
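The batch update for option 2 might look something like this (a sketch; the WHERE clause assumes you only want to touch rows that are missing or stale):
-- Nightly cleanup: backfill/refresh the upper-cased copy outside of busy hours.
UPDATE part_numbers
SET part_description_upper = UPPER(part_description)
WHERE part_description_upper IS NULL
   OR part_description_upper <> UPPER(part_description);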
HTH
Dave
CREATE OR REPLACE TRIGGER triggername
BEFORE INSERT ON tablename
FOR EACH ROW
BEGIN
    /*
    Write any select condition here if you want to get data from other tables
    */
    :NEW.COLUMNA := UPPER(:NEW.COLUMNA);
    -- :NEW.COLUMNA := NULL;
END;
The above trigger modifies the column value before it is inserted. For example, if we assign NULL to the column, it will be inserted as NULL for each insert statement.

SQL Server "AFTER INSERT" trigger doesn't see the just-inserted row

Consider this trigger:
ALTER TRIGGER myTrigger
ON someTable
AFTER INSERT
AS BEGIN
DELETE FROM someTable
WHERE ISNUMERIC(someField) = 1
END
I've got a table, someTable, and I'm trying to prevent people from inserting bad records. For the purpose of this question, a bad record has a field "someField" that is all numeric.
Of course, the right way to do this is NOT with a trigger, but I don't control the source code... just the SQL database. So I can't really prevent the insertion of the bad row, but I can delete it right away, which is good enough for my needs.
The trigger works, with one problem... when it fires, it never seems to delete the just-inserted bad record... it deletes any OLD bad records, but it doesn't delete the just-inserted bad record. So there's often one bad record floating around that isn't deleted until somebody else comes along and does another INSERT.
Is this a problem in my understanding of triggers? Are newly-inserted rows not yet committed while the trigger is running?
Triggers cannot modify the changed data (the inserted or deleted tables); otherwise you could get infinite recursion as the changes invoked the trigger again. One option would be for the trigger to roll back the transaction.
Edit: The reason for this is that the SQL standard says inserted and deleted rows cannot be modified by the trigger. The underlying reason is that the modifications could cause infinite recursion. In the general case, this evaluation could involve multiple triggers in a mutually recursive cascade. Having a system intelligently decide whether to allow such updates is computationally intractable, essentially a variation on the halting problem.
The accepted solution to this is not to permit the trigger to alter the changing data, although it can roll back the transaction.
create table Foo (
FooID int
,SomeField varchar (10)
)
go
create trigger FooInsert
on Foo after insert as
begin
delete inserted
where isnumeric (SomeField) = 1
end
go
Msg 286, Level 16, State 1, Procedure FooInsert, Line 5
The logical tables INSERTED and DELETED cannot be updated.
Something like this will roll back the transaction.
create table Foo (
FooID int
,SomeField varchar (10)
)
go
create trigger FooInsert
on Foo for insert as
if exists (
select 1
from inserted
where isnumeric (SomeField) = 1) begin
rollback transaction
end
go
insert Foo values (1, '1')
Msg 3609, Level 16, State 1, Line 1
The transaction ended in the trigger. The batch has been aborted.
You can reverse the logic. Instead of deleting an invalid row after it has been inserted, write an INSTEAD OF trigger to insert only if you verify the row is valid.
CREATE TRIGGER mytrigger ON sometable
INSTEAD OF INSERT
AS BEGIN
DECLARE @isnum TINYINT;
SELECT @isnum = ISNUMERIC(somefield) FROM inserted;
IF (@isnum = 1)
INSERT INTO sometable SELECT * FROM inserted;
ELSE
RAISERROR('somefield must be numeric', 16, 1)
WITH SETERROR;
END
If your application doesn't want to handle errors (as Joel says is the case in his app), then don't RAISERROR. Just make the trigger silently not do an insert that isn't valid.
I ran this on SQL Server Express 2005 and it works. Note that INSTEAD OF triggers do not cause recursion if you insert into the same table for which the trigger is defined.
Here's my modified version of Bill's code:
CREATE TRIGGER mytrigger ON sometable
INSTEAD OF INSERT
AS BEGIN
INSERT INTO sometable SELECT * FROM inserted WHERE ISNUMERIC(somefield) = 1;
INSERT INTO sometableRejects SELECT * FROM inserted WHERE ISNUMERIC(somefield) = 0;
END
This lets the insert always succeed, and any bogus records get thrown into your sometableRejects where you can handle them later. It's important to make your rejects table use nvarchar fields for everything - not ints, tinyints, etc - because if they're getting rejected, it's because the data isn't what you expected it to be.
This also solves the multiple-record insert problem, which will cause Bill's trigger to fail. If you insert ten records simultaneously (like if you do a select-insert-into) and just one of them is bogus, Bill's trigger would have flagged all of them as bad. This handles any number of good and bad records.
I used this trick on a data warehousing project where the inserting application had no idea whether the business logic was any good, and we did the business logic in triggers instead. Truly nasty for performance, but if you can't let the insert fail, it does work.
I think you can use a CHECK constraint - this is exactly what they were invented for. Since a "bad" record is one where someField is all numeric, the constraint should reject those:
ALTER TABLE someTable
ADD CONSTRAINT someField_check CHECK (ISNUMERIC(someField) = 0);
My previous answer (also right, but maybe a bit of overkill):
I think the right way is to use INSTEAD OF trigger to prevent the wrong data from being inserted (rather than deleting it post-factum)
UPDATE: DELETE from a trigger works on both MSSql 7 and MSSql 2008.
I'm no relational guru, nor a SQL standards wonk. However - contrary to the accepted answer - MSSQL deals just fine with both recursive and nested trigger evaluation. I don't know about other RDBMSs.
The relevant options are 'recursive triggers' and 'nested triggers'. Nested triggers are limited to 32 levels, and default to 1. Recursive triggers are off by default, and there's no talk of a limit - but frankly, I've never turned them on, so I don't know what happens with the inevitable stack overflow. I suspect MSSQL would just kill your spid (or there is a recursive limit).
Of course, that just shows that the accepted answer has the wrong reason, not that it's incorrect. However, prior to INSTEAD OF triggers, I recall writing ON INSERT triggers that would merrily UPDATE the just inserted rows. This all worked fine, and as expected.
A quick test of DELETEing the just inserted row also works:
CREATE TABLE Test ( Id int IDENTITY(1,1), Column1 varchar(10) )
GO
CREATE TRIGGER trTest ON Test
FOR INSERT
AS
SET NOCOUNT ON
DELETE FROM Test WHERE Column1 = 'ABCDEF'
GO
INSERT INTO Test (Column1) VALUES ('ABCDEF')
--SCOPE_IDENTITY() should be the same, but doesn't exist in SQL 7
PRINT @@IDENTITY --Will print 1. Run it again, and it'll print 2, 3, etc.
GO
SELECT * FROM Test --No rows
GO
You have something else going on here.
From the CREATE TRIGGER documentation:
deleted and inserted are logical (conceptual) tables. They are structurally similar to the table on which the trigger is defined, that is, the table on which the user action is attempted, and hold the old values or new values of the rows that may be changed by the user action. For example, to retrieve all values in the deleted table, use: SELECT * FROM deleted
So that at least gives you a way of seeing the new data.
I can't see anything in the docs which specifies that you won't see the inserted data when querying the normal table though...
I found this reference:
create trigger myTrigger
on SomeTable
for insert
as
if (select count(*)
    from inserted
    where IsNumeric(SomeField) = 1) <> 0
/* Cancel the insert and print a message. */
begin
    rollback transaction
    print 'You can''t do that!'
end
/* Otherwise, allow it. */
else
    print 'Added successfully.'
I haven't tested it, but logically it looks like it should do what you're after... rather than deleting the inserted data, it prevents the insertion completely, so you never have to undo the insert. It should perform better and should therefore ultimately handle a higher load with more ease.
Edit: Of course, there is the potential that if the insert happened inside an otherwise valid transaction, the whole transaction could be rolled back, so you would need to take that scenario into account and determine whether the insertion of an invalid data row would constitute a completely invalid transaction...
Is it possible the INSERT is valid, but that a separate UPDATE is done afterwards that is invalid but wouldn't fire the trigger?
The techniques outlined above describe your options pretty well. But what are the users seeing? I can't imagine a basic conflict like this, between you and whoever is responsible for the software, not ending up in confusion and antagonism with the users.
I'd do everything I could to find some other way out of the impasse - because other people could easily see any change you make as escalating the problem.
EDIT:
I'll score my first "undelete" and admit to posting the above when this question first appeared. I of course chickened out when I saw that it was from JOEL SPOLSKY. But it looks like it landed somewhere near. Don't need votes, but I'll put it on the record.
IME, triggers are so seldom the right answer for anything other than fine-grained integrity constraints outside the realm of business rules.
MS-SQL has a setting to prevent recursive trigger firing. This is configured via the sp_configure stored procedure (nested triggers) or the database's RECURSIVE_TRIGGERS option, where you can turn recursive or nested triggers on or off.
In this case, if you turn off recursive triggers, it would be possible to join to the record from the inserted table via the primary key and make changes to the record.
In the specific case in the question it is not really a problem, because the result is to delete the record, which won't re-fire this particular trigger, but in general that could be a valid approach. We implemented optimistic concurrency this way.
The code for your trigger that could be used in this way would be:
ALTER TRIGGER myTrigger
ON someTable
AFTER INSERT
AS BEGIN
    DELETE someTable
    FROM someTable
    INNER JOIN inserted ON inserted.primarykey = someTable.primarykey
    WHERE ISNUMERIC(inserted.someField) = 1
END
Your "trigger" is doing something that a "trigger" is not suppose to be doing. You can simple have your Sql Server Agent run
DELETE FROM someTable
WHERE ISNUMERIC(someField) = 1
every second or so. While you're at it, how about writing a nice little SP to stop the programming folk from inserting errors into your table? One good thing about SPs is that the parameters are type safe.
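As a sketch of that idea (the procedure name, parameter, and column type are invented; adjust to the real schema), a typed insert procedure could refuse all-numeric values up front:
CREATE PROCEDURE dbo.InsertSomeTable
    @someField varchar(100)   -- hypothetical column type
AS
BEGIN
    IF ISNUMERIC(@someField) = 1
    BEGIN
        RAISERROR('someField must not be all numeric', 16, 1);
        RETURN;
    END

    INSERT INTO someTable (someField) VALUES (@someField);
END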
I stumbled across this question looking for details on the sequence of events during an insert statement & trigger. I ended up coding some brief tests to confirm how SQL 2016 (EXPRESS) behaves - and thought it would be appropriate to share as it might help others searching for similar information.
Based on my test, it is possible to select data from the "inserted" table and use that to update the inserted data itself. And, of interest to me, the inserted data is not visible to other queries until the trigger completes, at which point the final result is visible (at least as best as I could tell). I didn't test this for recursive triggers, etc. (I would expect a nested trigger would have full visibility of the inserted data in the table, but that's just a guess).
For example - assuming we have the table "table" with an integer field "field" and primary key field "pk" and the following code in our insert trigger:
select @value = field, @pk = pk from inserted
update table set field = @value + 1 where pk = @pk
waitfor delay '00:00:15'
If we insert a row with the value 1 for "field", the row will end up with the value 2. Furthermore - if I open another window in SSMS and try:
select * from table where pk = @pk
where @pk is the primary key I originally inserted, the query will return nothing until the 15 seconds expire and will then show the updated value (field = 2).
I was interested in what data is visible to other queries while the trigger is executing (apparently no new data). I tested with an added delete as well:
select @value = field, @pk = pk from inserted
update table set field = @value + 1 where pk = @pk
delete from table where pk = @pk
waitfor delay '00:00:15'
Again, the insert took 15sec to execute. A query executing in a different session showed no new data - during or after execution of the insert + trigger (although I would expect any identity would increment even if no data appears to be inserted).
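For anyone who wants to reproduce this, here is a self-contained sketch of the test described above; the table, column, and trigger names are invented stand-ins for the "table"/"field"/"pk" placeholders used in the description:
CREATE TABLE dbo.TriggerTest (pk int PRIMARY KEY, field int);
GO
CREATE TRIGGER trTriggerTest ON dbo.TriggerTest AFTER INSERT AS
BEGIN
    DECLARE @value int, @pk int;
    SELECT @value = field, @pk = pk FROM inserted;    -- single-row test only
    UPDATE dbo.TriggerTest SET field = @value + 1 WHERE pk = @pk;
    WAITFOR DELAY '00:00:15';                         -- hold the trigger (and transaction) open
END;
GO
INSERT INTO dbo.TriggerTest (pk, field) VALUES (1, 1);
-- In a second session, during those 15 seconds:
--   SELECT * FROM dbo.TriggerTest WHERE pk = 1;
-- Depending on isolation settings it either blocks or returns no row until the
-- trigger finishes, after which it shows field = 2.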
