Ok - so the answer is probably 'everything is within a transaction in SQLServer'. But here's my problem.
I have a stored procedure which returns data based on one input parameter. This is being called from an external service which I believe is Java-based.
I have a log-table to see how long each hit to this proc is taking. Kind of like this (simplified)...
TABLE log (ID INT IDENTITY(1,1) PRIMARY KEY, Param VARCHAR(10), [In] DATETIME, [Out] DATETIME)
PROC troublesome
(@param VARCHAR(10))
BEGIN
--log the 'in' time and keep the ID
DECLARE @logid INT
INSERT log ([In], Param) VALUES (GETDATE(), @param)
SET @logid = SCOPE_IDENTITY()
SELECT <some stuff from another table based on @param>
--set the 'out' time
UPDATE log SET [Out] = GETDATE() WHERE ID = @logid
END
So far so easily-criticised by SQL fascists.
Anyway - when I call this, e.g. troublesome 'SOMEPARAM', I get the result of the SELECT back, and the log table gets a nice new row with SOMEPARAM and the in and out timestamps.
Now - when I watch this table, even though no new rows appear, if I generate a row myself I can see that the ID has skipped many places. I guess this is caused by the external client code hitting the proc: they are getting the result of the SELECT, but I am not getting their log data.
This suggests to me they are wrapping their client call in a TRAN and rolling it back. Which is one thing that would cause this behaviour. I want to know...
If there is a way I can FORCE the write of the log even if it is contained within a transaction over which I have no control and which is subsequently rolled back (seems unlikely)
If there is a way I can determine from within this proc if it is executing within a transaction (and perhaps raise an error)
If there are other possible explanations for the ID skipping (and it's skipping a lot - like 1000 places in an hour. So I'm sure it's caused by the client code, as they are also reporting successfully retrieving the results of the SELECT.)
Thanks!
For the 'why rows are skipped' part of the question, your assumption sounds plausible to me.
For how to force the write even within a transaction, I don't know.
As for how to discover whether the SP is running inside a transaction, how about the following:
SELECT @@TRANCOUNT
I tested this out in a SQL batch and it seems to be OK. Maybe it can take you a bit further.
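For example, a minimal sketch of that check at the top of the procedure (assuming you would rather reject such calls than silently lose the log rows):
IF @@TRANCOUNT > 0
BEGIN
    -- the caller has an open transaction; any log rows written here could be rolled back
    RAISERROR('troublesome must not be called inside an open transaction', 16, 1)
    RETURN
END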
We have a process that is causing dirty read errors. So, I was thinking of redesigning it as a queue with a single process to go through the queue.
My idea was to create a table that various processes could insert into. Then, one process actually processes the records in the table. But, we need real-time results from the processing method. So, while I was thinking of scheduling it to run every couple of seconds, that may not be fast enough. I don't want a user waiting several seconds for a result.
Essentially, I was thinking of using an infinite loop so that one stored procedure is constantly running, and that stored procedure creates transactions to perform updates.
It could be something like:
WHILE 1=1
BEGIN
--Check for new records
IF NewRecordsExist
BEGIN
--Mark new records as "in process"
BEGIN TRANSACTION
--Process records
--If errors, Rollback
--Otherwise Commit transaction
END
END
But, I don't want SQL Server to get overburdened by this one method. Basically, I want this running in the background all the time, not eating up processor power. Unless there is a lot for it to do. Then, I want it doing its work. Is there a better design pattern for this? Are stored procedures with infinite loops thread-safe? I use this pattern all the time in Windows Processes, but this particular task is not appropriate for a Windows Process.
Update: We have transaction locking set. That is not the issue. We are trying to set aside items that are reserved for orders. So, we have a stored procedure start a transaction, check what is available, and then update the table to mark what is reserved.
The problem is that when two different users attempt to reserve the same product at the same time, the first process checks availability, finds the product available, and starts to reserve it. But the second process cannot see what the first process is doing (we have transaction locking set), so it has no idea that another process is trying to reserve the items. It sees the items as still available and also goes to reserve them.
We considered application locking, but we are worried about waits, delays, etc. So, another solution we came up with is one process that handles reservations in a queue. It is first come, first served. Only one process will ever be reading the queue at a time. Different processes can add to the queue, but we no longer need to worry about two processes trying to reserve the same product at the same time. Only one process will be doing the reservations. I was hoping to do this all in SQL, but that may not be possible.
Disclaimer: This may be an option, but the recommendations for using a Service Broker to serialize requests are likely the better solutions.
If you can't use a transaction, but need your stored procedure to return an immediate result, there are ways to safely update a record in a single statement.
DECLARE @ProductId INT = 123
DECLARE @Quantity INT = 5
UPDATE Inventory
SET Available = Available - @Quantity
WHERE ProductId = @ProductId
AND Available >= @Quantity
IF @@ROWCOUNT > 0
BEGIN
-- Success
END
Under the covers, there is still a transaction occurring accompanied by a lock, but it just covers this one statement.
If you need to update multiple records (reserve multiple product IDs) in one statement, you can use the OUTPUT clause to capture which records were successfully updated, which you can then compare with the original request.
DECLARE @Request TABLE (ProductId INT, Quantity INT)
DECLARE @Result TABLE (ProductId INT, Quantity INT)
INSERT @Request VALUES (123, 5), (456, 1)
UPDATE I
SET Available = Available - R.Quantity
OUTPUT R.ProductId, R.Quantity INTO @Result
FROM @Request R
JOIN Inventory I
ON I.ProductId = R.ProductId
AND I.Available >= R.Quantity
IF (SELECT COUNT(*) FROM @Request) = (SELECT COUNT(*) FROM @Result)
BEGIN
-- Success
END
ELSE BEGIN
-- Partial or fail. Need to undo those that may have been updated.
END
In any case, you need to thoroughly think through your error handling and undo scenarios.
If you store reservations separate from inventory and define "Available" as "Inventory.OnHand - SUM(Reserved.Quantity)", this approach is not an option.
(Comments as to why this is a bad approach will likely follow.)
I think the issue I'm having is a concurrency issue, but I guess I don't understand SQL Server transactions well enough to be sure of what to do.
I have a stored procedure which inserts a new row into one table, then uses the ID created by that INSERT to create a record in another table, which has a foreign key constraint on RID:
INSERT INTO tbl_Registrations ...
DECLARE @RID int;
--get newly-created registration ID
SET @RID=(SELECT MAX(RID) FROM tbl_Registrations WHERE ...);
INSERT INTO tbl_RegistrationCertificates VALUES (@RID, 9);
This stored procedure is called from a user interaction on a website, and every time I've tested it, it works fine. However, every once in a while, I'll get an email from our error handler with the following exception:
System.Data.SqlClient.SqlException (0x80131904): The INSERT statement conflicted with the FOREIGN KEY constraint "FK_tbl_RegistrationCertificates_tbl_Registrations".
The conflict occurred in ... table "dbo.tbl_Registrations", column 'RID'.
The statement has been terminated.
But, either my users are persistent in clicking the "update" button until they see a change, or the transaction is going through even though the message says The statement has been terminated, because, using other information from the email, I can see that both tables have actually been updated.
I know I'm not giving a ton of information here, but the big questions that I can't find answers to are:
Does the semicolon on the end of a statement do anything in T-SQL? I just noticed that there's not a semicolon at the end of the INSERT statement. (I normally put them out of habit, but I'm working on systems that other people built.)
Does this look like a concurrency issue to people who know more about SQL Server than I do?
If it does look like a concurrency issue, can I use transactions to mitigate it, and how do I do that?
EDIT: I just realized the SP isn't what's throwing the error after all. There's another place in the website that's doing an insert without checking to see that the RID exists. Still, I'm glad I asked the questions because I still wouldn't have known the answers otherwise.
T-SQL treats the semicolon as a statement terminator, but for most statements it is still optional (a few constructs, such as a CTE's WITH or MERGE, do require the preceding statement to be terminated). More in-depth discussion on that point can be found in this question.
As for the foreign key error, the stored procedure as you've presented it could use some improvement for a couple of reasons.
First, it's highly likely that you should be using a transaction here. If you don't want a registration record to be added unless a registration certificate record is also added, then the entire operation should be in a single transaction.
Second, SELECTing the MAX value of an identity insert column can easily be a source of concurrency issues if you get simultaneous calls to the stored procedure. Imagine a scenario where thread A INSERTS but can't run its SELECT MAX(RID) before thread B INSERTs. Now both thread A and B will try to INSERT the same value to tbl_RegistrationCertificates.
The SCOPE_IDENTITY() function will help with this by giving you the last identity value generated within the current scope (in this case, this stored procedure).
Some updated code:
BEGIN TRANSACTION
INSERT INTO tbl_Registrations ...
DECLARE @RID int;
--get newly-created registration ID
SELECT @RID = SCOPE_IDENTITY()
INSERT INTO tbl_RegistrationCertificates VALUES (@RID, 9);
COMMIT TRANSACTION
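If you also want failures to reach the site's error handler while guaranteeing the transaction is cleaned up, a hedged variation using TRY/CATCH might look like this (assuming SQL Server 2012 or later for THROW; use RAISERROR on older versions):
BEGIN TRY
    BEGIN TRANSACTION
    -- INSERT INTO tbl_Registrations ...   (same INSERT as above; the column list is omitted in the question)
    DECLARE @RID int
    SELECT @RID = SCOPE_IDENTITY()
    INSERT INTO tbl_RegistrationCertificates VALUES (@RID, 9)
    COMMIT TRANSACTION
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    THROW;   -- re-raise so the website's error handler still sees the failure
END CATCH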
I'm trying to write to a log file inside a transaction so that the log survives even if the transaction is rolled back.
--start code
begin tran
insert [something] into dbo.logtable
[[main code here]]
rollback -- or commit, depending on how the main code went
-- end code
You could say "just do the log before the transaction starts", but that is not so easy, because the transaction starts before this s-proc is run (i.e. the code is part of a bigger transaction).
So, in short: is there a way to write a special statement inside a transaction that is not part of the transaction? I hope my question makes sense.
Use a table variable (@temp) to hold the log info. Table variables survive a transaction rollback.
See this article.
I do this one of two ways, depending on my needs at the time. Both involve using variables, which retain their values following a rollback.
1) Declare a @Log varchar(max) variable and use this: SET @Log = ISNULL(@Log + '; ', '') + 'Your new log info here'. Keep appending to this as you go through the transaction, then insert the @Log value into the log after the commit or the rollback as necessary. I'll usually only insert the @Log value into the real log table when there is an error (in the CATCH block) or if I'm trying to debug a problem.
2) Declare a @LogTable table (RowID int identity(1,1) primary key, RowValue varchar(5000)) variable and insert into it as you progress through the transaction. I like using the OUTPUT clause to insert the actual IDs (and other columns with messages, like 'DELETE item 1234') of rows used in the transaction into this table. I will insert this table into the actual log table after the commit or the rollback as necessary.
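Putting option 2 together, a minimal sketch of the pattern (the final target table and its LogMessage column are assumptions; the question's dbo.logtable will have its own columns):
DECLARE @LogTable TABLE (RowID int IDENTITY(1,1) PRIMARY KEY, RowValue varchar(5000))
BEGIN TRY
    BEGIN TRANSACTION
    INSERT @LogTable (RowValue) VALUES ('starting main work')
    -- [[main code here]], appending rows to @LogTable as you go
    COMMIT TRANSACTION
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION   -- @LogTable keeps its rows through the rollback
END CATCH
-- persist whatever was captured, whether or not the transaction survived
INSERT dbo.logtable (LogMessage)   -- LogMessage is an assumed column name
SELECT RowValue FROM @LogTable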
If the parent transaction rolls back, the logging data will roll back as well - SQL Server does not support proper nested transactions. One possibility is to use a CLR stored procedure to do the logging. This can open its own connection to the database outside the transaction and insert and commit the log data.
Log output to a table, use a time delay, and use WITH(NOLOCK) to see it.
It looks like @arvid wanted to debug the operation of the stored procedure, and is able to alter the stored proc.
The c# code starts a transaction, then calls a s-proc, and at the end it commits or rolls back the transaction. I only have easy access to the s-proc
I had a similar situation. So I modified the stored procedure to log my desired output to a table. Then I put a time delay at the end of the stored procedure
WAITFOR DELAY '00:00:12'; -- 12 second delay, adjust as desired
and in another SSMS window, quickly read the table with the READ UNCOMMITTED isolation level (the WITH(NOLOCK) below):
SELECT * FROM dbo.NicksLogTable WITH(NOLOCK);
It's not the solution you want if you need a permanent record of the logs (edit: including where transactions get rolled back), but it suits my purpose to be able to debug the code in a temporary fashion, especially when linked servers, xp_cmdshell, and creating file tables are all disabled :-(
Apologies for bumping a 12-year-old thread, but Microsoft deserves an equal caning for not implementing nested transactions or autonomous transactions in that time period.
If you want to emulate nested-transaction behaviour you can use savepoints (SAVE TRANSACTION):
begin transaction a
create table #a (i int)
select * from #a            -- #a exists (empty)
save transaction b          -- set a savepoint
create table #b (i int)
select * from #a
select * from #b
rollback transaction b      -- roll back to the savepoint: #b is gone, #a survives
select * from #a
rollback transaction a      -- roll back the outer transaction: #a is gone too
In SQL Server, if you want a ‘sub-transaction’ you should use SAVE TRANSACTION xxxx, which works like an Oracle savepoint.
Here is my scenario: we have a database, let's call it Logging, with a table that holds records from Log4Net (via MSMQ). The db's recovery mode is set to Simple: we don't care about the transaction logs -- they can roll over.
We have a job that uses data from sp_spaceused to determine if we've met a certain size threshold. If the threshold is exceeded, we determine how many rows need to be deleted to bring the size down to x percent of that threshold. (As an aside, I'm using exec sp_spaceused MyLogTable, TRUE to get the number of rows and a rough approximation of their average size, although I'm not convinced that's the best way to go about it. But that's a different issue.)
I then try to chunk deletes (say, 5000 at a time) by looping a call to a sproc that basically does this:
DELETE TOP (@RowsToDelete) FROM [dbo].[MyLogTable]
until I've deleted what needs to be deleted.
Here's the issue: If I have a lot of rows to delete, the transaction log file fills up. I can watch it grow by running
dbcc sqlperf (logspace)
What puzzles me is that, when the job fails, ALL deleted rows get rolled back. In other words, it appears all the chunks are getting wrapped (somehow) in an implicit transaction.
I've tried expressly setting implicit transactions off, wrapping each DELETE statement in a BEGIN and COMMIT TRAN, but to no avail: either all deleted chunks succeed, or none at all.
I know the simple answer is, Make your log file big enough to handle the largest possible number of records you'd ever delete, but still, why is this being treated as a single transaction?
Sorry if I missed something easy, but I've looked at a lot of posts regarding log file growth, recovery modes, etc., and I can't figure this out.
One other thing: Once the job has failed, the log file stays up at around 95 - 100 percent full for a while before it drops back. However, if I run
checkpoint
dbcc dropcleanbuffers
it drops right back down to about 5 percent utilization.
TIA.
The log file in the simple recovery model is truncated automatically at every checkpoint, generally speaking. You can invoke CHECKPOINT manually as you do at the end of the loop, but you can also do it every iteration. The frequency of automatic checkpoints is determined by SQL Server based on the recovery interval setting.
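For instance, a rough sketch of a chunked delete that issues a CHECKPOINT after every batch (the batch size and target row count are illustrative):
DECLARE @BatchSize int = 5000, @TargetRows int = 100000, @Deleted int = 0
WHILE @Deleted < @TargetRows
BEGIN
    DELETE TOP (@BatchSize) FROM [dbo].[MyLogTable]
    IF @@ROWCOUNT = 0 BREAK          -- nothing left to delete
    SET @Deleted = @Deleted + @BatchSize
    CHECKPOINT                       -- simple recovery: lets the log space be reused between batches
END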
As far as the 'all deletes are rolled back' part, I don't see any explanation other than an external transaction. Can you post the entire code that cleans up the log? How do you invoke this code?
What is your setting of implicit transactions?
Hm.. if the log grows and doesn't truncate automatically, it may also indicate that there is a transaction running outside of the loop. Can you select @@TRANCOUNT before your loop and perhaps with each iteration to find out what's going on?
Well, I tried several things, but still all deletes get rolled back. I added printing of @@TRANCOUNT both before and after the delete and I get zero as the count. Yet, on failure, all deletes are rolled back .... I added SET IMPLICIT_TRANSACTIONS OFF in several places (including within my initial call from Query Analyzer), but that does not seem to help. This is the body of the stored procedure that is being called (I have set @RowsToDelete to 5000 and 8000):
SET NOCOUNT ON;
print N'@@TRANCOUNT PRIOR TO DELETE: ' + CAST(@@TRANCOUNT AS VARCHAR(20));
set implicit_transactions off;
WITH RemoveRows AS
(
SELECT ROW_NUMBER() OVER(ORDER BY [Date] ASC) AS RowNum
FROM [dbo].[Log4Net]
)
DELETE FROM RemoveRows
WHERE RowNum < @RowsToDelete + 1
print N'@@TRANCOUNT AFTER DELETE: ' + CAST(@@TRANCOUNT AS VARCHAR(20));
It is called from this t-sql:
WHILE @RowsDeleted < @RowsToDelete
BEGIN
EXEC [dbo].[DeleteFromLog4Net] @RowsToDelete
SET @RowsDeleted = @RowsDeleted + @RowsToDelete
SET @loops = @loops + 1
print 'Loop: ' + cast(@loops as varchar(10))
END
I have to admit I am puzzled. I am not a DB guru, but I thought I understood enough to figure this out....
Consider this trigger:
ALTER TRIGGER myTrigger
ON someTable
AFTER INSERT
AS BEGIN
DELETE FROM someTable
WHERE ISNUMERIC(someField) = 1
END
I've got a table, someTable, and I'm trying to prevent people from inserting bad records. For the purpose of this question, a bad record has a field "someField" that is all numeric.
Of course, the right way to do this is NOT with a trigger, but I don't control the source code... just the SQL database. So I can't really prevent the insertion of the bad row, but I can delete it right away, which is good enough for my needs.
The trigger works, with one problem... when it fires, it never seems to delete the just-inserted bad record... it deletes any OLD bad records, but it doesn't delete the just-inserted bad record. So there's often one bad record floating around that isn't deleted until somebody else comes along and does another INSERT.
Is this a problem in my understanding of triggers? Are newly-inserted rows not yet committed while the trigger is running?
Triggers cannot modify the changed data (Inserted or Deleted); otherwise you could get infinite recursion as the changes invoked the trigger again. One option would be for the trigger to roll back the transaction.
Edit: The reason for this is that the standard for SQL is that inserted and deleted rows cannot be modified by the trigger. The underlying reason is that the modifications could cause infinite recursion. In the general case, this evaluation could involve multiple triggers in a mutually recursive cascade. Having a system intelligently decide whether to allow such updates is computationally intractable, essentially a variation on the halting problem.
The accepted solution to this is not to permit the trigger to alter the changing data, although it can roll back the transaction.
create table Foo (
FooID int
,SomeField varchar (10)
)
go
create trigger FooInsert
on Foo after insert as
begin
delete inserted
where isnumeric (SomeField) = 1
end
go
Msg 286, Level 16, State 1, Procedure FooInsert, Line 5
The logical tables INSERTED and DELETED cannot be updated.
Something like this will roll back the transaction.
create table Foo (
FooID int
,SomeField varchar (10)
)
go
create trigger FooInsert
on Foo for insert as
if exists (
select 1
from inserted
where isnumeric (SomeField) = 1) begin
rollback transaction
end
go
insert Foo values (1, '1')
Msg 3609, Level 16, State 1, Line 1
The transaction ended in the trigger. The batch has been aborted.
You can reverse the logic. Instead of deleting an invalid row after it has been inserted, write an INSTEAD OF trigger to insert only if you verify the row is valid.
CREATE TRIGGER mytrigger ON sometable
INSTEAD OF INSERT
AS BEGIN
DECLARE @isnum TINYINT;
SELECT @isnum = ISNUMERIC(somefield) FROM inserted;
IF (@isnum = 0)
INSERT INTO sometable SELECT * FROM inserted;
ELSE
RAISERROR('somefield must not be numeric', 16, 1)
WITH SETERROR;
END
If your application doesn't want to handle errors (as Joel says is the case in his app), then don't RAISERROR. Just make the trigger silently not do an insert that isn't valid.
I ran this on SQL Server Express 2005 and it works. Note that INSTEAD OF triggers do not cause recursion if you insert into the same table for which the trigger is defined.
Here's my modified version of Bill's code:
CREATE TRIGGER mytrigger ON sometable
INSTEAD OF INSERT
AS BEGIN
INSERT INTO sometable SELECT * FROM inserted WHERE ISNUMERIC(somefield) = 0;
INSERT INTO sometableRejects SELECT * FROM inserted WHERE ISNUMERIC(somefield) = 1;
END
This lets the insert always succeed, and any bogus records get thrown into your sometableRejects where you can handle them later. It's important to make your rejects table use nvarchar fields for everything - not ints, tinyints, etc - because if they're getting rejected, it's because the data isn't what you expected it to be.
This also solves the multiple-record insert problem, which will cause Bill's trigger to fail. If you insert ten records simultaneously (like if you do a select-insert-into) and just one of them is bogus, Bill's trigger would have flagged all of them as bad. This handles any number of good and bad records.
I used this trick on a data warehousing project where the inserting application had no idea whether the business logic was any good, and we did the business logic in triggers instead. Truly nasty for performance, but if you can't let the insert fail, it does work.
I think you can use a CHECK constraint - it is exactly what they were invented for.
ALTER TABLE someTable
ADD CONSTRAINT someField_check CHECK (ISNUMERIC(someField) = 0);
My previous answer (also right, but maybe a bit overkill):
I think the right way is to use INSTEAD OF trigger to prevent the wrong data from being inserted (rather than deleting it post-factum)
UPDATE: DELETE from a trigger works on both MSSql 7 and MSSql 2008.
I'm no relational guru, nor a SQL standards wonk. However - contrary to the accepted answer - MSSQL deals just fine with both recursive and nested trigger evaluation. I don't know about other RDBMSs.
The relevant options are 'recursive triggers' and 'nested triggers'. Nested triggers are limited to 32 levels, and default to 1. Recursive triggers are off by default, and there's no talk of a limit - but frankly, I've never turned them on, so I don't know what happens with the inevitable stack overflow. I suspect MSSQL would just kill your spid (or there is a recursive limit).
Of course, that just shows that the accepted answer has the wrong reason, not that it's incorrect. However, prior to INSTEAD OF triggers, I recall writing ON INSERT triggers that would merrily UPDATE the just inserted rows. This all worked fine, and as expected.
A quick test of DELETEing the just inserted row also works:
CREATE TABLE Test ( Id int IDENTITY(1,1), Column1 varchar(10) )
GO
CREATE TRIGGER trTest ON Test
FOR INSERT
AS
SET NOCOUNT ON
DELETE FROM Test WHERE Column1 = 'ABCDEF'
GO
INSERT INTO Test (Column1) VALUES ('ABCDEF')
--SCOPE_IDENTITY() should be the same, but doesn't exist in SQL 7
PRINT @@IDENTITY --Will print 1. Run it again, and it'll print 2, 3, etc.
GO
SELECT * FROM Test --No rows
GO
You have something else going on here.
From the CREATE TRIGGER documentation:
deleted and inserted are logical (conceptual) tables. They are structurally similar to the table on which the trigger is defined, that is, the table on which the user action is attempted, and hold the old values or new values of the rows that may be changed by the user action. For example, to retrieve all values in the deleted table, use: SELECT * FROM deleted
So that at least gives you a way of seeing the new data.
I can't see anything in the docs which specifies that you won't see the inserted data when querying the normal table though...
I found this reference:
create trigger myTrigger
on SomeTable
for insert
as
if (select count(*)
    from inserted
    where IsNumeric(SomeField) = 1) <> 0
/* Cancel the insert and print a message.*/
begin
rollback transaction
print 'You can''t do that!'
end
/* Otherwise, allow it. */
else
print 'Added successfully.'
I haven't tested it, but logically it looks like it should do what you're after: rather than deleting the inserted data, prevent the insertion completely, thus not requiring you to undo the insert. It should perform better and should therefore ultimately handle a higher load with more ease.
Edit: Of course, there is the potential that, if the insert happened inside an otherwise valid transaction, the whole transaction could be rolled back, so you would need to take that scenario into account and determine whether the insertion of an invalid data row should constitute a completely invalid transaction...
Is it possible the INSERT is valid, but that a separate UPDATE is done afterwards that is invalid but wouldn't fire the trigger?
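If so, one option might be to have the same cleanup fire on updates as well - a sketch only, reusing the primarykey column name from the earlier example:
ALTER TRIGGER myTrigger
ON someTable
AFTER INSERT, UPDATE
AS BEGIN
DELETE someTable
FROM someTable
INNER JOIN inserted ON inserted.primarykey = someTable.primarykey
WHERE ISNUMERIC(inserted.someField) = 1
END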
The techniques outlined above describe your options pretty well. But what are the users seeing? I can't imagine how a basic conflict like this between you and whoever is responsible for the software won't end up in confusion and antagonism with the users.
I'd do everything I could to find some other way out of the impasse - because other people could easily see any change you make as escalating the problem.
EDIT:
I'll score my first "undelete" and admit to posting the above when this question first appeared. I of course chickened out when I saw that it was from JOEL SPOLSKY. But it looks like it landed somewhere near. Don't need votes, but I'll put it on the record.
IME, triggers are seldom the right answer for anything other than fine-grained integrity constraints outside the realm of business rules.
MS-SQL has settings to prevent recursive trigger firing: nested triggers are a server-level option configured via the sp_configure stored procedure, and recursive triggers are a database-level option (ALTER DATABASE ... SET RECURSIVE_TRIGGERS ON/OFF).
In this case, if you turn off recursive triggers, it would be possible to join to the record from the inserted table via the primary key and make changes to the record.
In the specific case in the question, it is not really a problem, because the result is to delete the record, which won't refire this particular trigger, but in general that could be a valid approach. We implemented optimistic concurrency this way.
The code for your trigger that could be used in this way would be:
ALTER TRIGGER myTrigger
ON someTable
AFTER INSERT
AS BEGIN
DELETE someTable
FROM someTable
INNER JOIN inserted ON inserted.primarykey = someTable.primarykey
WHERE ISNUMERIC(inserted.someField) = 1
END
Your "trigger" is doing something that a "trigger" is not supposed to be doing. You can simply have your SQL Server Agent run
DELETE FROM someTable
WHERE ISNUMERIC(someField) = 1
every 1 second or so. While you're at it, how about writing a nice little SP to stop the programming folk from inserting errors into your table? One good thing about SPs is that the parameters are type-safe.
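Such an SP might look roughly like this (a sketch only; the procedure name, column list, and parameter type are assumptions):
CREATE PROCEDURE dbo.usp_InsertSomeTable
    @someField varchar(50)            -- typed parameter; the real type/length is an assumption
AS
BEGIN
    IF ISNUMERIC(@someField) = 1
    BEGIN
        RAISERROR('someField must not be all numeric', 16, 1)
        RETURN
    END
    INSERT INTO someTable (someField) VALUES (@someField)
END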
I stumbled across this question looking for details on the sequence of events during an insert statement & trigger. I ended up coding some brief tests to confirm how SQL 2016 (EXPRESS) behaves - and thought it would be appropriate to share as it might help others searching for similar information.
Based on my test, it is possible to select data from the "inserted" table and use that to update the inserted data itself. And, of interest to me, the inserted data is not visible to other queries until the trigger completes, at which point the final result is visible (at least as best as I could test). I didn't test this for recursive triggers, etc. (I would expect a nested trigger would have full visibility of the inserted data in the table, but that's just a guess).
For example - assuming we have the table "table" with an integer field "field" and primary key field "pk" and the following code in our insert trigger:
declare @value int, @pk int
select @value = field, @pk = pk from inserted
update [table] set field = @value + 1 where pk = @pk
waitfor delay '00:00:15'
If we insert a row with the value 1 for "field", the row will end up with the value 2. Furthermore - if I open another window in SSMS and try:
select * from [table] where pk = @pk
where @pk is the primary key I originally inserted, the query will be empty until the 15 seconds expire and will then show the updated value (field=2).
I was interested in what data is visible to other queries while the trigger is executing (apparently no new data). I tested with an added delete as well:
declare @value int, @pk int
select @value = field, @pk = pk from inserted
update [table] set field = @value + 1 where pk = @pk
delete from [table] where pk = @pk
waitfor delay '00:00:15'
Again, the insert took 15 seconds to execute. A query executing in a different session showed no new data - during or after execution of the insert + trigger (although I would expect any identity would increment even if no data appears to be inserted).