How can I make sure the following statements don't have a race condition?
IF NOT EXISTS (select col1 from Table1 where SomeId=@SomeId)
INSERT INTO Table1 values (@SomeId,...)
IF NOT EXISTS (select col1 from Table2 where SomeId=@SomeId)
INSERT INTO Table2 values (@SomeId,...)
Is this enough?
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRAN
IF NOT EXISTS (SELECT col1 FROM Table1 WITH (UPDLOCK) WHERE SomeId=@SomeId)
INSERT INTO Table1 VALUES (@SomeId,...)
COMMIT TRAN
BEGIN TRAN
IF NOT EXISTS (SELECT col1 FROM Table2 WITH (UPDLOCK) WHERE SomeId=@SomeId)
INSERT INTO Table2 VALUES (@SomeId,...)
COMMIT TRAN
Yes, that is enough. Setting the transaction isolation level to SERIALIZABLE will take key-range locks covering SomeId=@SomeId when you run your SELECT, which will prevent other processes from inserting rows with the same key (SomeId=@SomeId) while your transaction is running.
The WITH (UPDLOCK) hint will cause the SELECT to take an update lock on the selected row(s), if they exist. This will prevent other transactions from modifying those rows (if they existed at the time of the SELECT) while your transaction is running.
It doesn't look like you really need the WITH (UPDLOCK) hint, since you are committing the transaction right away if the record already exists. If you wanted to do something else before committing when the record does exist, you might need this hint, but as written it appears you do not.
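If you prefer not to change the session's isolation level, a commonly used equivalent is to put HOLDLOCK (which gives serializable semantics for just that table) alongside UPDLOCK on the existence check itself. A minimal sketch, reusing the table and parameter names from the question (the column list is abbreviated):
BEGIN TRAN
IF NOT EXISTS (SELECT col1 FROM Table1 WITH (UPDLOCK, HOLDLOCK) WHERE SomeId = @SomeId)
    INSERT INTO Table1 (SomeId /* , other columns */) VALUES (@SomeId /* , ... */)
COMMIT TRAN
The key-range lock taken by the hinted SELECT is held until COMMIT, so no other session can insert the same SomeId in between the check and the insert.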
A statement is a transaction
declare @v int = 11;
insert into iden (val)
select @v
where not exists (select 1 from iden with (UPDLOCK) where val = @v)
Related
I have the following statement in my code
INSERT INTO #TProductSales (ProductID, StockQTY, ETA1)
VALUES (@ProductID, @StockQTY, @ETA1)
I want to do something like:
IF @ProductID exists THEN
UPDATE #TProductSales
ELSE
INSERT INTO #TProductSales
Is there a way I can do this?
The pattern is (without error handling):
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
UPDATE #TProductSales SET StockQty = @StockQty, ETA1 = @ETA1
WHERE ProductID = @ProductID;
IF @@ROWCOUNT = 0
BEGIN
INSERT #TProductSales(ProductID, StockQTY, ETA1)
VALUES(@ProductID, @StockQTY, @ETA1);
END
COMMIT TRANSACTION;
You don't need to perform an additional read of the #temp table here. You're already doing that by trying the update. To protect from race conditions, you do the same as you'd protect any block of two or more statements that you want to isolate: you'd wrap it in a transaction with an appropriate isolation level (likely serializable here, though that all only makes sense when we're not talking about a #temp table, since that is by definition serialized).
You're not any further ahead by adding an IF EXISTS check (and you would need to add locking hints to make that safe / serializable anyway), but you could be further behind, depending on how many times you update existing rows vs. insert new. That could add up to a lot of extra I/O.
People will probably tell you to use MERGE (which is actually multiple operations behind the scenes, and also needs to be protected with SERIALIZABLE), but I urge you not to. I and others lay out why here:
Use Caution with SQL Server's MERGE Statement
So, you want to use MERGE, eh?
For a multi-row pattern (like a TVP), I would handle this quite the same way, but there isn't a practical way to avoid the second read like you can with the single-row case. And no, MERGE doesn't avoid it either.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
UPDATE t SET t.col = tvp.col
FROM dbo.TargetTable AS t
INNER JOIN @TVP AS tvp
ON t.ProductID = tvp.ProductID;
INSERT dbo.TargetTable(ProductID, othercols)
SELECT ProductID, othercols
FROM @TVP AS tvp
WHERE NOT EXISTS
(
SELECT 1 FROM dbo.TargetTable
WHERE ProductID = tvp.ProductID
);
COMMIT TRANSACTION;
Well, I guess there is a way to do it, but I haven't tested this thoroughly:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
DECLARE @exist TABLE(ProductID int PRIMARY KEY);
UPDATE t SET t.col = tvp.col
OUTPUT deleted.ProductID INTO @exist
FROM dbo.TargetTable AS t
INNER JOIN @tvp AS tvp
ON t.ProductID = tvp.ProductID;
INSERT dbo.TargetTable(ProductID, othercols)
SELECT ProductID, othercols
FROM @tvp AS t
WHERE NOT EXISTS
(
SELECT 1 FROM @exist
WHERE ProductID = t.ProductID
);
COMMIT TRANSACTION;
In either case, you perform the update first, otherwise you'll update all the rows you just inserted, which would be wasteful.
I personally like to make a table variable or temp table to store the values and then do my update/insert, but I'm normally doing mass inserts/updates. The nice thing about this pattern is that it works for multiple records without redundancy in the inserts/updates.
DECLARE @Tbl TABLE (
StockQty INT,
ETA1 DATETIME,
ProductID INT
)
INSERT INTO @Tbl (StockQty, ETA1, ProductID)
SELECT @StockQty AS StockQty, @ETA1 AS ETA1, @ProductID AS ProductID
UPDATE tps
SET StockQty = tmp.StockQty
, ETA1 = tmp.ETA1
FROM #TProductSales tps
INNER JOIN @Tbl tmp ON tmp.ProductID = tps.ProductID
INSERT INTO #TProductSales(StockQty,ETA1,ProductID)
SELECT
tmp.StockQty,tmp.ETA1,tmp.ProductID
FROM @Tbl tmp
LEFT JOIN #TProductSales tps ON tps.ProductID=tmp.ProductID
WHERE tps.ProductID IS NULL
You could use something like:
IF EXISTS( SELECT NULL FROM #TProductSales WHERE ProductID = @ProductID)
UPDATE #TProductSales SET StockQTY = @StockQTY, ETA1 = @ETA1 WHERE ProductID = @ProductID
ELSE
INSERT INTO #TProductSales(ProductID,StockQTY,ETA1) VALUES(@ProductID,@StockQTY,@ETA1)
I have written a complex stored procedure that moves records between several tables based on complex logic. I have checked each little piece of the logic carefully but I want to have maximal confidence in the code. Is there a way to declare a logical relationship that must hold between the beginning and end of the SP? If these conditions are not met, I imagine that the SP will be rolled back.
Specifically, I want to declare that records in table A that are picked out by a set of logical conditions will be in table B (and not in table A) at the end of the SP, and that records not picked out by that set of logical conditions will still be in table A at the end of the SP. I use a boolean function to pick out the records in A that match the conditions.
I know that I can test some of these things with NUnit, but I am wondering if there is a way to declare and enforce this sort of logic within T-SQL itself.
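One way to get that kind of guarantee in T-SQL itself is to assert the postcondition explicitly at the end of the procedure, inside the transaction, and roll back with an error if it does not hold. A hedged sketch (TableA, FieldN and SomeCriteria are placeholders standing in for the real tables and conditions):
BEGIN TRANSACTION;
-- ... the moves performed by the stored procedure ...

-- Postcondition: no record matching the criteria may remain in TableA.
IF EXISTS (SELECT 1 FROM TableA WHERE FieldN = SomeCriteria)
BEGIN
    ROLLBACK TRANSACTION;
    RAISERROR('Postcondition violated: matching rows are still present in TableA.', 16, 1);
    RETURN;
END
COMMIT TRANSACTION;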
If your records have a field containing a unique ID you can do a simple INSERT + SELECT from TableA into TableB and then a simple DELETE of the inserted original records.
For instance first insert all the records matching your selection criteria into TableB:
INSERT INTO TableB (uniqueID, Field1, Field2, FieldN)
SELECT uniqueID,
Field1,
Field2,
FieldN
FROM TableA
WHERE FieldN = SomeCriteria
And then delete from TableA all the records you just inserted into TableB, using the field uniqueID as the selection criterion to determine which records to delete:
DELETE TableA
WHERE uniqueID IN (SELECT uniqueID
FROM TableB)
If you put both statements in a single transaction with a couple error checks you should be protected in case something goes wrong while the two statements are executing:
BEGIN TRANSACTION
INSERT INTO TableB (uniqueID, Field1, Field2, FieldN)
SELECT uniqueID,
Field1,
Field2,
FieldN
FROM TableA
WHERE FieldN = SomeCriteria;
IF @@ERROR <> 0
BEGIN
ROLLBACK TRANSACTION
RETURN (@@ERROR)
END
DELETE TableA
WHERE uniqueID IN (SELECT uniqueID
FROM TableB);
IF @@ERROR <> 0
BEGIN
ROLLBACK TRANSACTION
RETURN (@@ERROR)
END
END
COMMIT TRANSACTION
If you do not have a single column which can uniquely identify each of your records, you can use EXISTS instead of IN for selecting the records to DELETE from TableA:
DELETE TableA
WHERE EXISTS (SELECT *
FROM TableB
WHERE TableA.field1 = TableB.field1
AND TableA.field2 = TableB.field2
AND TableA.FieldN = TableB.fieldn);
I am running the following command to delete rows in batches out of a large table (150 million rows):
DECLARE @RowCount int
WHILE 1=1
BEGIN
DELETE TOP (10000) t1
FROM table t1
INNER JOIN table2 t2 ON t2.PrimaryKey = t1.PrimaryKey
WHERE t1.YearProcessed <= 2007
SET @RowCount = @@ROWCOUNT
IF (@RowCount < 10000) BREAK
END
This table is HIGHLY used. The delete works, but it also locks some records, which throws errors to the user (not acceptable in the environment we're in).
How can I delete older records without causing locks? Should I reduce the size of the batch from 10000 records to 1000? How will this affect log sizes (we have very little hard drive space left for large log growth)?
Any suggestions?
I have seen similar sporadic problems in the past where, even in small batches of 5000 records, locking would still happen. In our case, each delete/update was contained in its own Begin Tran...Commit loop. To correct the problem, the logic of
WaitFor DELAY '00:00:00:01'
was placed at the top of each loop iteration, and that corrected the problem.
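A hedged sketch of that pattern applied to a batch-delete loop (dbo.BigTable and dbo.Table2 are placeholder names standing in for the question's tables):
DECLARE @RowCount int;
WHILE 1 = 1
BEGIN
    BEGIN TRAN;
    DELETE TOP (5000) t1
    FROM dbo.BigTable AS t1
    INNER JOIN dbo.Table2 AS t2 ON t2.PrimaryKey = t1.PrimaryKey
    WHERE t1.YearProcessed <= 2007;
    -- capture the count before COMMIT, which resets @@ROWCOUNT
    SET @RowCount = @@ROWCOUNT;
    COMMIT TRAN;

    IF (@RowCount < 5000) BREAK;

    WAITFOR DELAY '00:00:00.010';  -- brief pause so competing sessions can acquire their locks
END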
First of all, it looks like your DELETE is performing a clustered index scan. I recommend doing the following:
create index [IX.IndexName] ON t1(YearProcessed, PrimaryKey)
Second, is there any need to join the t2 table?
And then use the following query to delete the rows, assuming that your PrimaryKey column is of type INT:
declare @ids TABLE(PrimaryKey INT)
WHILE 1=1
BEGIN
INSERT @ids
SELECT top 10000 DISTINCT t1.PrimaryKey
FROM table t1
INNER JOIN table2 t2 ON t2.PrimaryKey = t1.PrimaryKey
WHERE t1.YearProcessed <= 2007
IF @@ROWCOUNT = 0 BREAK
DELETE t1
WHERE PrimaryKey in (Select PrimaryKey from @ids)
delete from @ids
END
And do not forget to remove the t2 table from the join if it is not needed.
If it still causes locks, lower the number of rows deleted in each round.
I think you're on the right track.
Look at these two articles, too:
http://support.microsoft.com/kb/323630
http://www.bennadel.com/blog/477-SQL-Server-NOLOCK-ROWLOCK-Directives-To-Improve-Performance.htm
and:
http://www.dbforums.com/microsoft-sql-server/985516-deleting-without-locking.html
Before you run the delete, check the estimated query plan to see if it is doing an index seek for the delete, or still doing a full table scan/access.
In addition to the other suggestions (that aim at reducing the work done during deletion) you can also configure SQL Server to not block other readers while doing deletes on a table.
This can be done by using "snapshot isolation" which was introduced with SQL Server 2005:
http://msdn.microsoft.com/en-us/library/ms345124%28v=sql.90%29.aspx
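Enabling it is a database-level setting. A hedged sketch (MyDatabase is a placeholder): ALLOW_SNAPSHOT_ISOLATION lets sessions opt in to SNAPSHOT isolation, while READ_COMMITTED_SNAPSHOT makes the default READ COMMITTED level use row versioning so readers stop blocking on writers:
ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;
-- Switching READ_COMMITTED_SNAPSHOT needs the database to be free of other active connections
-- (or use WITH ROLLBACK IMMEDIATE to disconnect them).
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON;
Note that row versioning adds tempdb load, so test it before turning it on in production.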
If you have anything with cascading deletes make sure they are indexed.
Highlighting the DELETE query and clicking Display estimated execution plan will show suggested indexes - which in my case included some cascading deletes.
Adding indexes for those made the delete a lot faster - but I still wouldn't try to delete all rows at once.
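For example, if a child table cascades from the table being purged, the foreign-key column on the child side is what needs the index; without it, every cascaded delete scans the child table. A hedged sketch with hypothetical names:
CREATE NONCLUSTERED INDEX IX_ChildTable_ParentID
    ON dbo.ChildTable (ParentID);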
The best way that I have found is from ASP.NET's DeleteExpiredSessions: you do a READUNCOMMITTED select and put the records in a temp table, then delete the records using a CURSOR.
ALTER PROCEDURE [dbo].[DeleteExpiredSessions]
AS
SET NOCOUNT ON
SET DEADLOCK_PRIORITY LOW
DECLARE @now datetime
SET @now = GETUTCDATE()
CREATE TABLE #tblExpiredSessions
(
SessionID nvarchar(88) NOT NULL PRIMARY KEY
)
INSERT #tblExpiredSessions (SessionID)
SELECT SessionID
FROM [ASPState].dbo.ASPStateTempSessions WITH (READUNCOMMITTED)
WHERE Expires < @now
IF @@ROWCOUNT <> 0
BEGIN
DECLARE ExpiredSessionCursor CURSOR LOCAL FORWARD_ONLY READ_ONLY
FOR SELECT SessionID FROM #tblExpiredSessions
DECLARE @SessionID nvarchar(88)
OPEN ExpiredSessionCursor
FETCH NEXT FROM ExpiredSessionCursor INTO @SessionID
WHILE @@FETCH_STATUS = 0
BEGIN
DELETE FROM [ASPState].dbo.ASPStateTempSessions WHERE SessionID = @SessionID AND Expires < @now
FETCH NEXT FROM ExpiredSessionCursor INTO @SessionID
END
CLOSE ExpiredSessionCursor
DEALLOCATE ExpiredSessionCursor
END
DROP TABLE #tblExpiredSessions
RETURN 0
Try this,
DECLARE @RowCount int
WHILE 1=1
BEGIN
BEGIN TRANSACTION
DELETE TOP (10000) t1
FROM table t1
INNER JOIN table2 t2 ON t2.PrimaryKey = t1.PrimaryKey
WHERE t1.YearProcessed <= 2007
-- capture the count before COMMIT, which resets @@ROWCOUNT
SET @RowCount = @@ROWCOUNT
COMMIT TRANSACTION
IF (@RowCount < 10000) BREAK
END
First off, I want to start by saying I am not an SQL programmer (I'm a C++/Delphi guy), so some of my questions might be really obvious. So pardon my ignorance :o)
I've been charged with writing a script that will update certain tables in a database based on the contents of a CSV file. It seems to be working, but I am worried about atomicity for one of the steps:
One of the tables contains only one field - an int which must be incremented each time, but from what I can see is not defined as an identity for some reason. I must create a new row in this table, and insert that row's value into another newly-created row in another table.
This is how I did it (as part of a larger script):
DECLARE @uniqueID INT,
@counter INT,
@maxCount INT
SELECT @maxCount = COUNT(*) FROM tempTable
SET @counter = 1
WHILE (@counter <= @maxCount)
BEGIN
SELECT @uniqueID = MAX(id) FROM uniqueIDTable     -- Line 1
INSERT INTO uniqueIDTable VALUES (@uniqueID + 1)  -- Line 2
SELECT @uniqueID = @uniqueID + 1
UPDATE TOP(1) tempTable
SET userID = @uniqueID
WHERE userID IS NULL
SET @counter = @counter + 1
END
GO
First of all, am I correct using a "WHILE" construct? I couldn't find a way to achieve this with a simple UPDATE statement.
Second of all, how can I be sure that no other operation will be carried out on the database between Lines 1 and 2 that would insert a value into the uniqueIDTable before I do? Is there a way to "synchronize" operations in SQL Server Express?
Also, keep in mind that I have no control over the database design.
Thanks a lot!
You can do the whole 9 yards in one single statement:
WITH cteUsers AS (
SELECT t.*
, ROW_NUMBER() OVER (ORDER BY userID) as rn
, COALESCE(m.id,0) as max_id
FROM tempTable t WITH(UPDLOCK)
JOIN (
SELECT MAX(id) as id
FROM uniqueIDTable WITH (UPDLOCK)
) as m ON 1=1
WHERE userID IS NULL)
UPDATE cteUsers
SET userID = rn + max_id
OUTPUT INSERTED.userID
INTO uniqueIDTable (id);
You get the MAX(id), lock the uniqueIDTable, compute sequential userIDs for users with NULL userID by using ROW_NUMBER(), update the tempTable and insert the new ids into uniqueIDTable. All in one operation.
For performance you need an index on uniqueIDTable(id) and an index on tempTable(userID); see the sketch below.
SQL is all about set-oriented operations; WHILE loops are a code smell in SQL.
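A hedged sketch of those two supporting indexes (the index names are made up; the table and column names come from the question):
CREATE INDEX IX_uniqueIDTable_id ON uniqueIDTable (id);
CREATE INDEX IX_tempTable_userID ON tempTable (userID);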
You need a transaction to ensure atomicity and you need to move the select and insert into one statement or do the select with an updlock to prevent two people from running the select at the same time, getting the same value and then trying to insert the same value into the table.
Basically
DECLARE @MaxValTable TABLE (MaxID int)
BEGIN TRANSACTION
BEGIN TRY
INSERT INTO uniqueIDTable (id)
OUTPUT inserted.id INTO @MaxValTable
SELECT MAX(id) + 1 FROM uniqueIDTable
UPDATE TOP(1) tempTable
SET userID = (SELECT MaxID FROM @MaxValTable)
WHERE userID IS NULL
COMMIT TRANSACTION
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION
RAISERROR('Error occurred updating tempTable', 16, 1) -- more detail here is good
END CATCH
That said, using an identity would make things far simpler. This is a potential concurrency problem. Is there any way you can change the column to be identity?
Edit: This ensures that only one connection at a time will be able to insert into the uniqueIDTable. It's not going to scale well, though.
Edit: A table variable is better than an exclusive table lock. If need be, this can be used when inserting users as well.
I have a table where I created an INSTEAD OF trigger to enforce some business rules.
The issue is that when I insert data into this table, SCOPE_IDENTITY() returns a NULL value, rather than the actual inserted identity.
Insert + Scope code
INSERT INTO [dbo].[Payment]([DateFrom], [DateTo], [CustomerId], [AdminId])
VALUES ('2009-01-20', '2009-01-31', 6, 1)
SELECT SCOPE_IDENTITY()
Trigger:
CREATE TRIGGER [dbo].[TR_Payments_Insert]
ON [dbo].[Payment]
INSTEAD OF INSERT
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
IF NOT EXISTS(SELECT 1 FROM dbo.Payment p
INNER JOIN Inserted i ON p.CustomerId = i.CustomerId
WHERE (i.DateFrom >= p.DateFrom AND i.DateFrom <= p.DateTo) OR (i.DateTo >= p.DateFrom AND i.DateTo <= p.DateTo)
) AND NOT EXISTS (SELECT 1 FROM Inserted p
INNER JOIN Inserted i ON p.CustomerId = i.CustomerId
WHERE (i.DateFrom <> p.DateFrom AND i.DateTo <> p.DateTo) AND
((i.DateFrom >= p.DateFrom AND i.DateFrom <= p.DateTo) OR (i.DateTo >= p.DateFrom AND i.DateTo <= p.DateTo))
)
BEGIN
INSERT INTO dbo.Payment (DateFrom, DateTo, CustomerId, AdminId)
SELECT DateFrom, DateTo, CustomerId, AdminId
FROM Inserted
END
ELSE
BEGIN
ROLLBACK TRANSACTION
END
END
The code worked before the creation of this trigger. I am using LINQ to SQL in C#. I don't see a way of changing SCOPE_IDENTITY to @@IDENTITY. How do I make this work?
Use @@IDENTITY instead of SCOPE_IDENTITY().
While SCOPE_IDENTITY() returns the last created id in the current scope, @@IDENTITY returns the last created id in the current session.
The SCOPE_IDENTITY() function is normally recommended over the @@IDENTITY function, as you usually don't want triggers to interfere with the id, but in this case you do.
Since you're on SQL 2008, I would highly recommend using the OUTPUT clause instead of one of the custom identity functions. SCOPE_IDENTITY currently has some issues with parallel queries that cause me to recommend against it entirely. @@IDENTITY does not, but it's still not as explicit or as flexible as OUTPUT. Plus OUTPUT handles multi-row inserts. Have a look at the BOL article, which has some great examples.
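For reference, the general OUTPUT pattern looks like this. A hedged sketch using the question's table (PaymentId is an assumed name for the identity column; this shows the plain pattern rather than its interaction with the INSTEAD OF trigger above):
DECLARE @NewIds TABLE (PaymentId int);

INSERT INTO dbo.Payment (DateFrom, DateTo, CustomerId, AdminId)
OUTPUT inserted.PaymentId INTO @NewIds
VALUES ('2009-01-20', '2009-01-31', 6, 1);

SELECT PaymentId FROM @NewIds;
Unlike SCOPE_IDENTITY(), this returns one row per inserted row, which is why it scales to multi-row inserts.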
I was having serious reservations about using @@IDENTITY, because it can return the wrong answer.
But there is a workaround to force @@IDENTITY to have the SCOPE_IDENTITY() value.
Just for completeness, first I'll list a couple of other workarounds for this problem I've seen on the web:
Make the trigger return a rowset. Then, in a wrapper SP that performs the insert, do INSERT Table1 EXEC sp_ExecuteSQL ... to yet another table. Then scope_identity() will work. This is messy because it requires dynamic SQL which is a pain. Also, be aware that dynamic SQL runs under the permissions of the user calling the SP rather than the permissions of the owner of the SP. If the original client could insert to the table, he should still have that permission, just know that you could run into problems if you deny permission to insert directly to the table.
If there is another candidate key, get the identity of the inserted row(s) using those keys. For example, if Name has a unique index on it, then you can insert, then select the (max for multiple rows) ID from the table you just inserted to using Name. While this may have concurrency problems if another session deletes the row you just inserted, it's no worse than in the original situation if someone deleted your row before the application could use it.
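For the second workaround, the lookup after the insert might look like this (a hedged sketch; MyTable, Name and ID are hypothetical, and Name is assumed to have a unique index):
DECLARE @Name nvarchar(100) = N'Example';

INSERT INTO dbo.MyTable (Name) VALUES (@Name);

-- Name is unique, so this finds the row just inserted
-- (MAX covers the multi-row case mentioned above)
SELECT MAX(ID) AS NewID
FROM dbo.MyTable
WHERE Name = @Name;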
Now, here's how to definitively make your trigger safe for @@IDENTITY to return the correct value, even if your SP or another trigger inserts to an identity-bearing table after the main insert.
Also, please put comments in your code about what you are doing and why so that future visitors to the trigger don't break things or waste time trying to figure it out.
CREATE TRIGGER TR_MyTable_I ON MyTable INSTEAD OF INSERT
AS
SET NOCOUNT ON
DECLARE @MyTableID int
INSERT MyTable (Name, SystemUser)
SELECT I.Name, System_User
FROM Inserted AS I
SET @MyTableID = Scope_Identity()
INSERT AuditTable (SystemUser, Notes)
SELECT SystemUser, 'Added Name ' + I.Name
FROM Inserted AS I
-- The following statement MUST be last in this trigger. It resets @@IDENTITY
-- to be the same as the earlier Scope_Identity() value.
SELECT MyTableID INTO #Trash FROM MyTable WHERE MyTableID = @MyTableID
Normally, the extra insert to the audit table would break everything: since it has an identity column, @@IDENTITY would return that value instead of the one from the insert into MyTable. However, the final SELECT ... INTO creates a new @@IDENTITY value that is the correct one, based on the SCOPE_IDENTITY() we saved earlier. This also proofs it against any possible additional AFTER trigger on the MyTable table.
Update:
I just noticed that an INSTEAD OF trigger isn't necessary here. This does everything you were looking for:
CREATE TRIGGER dbo.TR_Payments_Insert ON dbo.Payment FOR INSERT
AS
SET NOCOUNT ON;
IF EXISTS (
SELECT *
FROM
Inserted I
INNER JOIN dbo.Payment P ON I.CustomerID = P.CustomerID
WHERE
I.DateFrom < P.DateTo
AND P.DateFrom < I.DateTo
) ROLLBACK TRAN;
This of course allows scope_identity() to keep working. The only drawback is that a rolled-back insert on an identity table does consume the identity values used (the identity value is still incremented by the number of rows in the insert attempt).
I've been staring at this for a few minutes and don't have absolute certainty right now, but I think this preserves the meaning of an inclusive start time and an exclusive end time. If the end time was inclusive (which would be odd to me) then the comparisons would need to use <= instead of <.
Main problem: the trigger and Entity Framework work in different scopes.
The problem is that if you generate the new PK value in the trigger, it happens in a different scope. Thus the command returns zero rows and EF throws an exception.
The solution is to add the following SELECT statement at the end of your Trigger:
SELECT * FROM deleted UNION ALL
SELECT * FROM inserted;
In place of * you can list all the column names, including
SELECT IDENT_CURRENT('tablename') AS <IdentityColumnName>
As araqnid commented, the trigger seems to roll back the transaction when a condition is met. You can do that more easily with an AFTER INSERT trigger:
CREATE TRIGGER [dbo].[TR_Payments_Insert]
ON [dbo].[Payment]
AFTER INSERT
AS
BEGIN
SET NOCOUNT ON;
IF <Condition>
BEGIN
ROLLBACK TRANSACTION
END
END
Then you can use SCOPE_IDENTITY() again, because the INSERT is no longer done in the trigger.
The condition itself seems to let two identical rows past, if they're in the same insert. With the AFTER INSERT trigger, you can rewrite the condition like:
IF EXISTS(
SELECT *
FROM dbo.Payment a
LEFT JOIN dbo.Payment b
ON a.Id <> b.Id
AND a.CustomerId = b.CustomerId
AND (a.DateFrom BETWEEN b.DateFrom AND b.DateTo
OR a.DateTo BETWEEN b.DateFrom AND b.DateTo)
WHERE b.Id is NOT NULL)
And it will catch duplicate rows, because now it can differentiate them based on Id. It also works if you delete a row and replace it with another row in the same statement.
Anyway, if you want my advice, move away from triggers altogether. As you can see even for this example they are very complex. Do the insert through a stored procedure. They are simpler and faster than triggers:
create procedure dbo.InsertPayment
@DateFrom datetime, @DateTo datetime, @CustomerId int, @AdminId int
as
BEGIN TRANSACTION
IF NOT EXISTS (
SELECT *
FROM dbo.Payment
WHERE CustomerId = @CustomerId
AND (@DateFrom BETWEEN DateFrom AND DateTo
OR @DateTo BETWEEN DateFrom AND DateTo))
BEGIN
INSERT into dbo.Payment
(DateFrom, DateTo, CustomerId, AdminId)
VALUES (@DateFrom, @DateTo, @CustomerId, @AdminId)
END
COMMIT TRANSACTION
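If the caller needs the new id back, a hedged variant (not part of the answer above; the duplicate check is omitted for brevity, and it assumes the trigger has been dropped as this answer suggests) is to hand it out through an OUTPUT parameter set from SCOPE_IDENTITY() inside the procedure, which works because the insert and the call to SCOPE_IDENTITY() share the procedure's scope:
create procedure dbo.InsertPaymentReturningId
@DateFrom datetime, @DateTo datetime, @CustomerId int, @AdminId int,
@NewPaymentId int OUTPUT
as
BEGIN
-- plain insert; SCOPE_IDENTITY() is reliable here because no INSTEAD OF trigger is involved
INSERT into dbo.Payment (DateFrom, DateTo, CustomerId, AdminId)
VALUES (@DateFrom, @DateTo, @CustomerId, @AdminId)

SET @NewPaymentId = SCOPE_IDENTITY()
END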
A little late to the party, but I was looking into this issue myself. A workaround is to create a temp table in the calling procedure where the insert is being performed, insert the scope identity into that temp table from inside the INSTEAD OF trigger, and then read the identity value out of the temp table once the insertion is complete.
In procedure:
CREATE table #temp ( id int )
... insert statement ...
select id from #temp
-- (you can add sorting and top 1 selection for extra safety)
drop table #temp
In instead of trigger:
-- this check covers you for any inserts that don't want an identity value returned (and therefore don't provide a temp table)
IF OBJECT_ID('tempdb..#temp') is not null
begin
insert into #temp(id)
values
(SCOPE_IDENTITY())
end
You probably want to call it something other than #temp for safety's sake (something long and random enough that no one else would be using it: #temp1234235234563785635).