Truncate table doesn't release LCK_M_SCH_S - sql-server

I have a stored procedure which truncates a not so large table (2M records but it will get bigger in the future) and then refills it. The sample version is like below:
ALTER PROCEDURE [SC].[A_SP]
AS
BEGIN
BEGIN TRANSACTION;
BEGIN TRY
TRUNCATE TABLE SC.A_TABLE
IF OBJECT_ID('tempdb..#Trans') IS NOT NULL DROP TABLE #Trans
SELECT
*
INTO
#Trans
FROM
(
SELECT
...
FROM
B_TABLE trans (NOLOCK)
INNER JOIN
... (NOLOCK) ON ...
LEFT OUTER JOIN
... (NOLOCK) ON ...
...
) AS x
INSERT INTO
SC.A_TABLE
(
...
)
SELECT
...
FROM
#Trans (NOLOCK)
DROP TABLE #Trans
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0
ROLLBACK TRANSACTION;
THROW
END CATCH
IF @@TRANCOUNT > 0
COMMIT TRANSACTION;
END
This procedure takes a few hours to run. Sometimes I want to take a COUNT to see how much has finished, using:
SELECT COUNT(*) FROM A_TABLE (NOLOCK)
This doesn't return anything (even with NOLOCK) because there is a LCK_M_SCH_S lock on the table caused by the TRUNCATE statement. I can't even do:
SELECT object_id('SC.A_TABLE')
The other interesting thing is that I sometimes stop the execution of the procedure through SSMS, and even after that I can't take a COUNT or select its object_id. The execution seems suspended in sys.sysprocesses and I have to close the query window to make it release the lock. I suspect this is because I use a transaction and leave it in a mid state by stopping the execution, but I'm not sure.
I know that truncating the table doesn't take so much time since the table doesn't have any foreign keys or indexes.
What can be the problem? I may use DELETE instead of this but I know that TRUNCATE will be much faster here.
EDIT: DELETE instead of TRUNCATE works without any problem btw but I only want to use it as a last resort.

If Truncate ain't your bag and you have way too many rows for a Delete to execute without bringing the TLog to a crashing standstill, there's always option UbW (for Ugly, but Workable): create a clone of the table, load the rows into that, then (and inside a transaction) switch everything around.
Option UbW2 builds on that concept - have two tables always built, one empty, one full. Load into the empty table and then modify a view or synonym to point to that table.
Option LUbW (less ugly...) involves using partitions: Load your data into a switch table then move that as a partition using some flag as your partition function.
All of these require more work and code. We have similar situations and use option UbW2 for our data warehouse; it allows us to load millions of rows into 'active' tables every hour with no downtime and without risking consumers seeing inconsistent data.
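A minimal sketch of option UbW2, assuming two identically structured staging tables SC.A_TABLE_1 and SC.A_TABLE_2 and a synonym SC.A_TABLE that all consumers query (every name here is hypothetical):
-- Find out which table the synonym currently points at, so we load the other one.
DECLARE @active nvarchar(400) =
    (SELECT base_object_name FROM sys.synonyms
     WHERE schema_id = SCHEMA_ID('SC') AND name = 'A_TABLE');
DECLARE @inactive sysname =
    CASE WHEN @active LIKE '%A_TABLE_1%' THEN N'A_TABLE_2' ELSE N'A_TABLE_1' END;
-- Load the inactive copy outside any long-running transaction.
DECLARE @sql nvarchar(max) = N'TRUNCATE TABLE SC.' + @inactive + N';
    INSERT INTO SC.' + @inactive + N' (...) SELECT ... FROM ...;';
EXEC sys.sp_executesql @sql;
-- The only thing consumers ever wait on is this near-instant metadata swap.
BEGIN TRANSACTION;
EXEC (N'DROP SYNONYM SC.A_TABLE');
EXEC (N'CREATE SYNONYM SC.A_TABLE FOR SC.' + @inactive);
COMMIT TRANSACTION;
Because the swap is pure metadata, readers see either the old full table or the new one, never a half-loaded table.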

The best you're going to get is to shift some of the heavy lifting out of the transaction. That said, SQL Server is working 100% as designed.
ALTER PROCEDURE [SC].[A_SP]
AS
BEGIN
IF OBJECT_ID('tempdb..#Trans') IS NOT NULL DROP TABLE #Trans
SELECT
*
INTO
#Trans
FROM
(
SELECT
...
FROM
B_TABLE trans (NOLOCK)
INNER JOIN
... (NOLOCK) ON ...
LEFT OUTER JOIN
... (NOLOCK) ON ...
...
) AS x
BEGIN TRANSACTION;
BEGIN TRY
TRUNCATE TABLE SC.A_TABLE
INSERT INTO
SC.A_TABLE
(
...
)
SELECT
...
FROM
#Trans (NOLOCK)
DROP TABLE #Trans
IF @@TRANCOUNT > 0 AND XACT_STATE() = 1
COMMIT TRANSACTION;
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0
ROLLBACK TRANSACTION;
THROW
END CATCH
END

Related

Using an IF condition in an INSERT, SQL Server

I have the following statement in my code
INSERT INTO #TProductSales (ProductID, StockQTY, ETA1)
VALUES (@ProductID, @StockQTY, @ETA1)
I want to do something like:
IF @ProductID exists THEN
UPDATE #TProductSales
ELSE
INSERT INTO #TProductSales
Is there a way I can do this?
The pattern is (without error handling):
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
UPDATE #TProductSales SET StockQty = @StockQty, ETA1 = @ETA1
WHERE ProductID = @ProductID;
IF ##ROWCOUNT = 0
BEGIN
INSERT #TProductSales(ProductID, StockQTY, ETA1)
VALUES(@ProductID, @StockQTY, @ETA1);
END
COMMIT TRANSACTION;
You don't need to perform an additional read of the #temp table here. You're already doing that by trying the update. To protect from race conditions, you do the same as you'd protect any block of two or more statements that you want to isolate: you'd wrap it in a transaction with an appropriate isolation level (likely serializable here, though that all only makes sense when we're not talking about a #temp table, since that is by definition serialized).
You're not any further ahead by adding an IF EXISTS check (and you would need to add locking hints to make that safe / serializable anyway), but you could be further behind, depending on how many times you update existing rows vs. insert new. That could add up to a lot of extra I/O.
People will probably tell you to use MERGE (which is actually multiple operations behind the scenes, and also needs to be protected with serializable); I urge you not to. I and others lay out why here:
Use Caution with SQL Server's MERGE Statement
So, you want to use MERGE, eh?
For a multi-row pattern (like a TVP), I would handle this quite the same way, but there isn't a practical way to avoid the second read like you can with the single-row case. And no, MERGE doesn't avoid it either.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
UPDATE t SET t.col = tvp.col
FROM dbo.TargetTable AS t
INNER JOIN @tvp AS tvp
ON t.ProductID = tvp.ProductID;
INSERT dbo.TargetTable(ProductID, othercols)
SELECT ProductID, othercols
FROM @tvp AS tvp
WHERE NOT EXISTS
(
SELECT 1 FROM dbo.TargetTable
WHERE ProductID = tvp.ProductID
);
COMMIT TRANSACTION;
Well, I guess there is a way to do it, but I haven't tested this thoroughly:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
DECLARE @exist TABLE(ProductID int PRIMARY KEY);
UPDATE t SET t.col = tvp.col
OUTPUT deleted.ProductID INTO @exist
FROM dbo.TargetTable AS t
INNER JOIN @tvp AS tvp
ON t.ProductID = tvp.ProductID;
INSERT dbo.TargetTable(ProductID, othercols)
SELECT ProductID, othercols
FROM @tvp AS t
WHERE NOT EXISTS
(
SELECT 1 FROM @exist
WHERE ProductID = t.ProductID
);
COMMIT TRANSACTION;
In either case, you perform the update first, otherwise you'll update all the rows you just inserted, which would be wasteful.
I personally like to make a table variable or temp table to store the values and then do my update/insert, but I'm normally doing mass inserts/updates. The nice thing about this pattern is that it works for multiple records without redundancy in the inserts/updates.
DECLARE @Tbl TABLE (
StockQty INT,
ETA1 DATETIME,
ProductID INT
)
INSERT INTO @Tbl (StockQty, ETA1, ProductID)
SELECT @StockQty AS StockQty, @ETA1 AS ETA1, @ProductID AS ProductID
UPDATE tps
SET StockQty = tmp.StockQty
, ETA1 = tmp.ETA1
FROM #TProductSales tps
INNER JOIN @Tbl tmp ON tmp.ProductID=tps.ProductID
INSERT INTO #TProductSales(StockQty,ETA1,ProductID)
SELECT
tmp.StockQty,tmp.ETA1,tmp.ProductID
FROM @Tbl tmp
LEFT JOIN #TProductSales tps ON tps.ProductID=tmp.ProductID
WHERE tps.ProductID IS NULL
You could use something like:
IF EXISTS( SELECT NULL FROM #TProductSales WHERE ProductID = @ProductID)
UPDATE #TProductSales SET StockQTY = @StockQTY, ETA1 = @ETA1 WHERE ProductID = @ProductID
ELSE
INSERT INTO #TProductSales(ProductID,StockQTY,ETA1) VALUES(@ProductID,@StockQTY,@ETA1)

Continue with INSERT after error, TSQL

I'm writing some exception/error handling in T-SQL.
The goal is to continue with an insert operation even if one of the inserts causes an error.
The current code looks something like this:
INSERT INTO targettable
SELECT
*, GETDATE()
FROM
table_log l
LEFT JOIN
someothertable m ON l.id = m.id
WHERE
l.actiontype = 'insert'
The problem: sometimes there is a faulty row/entry in the table table_log. This causes the entire insert operation to roll back. I want the server to continue with the insert operations after the error has occurred.
My ideas: I could use a cursor to handle each insert individually, but as far as I know, that would be horrible in terms of performance. I could also utilize IGNORE_DUP_KEY or XACT_ABORT OFF, but I strongly doubt that our DBA will allow that, and I don't think it would be a good solution either.
Here is another idea:
DECLARE @rowcount int
SET @rowcount = (SELECT COUNT(*) FROM table_log l WHERE actiontype = 'insert')
BEGIN TRY
WHILE @rowcount > 0
BEGIN
looppoint:
INSERT INTO target_table
SELECT somecolumn, GETDATE()
FROM table_log l
LEFT JOIN some_other_table m
ON l.id = m.id
WHERE l.actiontype = 'insert' AND
l.id NOT IN
(SELECT id
FROM noinsert_table)
SET @rowcount = @rowcount - 1
END
END TRY
BEGIN CATCH
execute sp_some_error_catching_procedure
INSERT INTO noinsert_table
SELECT SCOPE_IDENTITY()
SET @rowcount = @rowcount - 1
GOTO looppoint
END CATCH
Basically, I want to catch the error-causing row and use this information to exclude the row from the insert in the next loop. But I'm not sure this will work; I don't think SCOPE_IDENTITY will give me the id of a failed row, just successful ones. Plus, this seems overly complex and prone to other problems.
If anyone has some tips, I'd gladly hear about it.
Why don't you clean the data before you insert it or as part of the insert select? That way you are only trying to insert data that you know will fit the parameters of the table you are inserting into. Then you can still use your set-based insert.
By the way, please never write code like that. You should always, always specify the columns in an insert in both the insert part of the statement and the select and never use select *. If someone rearranges the columns in a table, your insert will break or worse, not break but put the data into the wrong columns. If someone adds a column you don't need in the insert to the selected table, your insert will break. This sort of thing is a SQL antipattern.
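For example, a minimal sketch of cleaning the data as part of the INSERT ... SELECT, with the columns listed explicitly (the column names and the validity checks are hypothetical; substitute whatever rule marks a row as faulty in your case, and note TRY_CONVERT needs SQL Server 2012 or later):
INSERT INTO targettable (id, somecolumn, loaddate)
SELECT l.id, m.somecolumn, GETDATE()
FROM table_log l
LEFT JOIN someothertable m ON l.id = m.id
WHERE l.actiontype = 'insert'
AND l.id IS NOT NULL                          -- example check: reject rows missing a key
AND TRY_CONVERT(int, l.somevalue) IS NOT NULL -- example check: reject non-numeric values
Rows that fail the checks are simply skipped, so the statement stays set-based and never needs to roll back.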

Select and Delete in the same transaction using TOP clause

I have a table to which data is being continuously added at a rapid pace.
I need to fetch records from this table and immediately remove them so I cannot process the same record a second time. And since the data is being added at a fast rate, I need to use the TOP clause so only a small number of records go to the business logic for processing at a time.
I am using the below query to do this:
BEGIN TRAN readrowdata
SELECT
top 5 [RawDataId],
[RawData]
FROM
[TABLE] with(HOLDLOCK);
WITH q AS
(
SELECT
top 5 [RawDataId],
[RawData]
FROM
[TABLE] with(HOLDLOCK)
)
DELETE from q
COMMIT TRANSACTION readrowdata
I am using HOLDLOCK here so new data cannot be inserted into the table while I am performing the SELECT and DELETE operations. I used it because, suppose there are only 3 records in the table now: the SELECT statement will get 3 records, and if in the meantime a new record gets inserted, the DELETE statement will delete 4 records, so I would lose 1 row here.
Is the query OK in performance terms? If I can improve it, please provide your suggestions.
Thank you
Personally, I'd use a different approach. One with less locking, but also extra information signifying that certain records are currently being processed...
DECLARE @rowsBeingProcessed TABLE (
id INT
);
WITH rows AS (
SELECT top 5 [RawDataId], processing_start FROM yourTable WHERE processing_start IS NULL
)
UPDATE rows SET processing_start = getDate()
OUTPUT INSERTED.RawDataId INTO @rowsBeingProcessed
WHERE processing_start IS NULL;
-- Business Logic Here
DELETE yourTable WHERE RawDataId IN (SELECT id FROM @rowsBeingProcessed);
Then you can also add checks like "if a record has been 'beingProcessed' for more than 10 minutes, assume that the business logic failed", etc, etc.
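A hedged sketch of that timeout check (the 10-minute threshold is arbitrary, and processing_start is the same hypothetical column assumed above):
-- Rows claimed more than 10 minutes ago are assumed to have failed;
-- clearing the flag lets another worker pick them up again.
UPDATE yourTable
SET processing_start = NULL
WHERE processing_start IS NOT NULL
AND processing_start < DATEADD(MINUTE, -10, GETDATE());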
By locking the table in this way, you force other processes to wait for your transaction to complete. This can have serious consequences for scalability and performance - and it tends to be hard to predict, because there's often a chain of components all relying on your database.
If you have multiple clients each running this query, and multiple clients adding new rows to the table, overall system performance is likely to deteriorate at times: each "read" client waits for a lock, the number of "write" clients waiting to insert data grows, and they in turn may tie up other components (whatever is generating the data you want to insert).
Diego's answer is on the money - put the data into a variable, and delete matching rows. Don't use locks in SQL Server if you can possibly avoid it!
You can do it very easily with TRIGGERS. The approach below means you don't have to hold up other users who are trying to insert data simultaneously. Like below...
Data Definition language
CREATE TABLE SampleTable
(
id int
)
Sample Record
insert into SampleTable(id)Values(1)
Sample Trigger
CREATE TRIGGER SampleTableTrigger
on SampleTable AFTER INSERT
AS
IF Exists(SELECT id FROM INSERTED)
BEGIN
Set NOCOUNT ON
SET XACT_ABORT ON
Begin Try
Begin Tran
Select ID From Inserted
DELETE From yourTable WHERE ID IN (SELECT id FROM Inserted);
Commit Tran
End Try
Begin Catch
Rollback Tran
End Catch
End
Hope this is very simple and helpful
If I understand you correctly, you are worried that between your select and your delete, more records would be inserted and the first TOP 5 would be different than the second TOP 5?
If that is so, why don't you load your first select into a temp table or table variable (or at least the PKs), do whatever you have to do with your data, and then do your delete based on this table?
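A minimal sketch of that idea, reusing the table and column names from the question (the #batch temp table is hypothetical):
BEGIN TRAN readrowdata;
-- Capture the keys first, so the later DELETE touches exactly these rows.
SELECT TOP 5 [RawDataId], [RawData]
INTO #batch
FROM [TABLE];
-- ... process the rows in #batch ...
DELETE t
FROM [TABLE] t
INNER JOIN #batch b ON b.RawDataId = t.RawDataId;
DROP TABLE #batch;
COMMIT TRANSACTION readrowdata;
Because the DELETE is joined to the captured keys, rows inserted after the SELECT are left untouched, so the HOLDLOCK hint is no longer needed.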
I know that it's an old question, but I found a solution here https://www.simple-talk.com/sql/learn-sql-server/the-delete-statement-in-sql-server/:
DECLARE @Output table
(
StaffID INT,
FirstName NVARCHAR(50),
LastName NVARCHAR(50),
CountryRegion NVARCHAR(50)
);
DELETE ss
OUTPUT DELETED.* INTO @Output
FROM Sales.vSalesPerson sp
INNER JOIN dbo.SalesStaff ss
ON sp.BusinessEntityID = ss.StaffID
WHERE sp.SalesLastYear = 0;
SELECT * FROM @Output;
Maybe it will be helpful for you.

Deletion of rows in table cause LOCKS

I am running the following command to delete rows in batches out of a large table (150 million rows):
DECLARE @RowCount int
WHILE 1=1
BEGIN
DELETE TOP (10000) t1
FROM table t1
INNER JOIN table2 t2 ON t2.PrimaryKey = t1.PrimaryKey
WHERE t1.YearProcessed <= 2007
SET @RowCount = @@ROWCOUNT
IF (@RowCount < 10000) BREAK
END
This table is HIGHLY used. However, it is deleting records, but it is also causing locking on some records, thus throwing errors to the user (which is not acceptable in the environment we're in).
How can I delete older records without causing locks? Should I reduce the size of the batch from 10000 records to 1000? How will this affect log sizes (we have very little hard drive space left for large log growth)?
Any suggestions?
I have seen similar sporadic problems in the past where, even in small batches of 5000 records, locking would still happen. In our case, each delete/update was contained in its own Begin Tran...Commit loop. To correct the problem, the logic of
WaitFor DELAY '00:00:00:01'
was placed at the top of each pass through the loop, and that corrected the problem.
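Applied to the loop in the question (a sketch only, reusing the question's placeholder table names), that looks roughly like this, with each batch in its own transaction and a short pause before the next one so waiting readers can get in:
DECLARE @RowCount int
WHILE 1 = 1
BEGIN
BEGIN TRANSACTION
DELETE TOP (10000) t1
FROM table t1
INNER JOIN table2 t2 ON t2.PrimaryKey = t1.PrimaryKey
WHERE t1.YearProcessed <= 2007
SET @RowCount = @@ROWCOUNT
COMMIT TRANSACTION
IF (@RowCount < 10000) BREAK
-- Brief pause so readers queued behind the delete get a chance to run.
WAITFOR DELAY '00:00:00:01'
END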
First of all, it looks like your DELETE is performing a Clustered Index Scan; I recommend doing the following:
create index [IX.IndexName] ON t1(YearProcessed, PrimaryKey)
Second, is there any need to join the t2 table?
And then use following query to delete the rows, assuming that your PrimaryKey column is of type INT:
declare @ids TABLE(PrimaryKey INT)
WHILE 1=1
BEGIN
INSERT @ids
SELECT top 10000 DISTINCT t1.PrimaryKey
FROM table t1
INNER JOIN table2 t2 ON t2.PrimaryKey = t1.PrimaryKey
WHERE t1.YearProcessed <= 2007
IF @@ROWCOUNT = 0 BREAK
DELETE t1
WHERE PrimaryKey in (Select PrimaryKey from @ids)
delete from @ids
END
And do not forget to remove the t2 table from the join if it is not needed.
If it still causes locks, then lower the number of rows deleted in each round.
I think you're on the right track.
Look at these two articles, too:
http://support.microsoft.com/kb/323630
http://www.bennadel.com/blog/477-SQL-Server-NOLOCK-ROWLOCK-Directives-To-Improve-Performance.htm
and:
http://www.dbforums.com/microsoft-sql-server/985516-deleting-without-locking.html
Before you run the delete, check the estimated query plan to see if it is doing an index seek for the delete, or still doing a full table scan/access.
In addition to the other suggestions (that aim at reducing the work done during deletion) you can also configure SQL Server to not block other readers while doing deletes on a table.
This can be done by using "snapshot isolation" which was introduced with SQL Server 2005:
http://msdn.microsoft.com/en-us/library/ms345124%28v=sql.90%29.aspx
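As a sketch (the database and table names are hypothetical), snapshot isolation is enabled at the database level and then requested by the readers, which read row versions instead of waiting on the deleter's locks:
-- One-time database setting; no schema changes required.
ALTER DATABASE YourDb SET ALLOW_SNAPSHOT_ISOLATION ON;
-- Readers opt in per session and no longer block behind the DELETE batches.
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
SELECT COUNT(*) FROM dbo.BigTable WHERE YearProcessed <= 2007;
-- Alternatively, READ_COMMITTED_SNAPSHOT makes ordinary READ COMMITTED readers use versioning:
-- ALTER DATABASE YourDb SET READ_COMMITTED_SNAPSHOT ON;
Note that row versioning adds tempdb load, so it is a trade-off rather than a free win.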
If you have anything with cascading deletes make sure they are indexed.
Highlighting the DELETE query and clicking Display estimated execution plan will show suggested indexes - which in my case included some cascading deletes.
Adding indexes for those made the delete a lot faster - but I still wouldn't try to delete all rows at once.
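For example, if a child table references the table being purged and deletes cascade to it, a supporting index on the foreign key column looks like this (all names hypothetical):
-- Without an index on the FK column, every cascaded delete scans the child table.
CREATE NONCLUSTERED INDEX IX_ChildTable_ParentKey
ON dbo.ChildTable (ParentKey);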
The best way that I have found is from ASP.NET's DeleteExpiredSessions: you do a READUNCOMMITTED select and put the records in a temp table, then delete the records using a CURSOR.
ALTER PROCEDURE [dbo].[DeleteExpiredSessions]
AS
SET NOCOUNT ON
SET DEADLOCK_PRIORITY LOW
DECLARE @now datetime
SET @now = GETUTCDATE()
CREATE TABLE #tblExpiredSessions
(
SessionID nvarchar(88) NOT NULL PRIMARY KEY
)
INSERT #tblExpiredSessions (SessionID)
SELECT SessionID
FROM [ASPState].dbo.ASPStateTempSessions WITH (READUNCOMMITTED)
WHERE Expires < @now
IF @@ROWCOUNT <> 0
BEGIN
DECLARE ExpiredSessionCursor CURSOR LOCAL FORWARD_ONLY READ_ONLY
FOR SELECT SessionID FROM #tblExpiredSessions
DECLARE @SessionID nvarchar(88)
OPEN ExpiredSessionCursor
FETCH NEXT FROM ExpiredSessionCursor INTO @SessionID
WHILE @@FETCH_STATUS = 0
BEGIN
DELETE FROM [ASPState].dbo.ASPStateTempSessions WHERE SessionID = @SessionID AND Expires < @now
FETCH NEXT FROM ExpiredSessionCursor INTO @SessionID
END
CLOSE ExpiredSessionCursor
DEALLOCATE ExpiredSessionCursor
END
DROP TABLE #tblExpiredSessions
RETURN 0
Try this,
DECLARE @RowCount int
WHILE 1=1
BEGIN
BEGIN TRANSACTION
DELETE TOP (10000) t1
FROM table t1
INNER JOIN table2 t2 ON t2.PrimaryKey = t1.PrimaryKey
WHERE t1.YearProcessed <= 2007
SET @RowCount = @@ROWCOUNT
COMMIT TRANSACTION
IF (@RowCount < 10000) BREAK
END

INSTEAD OF trigger in SQL Server loses SCOPE_IDENTITY?

I have a table where I created an INSTEAD OF trigger to enforce some business rules.
The issue is that when I insert data into this table, SCOPE_IDENTITY() returns a NULL value, rather than the actual inserted identity.
Insert + Scope code
INSERT INTO [dbo].[Payment]([DateFrom], [DateTo], [CustomerId], [AdminId])
VALUES ('2009-01-20', '2009-01-31', 6, 1)
SELECT SCOPE_IDENTITY()
Trigger:
CREATE TRIGGER [dbo].[TR_Payments_Insert]
ON [dbo].[Payment]
INSTEAD OF INSERT
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
IF NOT EXISTS(SELECT 1 FROM dbo.Payment p
INNER JOIN Inserted i ON p.CustomerId = i.CustomerId
WHERE (i.DateFrom >= p.DateFrom AND i.DateFrom <= p.DateTo) OR (i.DateTo >= p.DateFrom AND i.DateTo <= p.DateTo)
) AND NOT EXISTS (SELECT 1 FROM Inserted p
INNER JOIN Inserted i ON p.CustomerId = i.CustomerId
WHERE (i.DateFrom <> p.DateFrom AND i.DateTo <> p.DateTo) AND
((i.DateFrom >= p.DateFrom AND i.DateFrom <= p.DateTo) OR (i.DateTo >= p.DateFrom AND i.DateTo <= p.DateTo))
)
BEGIN
INSERT INTO dbo.Payment (DateFrom, DateTo, CustomerId, AdminId)
SELECT DateFrom, DateTo, CustomerId, AdminId
FROM Inserted
END
ELSE
BEGIN
ROLLBACK TRANSACTION
END
END
The code worked before the creation of this trigger. I am using LINQ to SQL in C#. I don't see a way of changing SCOPE_IDENTITY to @@IDENTITY. How do I make this work?
Use @@IDENTITY instead of scope_identity().
While scope_identity() returns the last created id in the current scope, @@IDENTITY returns the last created id in the current session.
The scope_identity() function is normally recommended over @@IDENTITY, as you usually don't want triggers to interfere with the id, but in this case you do.
Since you're on SQL 2008, I would highly recommend using the OUTPUT clause instead of one of the custom identity functions. SCOPE_IDENTITY currently has some issues with parallel queries that cause me to recommend against it entirely. @@IDENTITY does not, but it's still not as explicit, and as flexible, as OUTPUT. Plus OUTPUT handles multi-row inserts. Have a look at the BOL article which has some great examples.
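A minimal sketch of the OUTPUT pattern on a plain identity table (dbo.SomeTable and its columns are hypothetical; note that when the target table has triggers, OUTPUT must write INTO a table or table variable rather than returning rows directly):
DECLARE @NewIds TABLE (Id int);
INSERT dbo.SomeTable (Name)
OUTPUT inserted.Id INTO @NewIds (Id) -- captures the identity of every inserted row
VALUES ('First'), ('Second');
SELECT Id FROM @NewIds;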
I was having serious reservations about using @@IDENTITY, because it can return the wrong answer.
But there is a workaround to force @@IDENTITY to have the scope_identity() value.
Just for completeness, first I'll list a couple of other workarounds for this problem I've seen on the web:
Make the trigger return a rowset. Then, in a wrapper SP that performs the insert, do INSERT Table1 EXEC sp_ExecuteSQL ... to yet another table. Then scope_identity() will work. This is messy because it requires dynamic SQL which is a pain. Also, be aware that dynamic SQL runs under the permissions of the user calling the SP rather than the permissions of the owner of the SP. If the original client could insert to the table, he should still have that permission, just know that you could run into problems if you deny permission to insert directly to the table.
If there is another candidate key, get the identity of the inserted row(s) using those keys. For example, if Name has a unique index on it, then you can insert, then select the (max for multiple rows) ID from the table you just inserted to using Name. While this may have concurrency problems if another session deletes the row you just inserted, it's no worse than in the original situation if someone deleted your row before the application could use it.
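For the single-row case, that lookup might be as simple as this sketch (the dbo.MyTable table, its Name column, and @Name are hypothetical, with Name assumed to be unique):
DECLARE @Name nvarchar(100) = N'Example';
INSERT dbo.MyTable (Name) VALUES (@Name);
-- Recover the new id through the unique candidate key instead of an identity function.
SELECT MAX(Id) AS InsertedId
FROM dbo.MyTable
WHERE Name = @Name;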
Now, here's how to definitively make your trigger safe for @@IDENTITY to return the correct value, even if your SP or another trigger inserts to an identity-bearing table after the main insert.
Also, please put comments in your code about what you are doing and why so that future visitors to the trigger don't break things or waste time trying to figure it out.
CREATE TRIGGER TR_MyTable_I ON MyTable INSTEAD OF INSERT
AS
SET NOCOUNT ON
DECLARE @MyTableID int
INSERT MyTable (Name, SystemUser)
SELECT I.Name, System_User
FROM Inserted I
SET @MyTableID = Scope_Identity()
INSERT AuditTable (SystemUser, Notes)
SELECT SystemUser, 'Added Name ' + I.Name
FROM Inserted I
-- The following statement MUST be last in this trigger. It resets @@IDENTITY
-- to be the same as the earlier Scope_Identity() value.
SELECT MyTableID INTO #Trash FROM MyTable WHERE MyTableID = @MyTableID
Normally, the extra insert to the audit table would break everything, because since it has an identity column, @@IDENTITY will return that value instead of the one from the insertion to MyTable. However, the final select creates a new @@IDENTITY value that is the correct one, based on the Scope_Identity() that we saved from earlier. This also protects it against any possible additional AFTER trigger on the MyTable table.
Update:
I just noticed that an INSTEAD OF trigger isn't necessary here. This does everything you were looking for:
CREATE TRIGGER dbo.TR_Payments_Insert ON dbo.Payment FOR INSERT
AS
SET NOCOUNT ON;
IF EXISTS (
SELECT *
FROM
Inserted I
INNER JOIN dbo.Payment P ON I.CustomerID = P.CustomerID
WHERE
I.DateFrom < P.DateTo
AND P.DateFrom < I.DateTo
) ROLLBACK TRAN;
This of course allows scope_identity() to keep working. The only drawback is that a rolled-back insert on an identity table does consume the identity values used (the identity value is still incremented by the number of rows in the insert attempt).
I've been staring at this for a few minutes and don't have absolute certainty right now, but I think this preserves the meaning of an inclusive start time and an exclusive end time. If the end time was inclusive (which would be odd to me) then the comparisons would need to use <= instead of <.
Main problem: the trigger and Entity Framework both work in different scopes.
The problem is that if you generate the new PK value in the trigger, it is a different scope. Thus this command returns zero rows and EF will throw an exception.
The solution is to add the following SELECT statement at the end of your trigger:
SELECT * FROM deleted UNION ALL
SELECT * FROM inserted;
in place of * you can mention all the column names, including
SELECT IDENT_CURRENT('tablename') AS <IdentityColumnname>
Like araqnid commented, the trigger seems to roll back the transaction when a condition is met. You can do that more easily with an AFTER INSERT trigger:
CREATE TRIGGER [dbo].[TR_Payments_Insert]
ON [dbo].[Payment]
AFTER INSERT
AS
BEGIN
SET NOCOUNT ON;
IF <Condition>
BEGIN
ROLLBACK TRANSACTION
END
END
Then you can use SCOPE_IDENTITY() again, because the INSERT is no longer done in the trigger.
The condition itself seems to let two identical rows past, if they're in the same insert. With the AFTER INSERT trigger, you can rewrite the condition like:
IF EXISTS(
SELECT *
FROM dbo.Payment a
LEFT JOIN dbo.Payment b
ON a.Id <> b.Id
AND a.CustomerId = b.CustomerId
AND (a.DateFrom BETWEEN b.DateFrom AND b.DateTo
OR a.DateTo BETWEEN b.DateFrom AND b.DateTo)
WHERE b.Id is NOT NULL)
And it will catch duplicate rows, because now it can differentiate them based on Id. It also works if you delete a row and replace it with another row in the same statement.
Anyway, if you want my advice, move away from triggers altogether. As you can see even for this example they are very complex. Do the insert through a stored procedure. They are simpler and faster than triggers:
create procedure dbo.InsertPayment
@DateFrom datetime, @DateTo datetime, @CustomerId int, @AdminId int
as
BEGIN TRANSACTION
IF NOT EXISTS (
SELECT *
FROM dbo.Payment
WHERE CustomerId = @CustomerId
AND (@DateFrom BETWEEN DateFrom AND DateTo
OR @DateTo BETWEEN DateFrom AND DateTo))
BEGIN
INSERT into dbo.Payment
(DateFrom, DateTo, CustomerId, AdminId)
VALUES (@DateFrom, @DateTo, @CustomerId, @AdminId)
END
COMMIT TRANSACTION
A little late to the party, but I was looking into this issue myself. A workaround is to create a temp table in the calling procedure where the insert is being performed, insert the scope identity into that temp table from inside the instead of trigger, and then read the identity value out of the temp table once the insertion is complete.
In procedure:
CREATE table #temp ( id int )
... insert statement ...
select id from #temp
-- (you can add sorting and top 1 selection for extra safety)
drop table #temp
In instead of trigger:
-- this check covers you for any inserts that don't want an identity value returned (and therefore don't provide a temp table)
IF OBJECT_ID('tempdb..#temp') is not null
begin
insert into #temp(id)
values
(SCOPE_IDENTITY())
end
You probably want to call it something other than #temp for safety's sake (something long and random enough that no one else would be using it: #temp1234235234563785635).
