Transaction with Read Committed Isolation Level and Table Constraints - sql-server

Do table constraints execute in the same transaction?
I have a transaction with the Read Committed isolation level which inserts some rows into a table. The table has a constraint that calls a function, which in turn selects some rows from the same table.
It looks like the function runs without knowing anything about the transaction: the select in the function returns only the rows that were in the table prior to the transaction.
Is there a workaround, or am I missing something? Thanks.
Here is the code for the insert and the constraint:
insert into Treasury.DariaftPardakhtDarkhastFaktor
(DarkhastFaktor, DariaftPardakht, Mablagh, CodeVazeiat,
ZamaneTakhsiseFaktor, MarkazPakhsh, ShomarehFaktor, User)
values
(@DarkhastFaktor, @DariaftPardakht, @Mablagh, @CodeVazeiat,
@ZamaneTakhsiseFaktor, @MarkazPakhsh, @ShomarehFaktor, @User);
constraint expression (enforce for inserts and updates):
([Treasury].[ufnCheckDarkhastFaktorMablaghConstraint]([DarkhastFaktor])=(1))
ufnCheckDarkhastFaktorMablaghConstraint:
returns bit
as
begin
declare @SumMablagh float
declare @Mablagh float
select @SumMablagh = isnull(sum(Mablagh), 0)
from Treasury.DariaftPardakhtDarkhastFaktor
where DarkhastFaktor = @DarkhastFaktor
select @Mablagh = isnull(MablaghKhalesFaktor, 0)
from Sales.DarkhastFaktor
where DarkhastFaktor = @DarkhastFaktor
if @Mablagh - @SumMablagh < -1
return 0
return 1
end

Check constraints are not enforced for delete operations, see http://msdn.microsoft.com/en-us/library/ms188258.aspx
CHECK constraints are not validated during DELETE statements. Therefore, executing DELETE statements on tables with certain types of check constraints may produce unexpected results.
Edit - to answer your question on workaround, you can use a delete trigger to roll back if your function call shows an invariant is broken.
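A sketch of that workaround, reusing the table and function names from the question (the trigger name and error message are made up for illustration):

```sql
-- Hypothetical delete trigger: re-check the invariant that the skipped
-- CHECK constraint would have enforced, and roll back if it is broken.
CREATE TRIGGER trgDariaftPardakhtDarkhastFaktorDelete
ON Treasury.DariaftPardakhtDarkhastFaktor
AFTER DELETE
AS
BEGIN
    SET NOCOUNT ON;
    IF EXISTS (
        SELECT 1
        FROM deleted AS d
        WHERE Treasury.ufnCheckDarkhastFaktorMablaghConstraint(d.DarkhastFaktor) = 0
    )
    BEGIN
        ROLLBACK TRANSACTION;
        RAISERROR ('Delete would break the Mablagh invariant.', 16, 1);
    END
END
```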
Edit #2 - @reticent, if you are adding rows then the function called by the check constraint does in fact see those rows. If it didn't, check constraints would be useless. Here is a simple example; you will find that the first two inserts succeed and the third fails as expected:
create table t1 (id int)
go
create function t1_validateSingleton ()
returns bit
as
begin
declare @ret bit
set @ret = 1
if exists (
select count(*)
from t1
group by id
having count(*) > 1
)
begin
set @ret = 0
end
return (@ret)
end
go
alter table t1
add constraint t1_singleton
check (dbo.t1_validateSingleton()=1)
go
insert t1 values (1)
insert t1 values (2)
insert t1 values (1)

Related

How to prevent insertion of cyclic reference in SQL

I have the following table:
create table dbo.Link
(
FromNodeId int not null,
ToNodeId int not null
)
Rows in this table represent links between nodes.
I want to prevent inserts or updates to this table from creating a cyclic relationship between nodes.
So if the table contains:
(1,2)
(2,3)
it should not be allowed to contain any of the following:
(1,1)
(2,1)
(3,1)
I'm happy to treat (1,1) separately (e.g. using a CHECK CONSTRAINT) if it makes the solution more straightforward.
I was thinking of creating an AFTER INSERT trigger with a recursive CTE (though there may be an easier way to do it).
Assuming this is the way to go, what would the trigger definition be? If there is a more elegant way, what is it?
Note first that it may be preferable to detect cycles outside the database: recursive CTEs aren't known for their good performance, and neither is a trigger that runs for each insert statement. For large graphs, a solution based on the one below will likely be inefficient.
Suppose you create the table as follows:
CREATE TABLE dbo.lnk (
node_from INT NOT NULL,
node_to INT NOT NULL,
CONSTRAINT CHK_self_link CHECK (node_from<>node_to),
CONSTRAINT PK_lnk_node_from_node_to PRIMARY KEY(node_from,node_to)
);
That blocks inserts where node_from equals node_to, as well as rows that already exist.
The following trigger should detect cyclic references by throwing an exception if a cyclic reference is detected:
CREATE TRIGGER TRG_no_circulars_on_lnk ON dbo.lnk AFTER INSERT
AS
BEGIN
DECLARE @cd INT;
WITH det_path AS (
SELECT
anchor=i.node_from,
node_to=l.node_to,
is_cycle=CASE WHEN i.node_from/*anchor*/=l.node_to THEN 1 ELSE 0 END
FROM
inserted AS i
INNER JOIN dbo.lnk AS l ON
l.node_from=i.node_to
UNION ALL
SELECT
dp.anchor,
node_to=l.node_to,
is_cycle=CASE WHEN dp.anchor=l.node_to THEN 1 ELSE 0 END
FROM
det_path AS dp
INNER JOIN dbo.lnk AS l ON
l.node_from=dp.node_to
WHERE
dp.is_cycle=0
)
SELECT TOP 1
@cd=is_cycle
FROM
det_path
WHERE
is_cycle=1
OPTION
(MAXRECURSION 0);
IF @cd IS NOT NULL
THROW 67890, 'Insert would cause cyclic reference', 1;
END
I tested this for a limited number of inserts.
INSERT INTO dbo.lnk(node_from,node_to)VALUES(1,2); -- OK
INSERT INTO dbo.lnk(node_from,node_to)VALUES(2,3); -- OK
INSERT INTO dbo.lnk(node_from,node_to)VALUES(3,4); -- OK
And
INSERT INTO dbo.lnk(node_from,node_to)VALUES(2,3); -- PK violation
INSERT INTO dbo.lnk(node_from,node_to)VALUES(1,1); -- Check constraint violation
INSERT INTO dbo.lnk(node_from,node_to)VALUES(3,2); -- Exception: Insert would cause cyclic reference
INSERT INTO dbo.lnk(node_from,node_to)VALUES(3,1); -- Exception: Insert would cause cyclic reference
INSERT INTO dbo.lnk(node_from,node_to)VALUES(4,1); -- Exception: Insert would cause cyclic reference
It also detects cyclic references already present in the inserted rows when inserting more than one row at once, or when a path longer than one edge would be introduced into the graph. Starting from the same initial inserts:
INSERT INTO dbo.lnk(node_from,node_to)VALUES(8,9),(9,8); -- Exception: Insert would cause cyclic reference
INSERT INTO dbo.lnk(node_from,node_to)VALUES(4,5),(5,6),(6,1); -- Exception: Insert would cause cyclic reference
EDIT: handles multi-record inserts; logic moved into a separate function
I took a procedural approach. It is very fast and almost independent of the number of records in the link table and of the graph's "density".
I tested it on a table with 10,000 links, with node values from 1 to 1000, and it does not slow down as the link table grows or gets denser.
In addition, the function can be used to test values before the insert, or (for example) if you don't want to use a trigger at all and prefer to move the test logic to the client.
A consideration on the recursive CTE: BE CAREFUL!
I tested the accepted answer on my test table (10k rows), but after 25 minutes I cancelled the insert operation of one single row because the query hung with no result.
Downsizing the table to 5k rows, the insert of a single record could still take 2-3 minutes.
It is very dependent on the "population" of the graph. If you insert a new path, or add a node to a path with little "ramification", it is quite fast, but you have no control over that.
When the graph becomes more "dense", this solution will blow up in your face.
Consider your needs very carefully.
So, let's see how to do it.
First of all, I set the PK of the table to both columns and added an index on the second column for full coverage. (The CHECK on FromNodeId<>ToNodeId is not needed because the algorithm already covers this case.)
CREATE TABLE [dbo].[Link](
[FromNodeId] [int] NOT NULL,
[ToNodeId] [int] NOT NULL,
CONSTRAINT [PK_Link] PRIMARY KEY CLUSTERED ([FromNodeId],[ToNodeId])
)
GO
CREATE NONCLUSTERED INDEX [ToNodeId] ON [dbo].[Link] ([ToNodeId])
GO
Then I built a function to test the validity of a single link:
drop function fn_test_link
go
create function fn_test_link(@f int, @t int)
returns int
as
begin
--SET NOCOUNT ON
declare @p table (id int identity primary key, l int, t int, unique (l,t,id))
declare @r int = 0
declare @i int = 0
-- link is not self-referencing
if @f<>@t begin
-- there are links that start from where the new link wants to end (possible cycle)
if exists(select 1 from link where fromnodeid=@t) begin
-- PAY ATTENTION.. HERE THE LINK TABLE ALREADY HAS ALL RECORDS ADDED (ALSO THE NEW ONES IF THE PROCEDURE IS CALLED FROM AN AFTER INSERT TRIGGER)
-- LOAD ALL THE PATHS TOUCHED BY THE DESTINATION OF THE TEST NODE
set @i = 0
insert into @p
select distinct @i, ToNodeId
from link
where fromnodeid=@t
set @i = 1
-- THERE IS AT LEAST ONE STEP TO FOLLOW DOWN THE PATHS
while exists(select 1 from @p where l=@i-1) begin
-- LOAD THE NEXT STEP FOR ALL THE PATHS TOUCHED
insert into @p
select distinct @i, l.ToNodeId
from link l
join @p p on p.l = @i-1 and p.t = l.fromnodeid
-- CHECK IF THIS STEP HAS REACHED THE TEST NODE'S START
if exists(select 1 from @p where l=@i and t=@f) begin
-- WE ARE EATING OUR OWN TAIL! CIRCULAR REFERENCE FOUND
set @r = -1
break
end
-- THE NODE IS STILL GOOD
-- DELETE FROM THE LIST ALREADY-TESTED DUPLICATED PATHS
-- (THIS IS A BIG OPTIMIZATION; WHEN PATHS CROSS EACH OTHER YOU RISK TESTING THE SAME PATHS MANY TIMES)
delete p
from @p p
where l = @i
and (exists(select 1 from @p px where px.l < p.l and px.t = p.t))
set @i = @i + 1
end
if @r<0
-- a circular reference was found
set @r = 0
else
-- no circular reference was found
set @r = 1
end else begin
-- THERE ARE NO LINKS THAT START FROM THE TESTED NODE'S DESTINATION (CIRCULAR REFERENCE NOT POSSIBLE)
set @r = 1
end
end; -- link is not self-referencing
--select * from @p
return @r
end
GO
Now let's call it from a trigger.
If more than one row is inserted, the trigger tests each link against the whole insert (old table + new records): if all are valid and the final table is consistent, the insert completes; if any link is not valid, the insert aborts.
DROP TRIGGER tr_test_circular_reference
GO
CREATE TRIGGER tr_test_circular_reference ON link AFTER INSERT
AS
BEGIN
SET NOCOUNT ON
declare @p table (id int identity primary key, l int, f int, t int)
declare @f int = 0
declare @t int = 0
declare @n int = 0
declare @i int = 1
declare @ins table (id int identity primary key, f int, t int)
insert into @ins select * from inserted
set @n = @@ROWCOUNT;
-- there are links to insert
while @i<=@n begin
-- load link
select @f=f, @t=t from @ins where id = @i
if dbo.fn_test_link(@f, @t)=0 begin
declare @m nvarchar(255)
set @m = formatmessage('Insertion of link (%d,%d) would cause circular reference (n.%d)', @f, @t, @i);
THROW 50000, @m, 1
end
set @i = @i + 1
end
END
GO
I hope this helps.

Identity key counter increment by one although it is in TRY Catch and Transaction is roll-backed ? SSMS 2008

The identity counter increments by one even though the insert is inside a TRY...CATCH and the transaction is rolled back (SSMS 2008). Is there any way I can stop the +1, or roll it back too?
In order to understand why this happens, let's execute the sample code below first:
USE tempdb
CREATE TABLE dbo.Sales
(ID INT IDENTITY(1,1), Address VARCHAR(200))
GO
BEGIN TRANSACTION
INSERT DBO.Sales
( Address )
VALUES ( 'Dwarka, Delhi' );
ROLLBACK TRANSACTION
The execution plan for the above query shows that the second-to-last operator from the right, a Compute Scalar, computes [Expr1003]=getidentity((629577281),(2),NULL), which is the IDENTITY value for the ID column. This clearly indicates that IDENTITY values are fetched and incremented prior to the insertion (the INSERT operator), so it is by design that the generated IDENTITY value remains even if the transaction is rolled back at a later stage.
Now, in order to reseed the IDENTITY value to the maximum identity value present in the table, you need sysadmin permission to execute the DBCC command below:
DBCC CHECKIDENT
(
table_name
[, { NORESEED | { RESEED [, new_reseed_value ] } } ]
)
[ WITH NO_INFOMSGS ]
So the final query should include the piece of code below prior to the rollback statement:
-- Code to check the max ID value, and verify it against the IDENTITY seed
DECLARE @MaxValue INT = (SELECT ISNULL(MAX(ID),1) FROM dbo.Sales)
IF @MaxValue IS NOT NULL AND @MaxValue <> IDENT_CURRENT('dbo.Sales')
DBCC CHECKIDENT ( 'dbo.Sales', RESEED, @MaxValue )
--ROLLBACK TRANSACTION
That said, it is generally recommended to leave identity management to SQL Server.
You are right, and the following code inserts a record with [Col01] equal to 2:
CREATE TABLE [dbo].[DataSource]
(
[Col01] SMALLINT IDENTITY(1,1)
,[Col02] TINYINT
);
GO
BEGIN TRY
BEGIN TRANSACTION;
INSERT INTO [dbo].[DataSource] ([Col02])
VALUES (1);
SELECT 1/0
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0
BEGIN
ROLLBACK TRANSACTION
END;
END CATCH;
GO
INSERT INTO [dbo].[DataSource] ([Col02])
VALUES (1);
SELECT *
FROM [dbo].[DataSource]
This is by design (as you can see in the documentation):
Consecutive values after server restart or other failures - SQL Server might cache identity values for performance reasons and some of the assigned values can be lost during a database failure or server restart. This can result in gaps in the identity value upon insert. If gaps are not acceptable then the application should use its own mechanism to generate key values. Using a sequence generator with the NOCACHE option can limit the gaps to transactions that are never committed.
I tried using a NO CACHE sequence, but it does not prevent the gaps on SQL Server 2012:
CREATE TABLE [dbo].[DataSource]
(
[Col01] SMALLINT
,[Col02] TINYINT
);
CREATE SEQUENCE [dbo].[MyIndentyty]
START WITH 1
INCREMENT BY 1
NO CACHE;
GO
BEGIN TRY
BEGIN TRANSACTION;
INSERT INTO [dbo].[DataSource] ([Col01], [Col02])
SELECT NEXT VALUE FOR [dbo].[MyIndentyty], 1
SELECT 1/0
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0
BEGIN
ROLLBACK TRANSACTION
END;
END CATCH;
GO
INSERT INTO [dbo].[DataSource] ([Col01], [Col02])
SELECT NEXT VALUE FOR [dbo].[MyIndentyty], 1
SELECT *
FROM [dbo].[DataSource]
DROP TABLE [dbo].[DataSource];
DROP SEQUENCE [dbo].[MyIndentyty];
You can use MAX to solve this:
CREATE TABLE [dbo].[DataSource]
(
[Col01] SMALLINT
,[Col02] TINYINT
);
BEGIN TRY
BEGIN TRANSACTION;
DECLARE #Value SMALLINT = (SELECT MAX([Col01]) FROM [dbo].[DataSource]);
INSERT INTO [dbo].[DataSource] ([Col01], [Col02])
SELECT #Value, 1
SELECT 1/0
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0
BEGIN
ROLLBACK TRANSACTION
END;
END CATCH;
GO
DECLARE #Value SMALLINT = ISNULL((SELECT MAX([Col01]) FROM [dbo].[DataSource]), 1);
INSERT INTO [dbo].[DataSource] ([Col01], [Col02])
SELECT #Value, 1
SELECT *
FROM [dbo].[DataSource]
DROP TABLE [dbo].[DataSource];
But you must pay attention to your isolation level, because two concurrent transactions can read the same MAX:
If you want to insert many rows at the same time, do the following:
get the current max value
create table where to store the rows (that are going to be inserted) generating ranking (you can use identity column, you can use ranking function) and adding the max value to it
insert the rows
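The steps above can be sketched as follows, against the [dbo].[DataSource] table from this answer (ROW_NUMBER is one of the ranking options mentioned; run it under an isolation level/locking scheme that keeps the MAX stable):

```sql
-- Hypothetical batch insert: read the current max once, then offset a
-- generated ranking of the incoming rows by it.
DECLARE @Base SMALLINT = (SELECT ISNULL(MAX([Col01]), 0) FROM [dbo].[DataSource]);

INSERT INTO [dbo].[DataSource] ([Col01], [Col02])
SELECT @Base + ROW_NUMBER() OVER (ORDER BY (SELECT NULL)), -- ranking + max
       v.c
FROM (VALUES (10), (20), (30)) AS v(c);                    -- the rows to insert
```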

SQL Server : Query not running as needed

I am working with Sage Evolution and do a lot of the back-end work to customize it for our company.
I need to write a trigger where, when a user enters a negative quantity, the system must not allow the transaction; however, when the user enters a negative quantity and the product belongs to the "chemicals" group, it must process the transaction.
Here is the code I have written so far.
DECLARE
@iAfterfQuantity Int,
@iAfteriStockCodeID Int,
@iAfterStockItemGroup VarChar
SELECT
@iAfterfQuantity = fQuantity,
@iAfteriStockCodeID = iStockCodeID
FROM
INSERTED
SELECT
@iAfterStockItemGroup = ItemGroup
FROM
dbo.stkItem
WHERE
StockLink = @iAfteriStockCodeID
BEGIN
IF @iAfterfQuantity < 0 AND @iAfterStockItemGroup <> 'chemicals'
BEGIN
RAISERROR ('',16,1)
ROLLBACK TRANSACTION
END
END
This is a task better suited to a check constraint than to a trigger, especially considering the fact that you are raising an error.
First, create the check function:
CREATE FUNCTION fn_FunctionName
(
@iAfterfQuantity Int,
@iAfteriStockCodeID Int
)
RETURNS bit
AS
BEGIN
DECLARE @iAfterStockItemGroup VarChar(150) -- Must specify length!
SELECT @iAfterStockItemGroup = ItemGroup FROM dbo.stkItem WHERE StockLink=@iAfteriStockCodeID
IF @iAfterfQuantity < 0 AND @iAfterStockItemGroup <> 'chemicals'
RETURN 0
RETURN 1 -- will be executed only if the condition is false...
END
Then, alter your table to add the check constraint:
ALTER TABLE YourTableName
ADD CONSTRAINT ck_ConstraintName
CHECK (dbo.fn_FunctionName(fQuantity, iStockCodeID) = 1)
GO

Why I can not insert or update valid data after Instead of Trigger has been fired?

Here is my simple test trigger transactions:
First, I designed a table:
CREATE TABLE T2_Score (
UserID INT Primary key,
Months INT,
Score INT
);
Then I created an INSTEAD OF trigger to enforce the restriction that the value of Months must be between 1 and 12.
CREATE TRIGGER T2_Score_Months_Restriction
ON T2_Score
INSTEAD OF UPDATE, INSERT
AS
IF ((SELECT Months FROM inserted) > 12)
BEGIN
PRINT ('Month must be between 1 and 12!')
ROLLBACK TRAN
END
But the issue is that I cannot insert any valid values after the trigger has fired once.
For example:
INSERT INTO T2_Score VALUES (11,15,18);
And the insert fails if I then try a valid value (no warning is raised, but the row is not inserted into the table), for example:
INSERT INTO T2_Score VALUES (11,12,18);
Can someone explain why and how to modify my code? Thanks!!
Wouldn't it be much simpler to use a constraint?
CREATE TABLE T2_Score (
UserID INT Primary key,
Months INT,
Score INT,
CONSTRAINT CHK_MONTHS_1_12 CHECK (Months BETWEEN 1 AND 12)
);
That trigger will never insert any rows. It is an INSTEAD OF trigger that doesn't have any INSERT statement.
You need:
CREATE TRIGGER T2_Score_Months_Restriction
ON T2_Score
INSTEAD OF UPDATE, INSERT
AS
IF (SELECT MAX(Months) FROM inserted) > 12 BEGIN
PRINT ('Month must be between 1 and 12!') ;
ROLLBACK TRAN ;
END
ELSE BEGIN
INSERT INTO T2_Score SELECT * FROM inserted ;
END ;
Or perhaps:
CREATE TRIGGER T2_Score_Months_Restriction
ON T2_Score
INSTEAD OF UPDATE, INSERT
AS
INSERT INTO T2_Score
SELECT * FROM inserted WHERE Months BETWEEN 1 AND 12 ;
The second version allows partial inserts.
INSERT INTO T2_Score VALUES ( 4, 4, 4 ), ( 13, 13, 13 );
-- With the first trigger, this will insert 0 rows.
-- With the second trigger, this will insert 1 row.
The INSTEAD OF trigger does not exist in MySQL. There you can use BEFORE INSERT and BEFORE UPDATE triggers and abort the action:
CREATE TRIGGER T2_Score_Months_Restriction
BEFORE INSERT ON T2_Score
FOR EACH ROW
BEGIN
IF NEW.Months > 12 OR NEW.Months < 1 THEN
SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'Month must be between 1 and 12!';
END IF;
END;
Your INSERT will be executed atomically: it will succeed as a whole, or it will fail as a whole. You might be able to work around that by deleting invalid values from inserted, but I don't recommend it; it's a really bad idea to allow a SQL statement to partially succeed when part of it fails.

SQL Server - Auto-incrementation that allows UPDATE statements

When adding an item in my database, I need it to auto-determine the value for the field DisplayOrder. Identity (auto-increment) would be an ideal solution, but I need to be able to programmatically change (UPDATE) the values of the DisplayOrder column, and Identity doesn't seem to allow that. For the moment, I use this code:
CREATE PROCEDURE [dbo].[AddItem]
AS
DECLARE @DisplayOrder INT
SET @DisplayOrder = (SELECT MAX(DisplayOrder) FROM [dbo].[MyTable]) + 1
INSERT INTO [dbo].[MyTable] ( DisplayOrder ) VALUES ( @DisplayOrder )
Is this a good way to do it, or is there a better/simpler way?
A solution to this issue from "Inside Microsoft SQL Server 2008: T-SQL Querying"
CREATE TABLE dbo.Sequence(
val int IDENTITY (10000, 1) /*Seed this at whatever your current max value is*/
)
GO
CREATE PROC dbo.GetSequence
@val AS int OUTPUT
AS
BEGIN TRAN
SAVE TRAN S1
INSERT INTO dbo.Sequence DEFAULT VALUES
SET @val=SCOPE_IDENTITY()
ROLLBACK TRAN S1 /*Rolls back just as far as the save point to prevent the
sequence table filling up. The id allocated won't be reused*/
COMMIT TRAN
Or another alternative from the same book that allocates ranges easier. (You would need to consider whether to call this from inside or outside your transaction - inside would block other concurrent transactions until the first one commits)
CREATE TABLE dbo.Sequence2(
val int
)
GO
INSERT INTO dbo.Sequence2 VALUES(10000);
GO
CREATE PROC dbo.GetSequence2
@val AS int OUTPUT,
@n as int = 1
AS
UPDATE dbo.Sequence2
SET @val = val = val + @n;
SET @val = @val - @n + 1;
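A usage sketch for this range-allocating version (the variable name is made up): the compound assignment SET @val = val = val + @n both advances the stored value and captures the advanced value in one statement, and the second SET rewinds @val to the first value of the reserved range.

```sql
DECLARE @first int;

-- Reserve a block of 5 consecutive values in one call.
EXEC dbo.GetSequence2 @val = @first OUTPUT, @n = 5;

-- @first .. @first + 4 now belong to this caller; a concurrent caller
-- would receive a block starting at @first + 5 or later.
```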
You can set your incrementing column to use the identity property. Then, in processes that need to insert values into the column, you can use the SET IDENTITY_INSERT command in your batch.
For inserts where you want to use the identity property, you exclude the identity column from the list of columns in your insert statement:
INSERT INTO [dbo].[MyTable] ( MyData ) VALUES ( @MyData )
When you want to insert rows where you are providing the value for the identity column, use the following:
SET IDENTITY_INSERT MyTable ON
INSERT INTO [dbo].[MyTable] ( DisplayOrder, MyData )
VALUES ( @DisplayOrder, @MyData )
SET IDENTITY_INSERT MyTable OFF
You should be able to UPDATE the column without any other steps.
You may also want to look into the DBCC CHECKIDENT command. This command will set your next identity value. If you are inserting rows where the next identity value might not be appropriate, you can use the command to set a new value.
DECLARE @DisplayOrder INT
SET @DisplayOrder = (SELECT MAX(DisplayOrder) FROM [dbo].[MyTable]) + 1
DBCC CHECKIDENT (MyTable, RESEED, @DisplayOrder)
Here's the solution that I kept:
CREATE PROCEDURE [dbo].[AddItem]
AS
DECLARE @DisplayOrder INT
BEGIN TRANSACTION
SET @DisplayOrder = (SELECT ISNULL(MAX(DisplayOrder), 0) FROM [dbo].[MyTable]) + 1
INSERT INTO [dbo].[MyTable] ( DisplayOrder ) VALUES ( @DisplayOrder )
COMMIT TRANSACTION
One thing you should do is add commands so that your procedure runs as a transaction; otherwise two inserts running at the same time could produce two rows with the same value in DisplayOrder.
This is easy enough to achieve: add
begin transaction
at the start of your procedure, and
commit transaction
at the end.
Your way works fine (with a little modification) and is simple. I would wrap it in a transaction like @David Knell said. This would result in code like:
CREATE PROCEDURE [dbo].[AddItem]
AS
DECLARE @DisplayOrder INT
BEGIN TRANSACTION
SET @DisplayOrder = (SELECT MAX(DisplayOrder) FROM [dbo].[MyTable]) + 1
INSERT INTO [dbo].[MyTable] ( DisplayOrder ) VALUES ( @DisplayOrder )
COMMIT TRANSACTION
Wrapping your SELECT and INSERT in a transaction guarantees that your DisplayOrder values won't be duplicated by AddItem. If you are doing a lot of simultaneous adding (many per second), there may be contention on MyTable, but for occasional inserts it won't be a problem.
