SQL Server: do statements in a batch all execute as one transaction? - sql-server

So I ran a SQL batch with a bunch of UPDATE statements:
UPDATE...;
UPDATE...;
UPDATE...;
It turns out the batch ran as a single transaction, and consequently many locks were created (our database monitoring software clearly shows that it was one big transaction), so apparently it's similar to me writing BEGIN...COMMIT around the batch:
BEGIN TRANS;
UPDATE...;
UPDATE...;
UPDATE...;
COMMIT;
Is that really the case? Are batches always executed as one transaction? (I am not in 'explicit transaction' mode.) Why is that, and can it be configured, or is that just the behavior of SQL Server?
Would it change anything if I wrapped every statement in its own BEGIN...COMMIT, something like:
BEGIN TRANS;
UPDATE...;
COMMIT;
BEGIN TRANS;
UPDATE...;
COMMIT;
BEGIN TRANS;
UPDATE...;
COMMIT;
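(For context, by "explicit transaction" mode I mean the IMPLICIT_TRANSACTIONS session setting; as far as I know it is at its default here. For reference, a quick way to check it:)
-- Bit 2 of @@OPTIONS reflects the IMPLICIT_TRANSACTIONS setting:
-- 0 = autocommit (the default), 2 = implicit transactions are on.
SELECT @@OPTIONS & 2 AS ImplicitTransactionsFlag;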
Thank You!

Assuming that you are executing your UPDATEs from an SSMS query window, you can refactor your code to:
BEGIN TRANS;
UPDATE...;
GO
UPDATE...;
GO
UPDATE...;
GO
COMMIT;
GO will make each UPDATE execute in its own batch, in the order called, but none of the updates will be committed until the final COMMIT is executed.
I have tested this via the following example that you can run for yourself in SSMS.
SET NOCOUNT ON
GO
IF EXISTS ( SELECT * FROM sys.tables WHERE name = 'BatchTest' )
DROP TABLE dbo.BatchTest;
GO
CREATE TABLE dbo.BatchTest (
id INT IDENTITY (1,1) PRIMARY KEY NOT NULL
, col1 VARCHAR(50)
, col2 VARCHAR(50)
, col3 VARCHAR(50)
, col4 VARCHAR(50)
)
GO
DECLARE @i INT = 1;
WHILE ( @i <= 100 ) BEGIN
INSERT INTO dbo.BatchTest ( col1 ) VALUES ( 'col1_' + CAST ( @i AS VARCHAR(10) ) );
SET @i += 1;
END
GO
SELECT * FROM dbo.BatchTest ORDER BY id;
/*
-- test tran/commit --
-- manually select and execute the code below after the BatchTest has been created and populated with data --
SET NOCOUNT ON
GO
BEGIN TRAN;
UPDATE dbo.BatchTest SET col2 = 'col2_' + CAST ( id AS VARCHAR(10) );
GO
UPDATE dbo.BatchTest SET col3 = 'col3_' + CAST ( id AS VARCHAR(10) );
GO
UPDATE dbo.BatchTest SET col4 = 'col4_' + CAST ( id AS VARCHAR(10) );
GO
--COMMIT;
ROLLBACK;
SELECT * FROM dbo.BatchTest ORDER BY id;
*/
Please note the comments in the code about selecting and executing the tran/commit test.
The single ROLLBACK undoes all of the UPDATEs within the current transaction, regardless of the use of GOs, whereas COMMIT would commit them all. Obviously, you should do your due diligence and test against your data, but this might help with your locking issues.
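If you want to see for yourself that the transaction really does span the GO-separated batches, here is a quick sketch (run it in a single SSMS window):
BEGIN TRAN;
SELECT @@TRANCOUNT AS OpenTransactions;  -- 1
GO
SELECT @@TRANCOUNT AS OpenTransactions;  -- still 1: GO starts a new batch, but the transaction stays open
GO
ROLLBACK;
SELECT @@TRANCOUNT AS OpenTransactions;  -- 0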
Some follow-up reading regarding SQL Server's use of GO and what it is intended for:
https://learn.microsoft.com/en-us/sql/t-sql/language-elements/sql-server-utilities-statements-go?view=sql-server-2017

Related

Using SQL Server triggers, how do I keep data in two identical tables the same without going into an infinite loop?

I know that probably the best way to accomplish this would be to make some changes in the application code to save all changes in both tables, but the company ordered that this be done with database logic, using triggers. So, there are two different databases, both with a table named User, and they both have the same model already. What I insert/update in User on database X has to be inserted/updated in table User on database Y, and vice versa.
I managed to make the insert and update triggers work in one direction (database X -> database Y), but now I'm thinking that when I create the triggers on database Y, a loop will happen. What am I missing, or what can I do to keep the trigger loop from happening?
This is what I have created for now on one of the databases:
---insert trigger
USE [DATABASE_Y]
GO
CREATE TRIGGER [dbo].[TR_USER_OnInsert] ON [dbo].[USER]
AFTER INSERT
AS
BEGIN
SET NOCOUNT ON
INSERT INTO [DATABASE_X].[dbo].[USER] (
usu_id,
usu_name,
usu_block,
usu_login,
usu_password
)
SELECT
usu_id,
usu_name,
usu_block,
usu_login,
usu_password
FROM INSERTED
SET NOCOUNT OFF
END
---update trigger
USE [DATABASE_Y]
GO
ALTER TRIGGER [dbo].[TR_USER_OnUpdate] ON [dbo].[USER]
AFTER UPDATE
AS
BEGIN
SET NOCOUNT ON
UPDATE X
SET
X.usu_id = INSERTED.usu_id,
X.usu_name = INSERTED.usu_name,
X.usu_block = INSERTED.usu_block,
X.usu_login = INSERTED.usu_login,
X.usu_password = INSERTED.usu_password
FROM [DATABASE_X].[dbo].[USER] X
INNER JOIN inserted ON X.usu_login = inserted.usu_login
SET NOCOUNT OFF
END
Interesting approach. There are a number of different ways to solve this problem, most of which are better than using triggers.
That said...
As part of your trigger code, declare some variables, then run a SELECT against the target database to fill them before running your UPDATE/INSERT. Use some simple logic to check that the values aren't already set to the same values, and you avoid getting into a loop.
Something like this for your update trigger:
USE [DATABASE_Y]
GO
ALTER TRIGGER [dbo].[TR_USER_OnUpdate] ON [dbo].[USER]
AFTER UPDATE
AS
BEGIN
SET NOCOUNT ON
DECLARE @id as varchar(100)
DECLARE @name as varchar(100)
DECLARE @block as varchar(100)
DECLARE @login as varchar(100)
DECLARE @password as varchar(100)
-- read the values currently stored on the other database for the row being updated
SELECT @id = X.usu_id,
@name = X.usu_name,
@block = X.usu_block,
@login = X.usu_login,
@password = X.usu_password
FROM [DATABASE_X].[dbo].[USER] X
INNER JOIN inserted ON X.usu_login = inserted.usu_login
-- only push the change across if something actually differs,
-- so the mirrored trigger on the other side finds nothing to update and the loop stops
IF EXISTS ( SELECT 1 FROM inserted
WHERE @id <> inserted.usu_id
OR @name <> inserted.usu_name
OR @block <> inserted.usu_block
OR @login <> inserted.usu_login
OR @password <> inserted.usu_password )
BEGIN
UPDATE X
SET
X.usu_id = inserted.usu_id,
X.usu_name = inserted.usu_name,
X.usu_block = inserted.usu_block,
X.usu_login = inserted.usu_login,
X.usu_password = inserted.usu_password
FROM [DATABASE_X].[dbo].[USER] X
INNER JOIN inserted ON X.usu_login = inserted.usu_login
END
SET NOCOUNT OFF
END
I don't know what your datatypes are so I just assumed varchar(100) for everything.
Make sure that all of the columns are NOT NULL. If you have any Nullable columns, make sure you add ISNULL logic to the variable comparisons.
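A tiny standalone illustration of why that ISNULL guard matters: any comparison involving NULL evaluates to UNKNOWN, so a changed value could silently be treated as "no change" and the update would never be propagated.
DECLARE @a varchar(100) = NULL, @b varchar(100) = 'x';
SELECT CASE WHEN @a <> @b THEN 'different' ELSE 'not detected' END AS PlainCompare,                      -- 'not detected': NULL <> 'x' is UNKNOWN
       CASE WHEN ISNULL(@a,'') <> ISNULL(@b,'') THEN 'different' ELSE 'not detected' END AS IsNullCompare; -- 'different'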
The insert trigger is a little easier, since you just need to check that the userId you're trying to insert doesn't already exist:
USE [DATABASE_Y]
GO
CREATE TRIGGER [dbo].[TR_USER_OnInsert] ON [dbo].[USER]
AFTER INSERT
AS
BEGIN
SET NOCOUNT ON
-- only copy the row across if it does not already exist on the other database
IF NOT EXISTS ( SELECT 1 FROM [DATABASE_X].[dbo].[USER] X
INNER JOIN inserted ON X.usu_login = inserted.usu_login )
BEGIN
INSERT INTO [DATABASE_X].[dbo].[USER] (
usu_id,
usu_name,
usu_block,
usu_login,
usu_password
)
SELECT
usu_id,
usu_name,
usu_block,
usu_login,
usu_password
FROM INSERTED
END
SET NOCOUNT OFF
END

How to get and use the value returned by a stored procedure in an INSERT INTO... SELECT... statement

I am new to the SQL language and still studying it. I'm having a hard time finding an answer on how I can use the stored procedure and insert its value into a table.
I have this stored procedure:
CREATE PROCEDURE TestID
AS
SET NOCOUNT ON;
BEGIN
DECLARE @NewID VARCHAR(30),
@GenID INT,
@BrgyCode VARCHAR(5) = '23548'
SET @GenID = (SELECT TOP (1) NextID
FROM dbo.RandomIDs
WHERE IsUsed = 0
ORDER BY RowNumber)
SET @NewID = @BrgyCode + '-' + CAST(@GenID AS VARCHAR(30))
UPDATE dbo.RandomIDs
SET dbo.RandomIDs.IsUsed = 1
WHERE dbo.RandomIDs.NextID = @GenID
SELECT @NewID
END;
and what I'm trying to do is this:
INSERT INTO dbo.Residents([ResidentID], NewResidentID, [ResLogdate],
...
SELECT
[ResidentID],
EXEC TestID ,
[ResLogdate],
....
FROM
source.dbo.Resident;
There is a table dbo.RandomIDs containing random six-digit, non-repeating numbers; I'm pulling a value out of it via the stored procedure and updating the IsUsed column of that row to 1. I'm transferring data from one database to another and doing some processing on the data while transferring it. Part of the processing is generating a new ID with the required format.
But I can't get it to work. I've been searching the net for hours now, but I'm not finding the information I need, which is the reason I'm writing. I hope someone can help me with this.
Thanks,
Darren
Your question is a little bit confusing, because you have not explained exactly what you want to do. As I understand it, you want to fetch a random ID from the RandomIDs table, perform some processing on NextID, insert it into the Residents table ([NewResidentId]), and at the end of the procedure fetch the data from the Residents table. If I got anything wrong, feel free to ask me.
Your procedure solution is as follows.
CREATE PROCEDURE [TestId]
AS
SET NOCOUNT ON;
BEGIN
DECLARE @NEWID NVARCHAR(30)
DECLARE @GENID BIGINT
DECLARE @BRGYCODE VARCHAR(5) = '23548'
DECLARE @COUNT INTEGER
DECLARE @ERR NVARCHAR(20) = 'NO IDS IN RANDOM ID'
SET @COUNT = (SELECT COUNT(NEXTID) FROM RandomIds WHERE [IsUsed] = 0)
SET @GENID = (SELECT TOP(1) [NEXTID] FROM RandomIds WHERE [IsUsed] = 0 ORDER BY [ID] ASC)
--SELECT @GENID AS ID
IF @COUNT = 0
BEGIN
SELECT @ERR AS ERROR
END
ELSE
BEGIN
SET @NEWID = @BRGYCODE + '-' + CAST(@GENID AS varchar(30))
UPDATE RandomIds SET [IsUsed] = 1 WHERE [NextId] = @GENID
INSERT INTO Residents ([NewResidentId] , [ResLogDate] ) VALUES (#NEWID , GETDATE())
SELECT * FROM Residents
END
END
This procedure will fetch data from your RandomIds table, perform some processing on NextID, and then insert it directly into the Residents table. If you want to insert some data supplied by the user, you can use parameters after declaring the procedure name, for example:
CREATE PROCEDURE [TESTID]
@PARAM1 DATATYPE,
@PARAM2 DATATYPE
AS
BEGIN
END
I'm not convinced that your requirement is a good one, but here is a way to do it.
Bear in mind that concurrent sessions will not be able to read your update until it is committed, so you effectively "lock" the update: other sessions will block until you commit or roll back. This is rubbish for concurrency, but that's a side effect of this requirement.
declare @cap table ( capturedValue int );
declare @GENID int;
update top (1) RandomIds set IsUsed = 1
output inserted.NextID into @cap
where IsUsed = 0;
set @GENID = (select max( capturedValue ) from @cap );
A better way would be to use an IDENTITY or SEQUENCE to solve your problem. This would leave gaps but help concurrency.
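For illustration, a minimal sketch of the SEQUENCE idea (the sequence name and starting value are my assumptions, not part of your schema):
CREATE SEQUENCE dbo.ResidentIdSeq AS INT START WITH 100000 INCREMENT BY 1;
GO
-- Each call hands out the next number, even if the surrounding transaction later rolls back,
-- so you get no duplicates but possibly gaps.
DECLARE @BrgyCode VARCHAR(5) = '23548';
SELECT @BrgyCode + '-' + CAST(NEXT VALUE FOR dbo.ResidentIdSeq AS VARCHAR(30)) AS NewResidentID;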

SQL Trigger Inconsistently firing

I have a SQL trigger on a table that works... most of the time, and I cannot figure out why the fields are sometimes NULL.
The trigger works by updating LastUpdateDateTime whenever something in the row is modified, and setting InsertDatetime when the row is first created.
For some reason this only seems to work some of the time.
ALTER TRIGGER [dbo].[DateTriggerTheatreListHeaders]
ON [dbo].[TheatreListHeaders]
AFTER INSERT,UPDATE
AS
BEGIN
SET NOCOUNT ON;
IF NOT EXISTS(SELECT * FROM DELETED)
BEGIN
UPDATE ES
SET InsertDatetime = Getdate()
,LastUpdateDateTime = Getdate()
FROM TheatreListHeaders es
JOIN Inserted I ON es.UNIQUETHEATRELISTNUMBER = I.UNIQUETHEATRELISTNUMBER
END
IF UPDATE(LastUpdateDateTime) OR UPDATE(InsertDatetime)
RETURN;
IF EXISTS (
SELECT
*
FROM
INSERTED I
JOIN
DELETED D
-- make sure to compare inserted with (same) deleted person
ON D.UNIQUETHEATRELISTNUMBER = I.UNIQUETHEATRELISTNUMBER
)
BEGIN
UPDATE ES
SET InsertDatetime = ISNULL(es.Insertdatetime,Getdate())
,LastUpdateDateTime = Getdate()
FROM TheatreListHeaders es
JOIN Inserted I ON es.UNIQUETHEATRELISTNUMBER = I.UNIQUETHEATRELISTNUMBER
END
END
A much simpler and more efficient approach to do what you are trying to do would be something like...
ALTER TRIGGER [dbo].[DateTriggerTheatreListHeaders]
ON [dbo].[TheatreListHeaders]
AFTER INSERT,UPDATE
AS
BEGIN
SET NOCOUNT ON;
--Determine if this is an INSERT OR UPDATE Action .
DECLARE @Action as char(1);
SET @Action = (CASE WHEN EXISTS(SELECT * FROM INSERTED)
AND EXISTS(SELECT * FROM DELETED)
THEN 'U' -- Set Action to Updated.
WHEN EXISTS(SELECT * FROM INSERTED)
THEN 'I' -- Set Action to Insert.
END);
UPDATE ES
SET InsertDatetime = CASE WHEN @Action = 'U'
THEN ISNULL(es.Insertdatetime,Getdate())
ELSE Getdate()
END
,LastUpdateDateTime = Getdate()
FROM TheatreListHeaders es
JOIN Inserted I ON es.UNIQUETHEATRELISTNUMBER = I.UNIQUETHEATRELISTNUMBER;
END
"If update()" is poorly defined/implemented in sql server IMO. It does not do what is implied. The function only determines if the column was set by a value in the triggering statement. For an insert, every column is implicitly (if not explicitly) assigned a value. Therefore it is not useful in an insert trigger and difficult to use in a single trigger that supports both inserts and updates. Sometimes it is better to write separate triggers.
Are you aware of recursive triggers? An insert statement will execute your trigger, which updates the same table; this causes the trigger to execute again, and so on. Is the (database-level) recursive triggers option off (which is typical)? If not, adjust your logic to support that.
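For reference, a quick way to check (and, if necessary, turn off) the database-level recursive triggers option; the database name in the ALTER is a placeholder:
SELECT name, is_recursive_triggers_on
FROM sys.databases
WHERE name = DB_NAME();
-- ALTER DATABASE [YourDatabase] SET RECURSIVE_TRIGGERS OFF;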
What are your expectations for the insert/update/merge statements against this table? This goes back to your requirements. Is the trigger to ignore any attempt to set the datetime columns directly and set them within the trigger always?
And lastly, what exactly does "works sometimes" actually mean? Do you have a test case that reproduces your issue? If you don't, then you can't really "fix" the logic without a specific failure case, but the above comments should give you sufficient clues. To be honest, your logic seems to be overly complicated. I'll add that it is also logically flawed in the way that it sets InsertDatetime to GETDATE() if the existing value is NULL during an update. IMO, it should reject any update that attempts to set the value to NULL, because that would overwrite a fact that should never change. M.Ali has provided an example that is usable but includes the created-timestamp problem. Below is an example that demonstrates a different path (assuming the recursive triggers option is off). It does not include the rejection logic, which you should consider. Notice the output of the MERGE execution carefully.
use tempdb;
set nocount on;
go
create table zork (id integer identity(1, 1) not null primary key,
descr varchar(20) not null default('zippy'),
created datetime null, modified datetime null);
go
create trigger zorktgr on zork for insert, update as
begin
declare @rc int = @@rowcount;
if @rc = 0 return;
set nocount on;
if update(created)
select 'created column updated', @rc as rc;
else
select 'created column NOT updated', @rc as rc;
if exists (select * from deleted) -- update :: do not rely on ##rowcount
update zork set modified = getdate()
where exists (select * from inserted as ins where ins.id = zork.id);
else
update zork set created = getdate(), modified = getdate()
where exists (select * from inserted as ins where ins.id = zork.id);
end;
go
insert zork default values;
select * from zork;
insert zork (descr) values ('bonk');
select * from zork;
update zork set created = null, descr = 'upd #1' where id = 1;
select * from zork;
update zork set descr = 'upd #2' where id = 1;
select * from zork;
waitfor delay '00:00:02';
merge zork as tgt
using (select 1 as id, 'zippity' as descr union all select 5, 'who me?') as src
on tgt.id = src.id
when matched then update set descr = src.descr
when not matched then insert (descr) values (src.descr)
;
select * from zork;
go
drop table zork;

Identity key counter increments by one although it is in TRY...CATCH and the transaction is rolled back? SSMS 2008

The identity counter increments by one even though the insert is inside a TRY...CATCH and the transaction is rolled back (SSMS 2008). Is there any way I can stop the +1, or roll it back too?
In order to understand why this happens, let's execute the sample code below first:
USE tempdb
CREATE TABLE dbo.Sales
(ID INT IDENTITY(1,1), Address VARCHAR(200))
GO
BEGIN TRANSACTION
INSERT DBO.Sales
( Address )
VALUES ( 'Dwarka, Delhi' );
ROLLBACK TRANSACTION
Now, look at the execution plan for the above query. The second-to-last operator from the right, Compute Scalar, is computing the value [Expr1003] = getidentity((629577281),(2),NULL), which is the IDENTITY value for the ID column. So this clearly indicates that IDENTITY values are fetched and incremented prior to insertion (the INSERT operator). So it is by design that, even if the transaction is rolled back at a later stage, the IDENTITY value that was created stays consumed.
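A quick way to see this for yourself, reusing the dbo.Sales table created above, is to watch IDENT_CURRENT while the table stays empty (a sketch):
BEGIN TRANSACTION
INSERT dbo.Sales ( Address ) VALUES ( 'Dwarka, Delhi' );
ROLLBACK TRANSACTION
SELECT (SELECT COUNT(*) FROM dbo.Sales) AS RowsInTable,   -- stays 0: nothing is ever committed
       IDENT_CURRENT('dbo.Sales')       AS LastIdentity;  -- keeps climbing with every rolled-back attempt
GO 3  -- run the whole batch three times and compare the outputs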
Now, in order to reseed the IDENTITY value to the maximum identity value present in the table + 1, you need sysadmin permission to execute the DBCC command below:
DBCC CHECKIDENT
(
table_name
[, { NORESEED | { RESEED [, new_reseed_value ] } } ]
)
[ WITH NO_INFOMSGS ]
So the final query should include the piece of code below prior to the rollback statement:
-- Code to check max ID value, and verify it again IDENTITY SEED
DECLARE @MaxValue INT = (SELECT ISNULL(MAX(ID),1) FROM dbo.Sales)
IF @MaxValue IS NOT NULL AND @MaxValue <> IDENT_CURRENT('dbo.Sales')
DBCC CHECKIDENT ( 'dbo.Sales', RESEED, @MaxValue )
--ROLLBACK TRANSACTION
That said, it is generally recommended to just leave this to SQL Server.
You are right, and the following code inserts a record with [Col01] equal to 2:
CREATE TABLE [dbo].[DataSource]
(
[Col01] SMALLINT IDENTITY(1,1)
,[Col02] TINYINT
);
GO
BEGIN TRY
BEGIN TRANSACTION;
INSERT INTO [dbo].[DataSource] ([Col02])
VALUES (1);
SELECT 1/0
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0
BEGIN
ROLLBACK TRANSACTION
END;
END CATCH;
GO
INSERT INTO [dbo].[DataSource] ([Col02])
VALUES (1);
SELECT *
FROM [dbo].[DataSource]
This is by design (as you can see in the documentation):
Consecutive values after server restart or other failures – SQL Server
might cache identity values for performance reasons and some of the
assigned values can be lost during a database failure or server
restart. This can result in gaps in the identity value upon insert. If
gaps are not acceptable then the application should use its own
mechanism to generate key values. Using a sequence generator with the
NOCACHE option can limit the gaps to transactions that are never
committed.
I tried using a NO CACHE sequence, but it does not work on SQL Server 2012 either:
CREATE TABLE [dbo].[DataSource]
(
[Col01] SMALLINT
,[Col02] TINYINT
);
CREATE SEQUENCE [dbo].[MyIndentyty]
START WITH 1
INCREMENT BY 1
NO CACHE;
GO
BEGIN TRY
BEGIN TRANSACTION;
INSERT INTO [dbo].[DataSource] ([Col01], [Col02])
SELECT NEXT VALUE FOR [dbo].[MyIndentyty], 1
SELECT 1/0
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0
BEGIN
ROLLBACK TRANSACTION
END;
END CATCH;
GO
INSERT INTO [dbo].[DataSource] ([Col01], [Col02])
SELECT NEXT VALUE FOR [dbo].[MyIndentyty], 1
SELECT *
FROM [dbo].[DataSource]
DROP TABLE [dbo].[DataSource];
DROP SEQUENCE [dbo].[MyIndentyty];
You can use MAX to solve this:
CREATE TABLE [dbo].[DataSource]
(
[Col01] SMALLINT
,[Col02] TINYINT
);
BEGIN TRY
BEGIN TRANSACTION;
DECLARE @Value SMALLINT = (SELECT MAX([Col01]) FROM [dbo].[DataSource]);
INSERT INTO [dbo].[DataSource] ([Col01], [Col02])
SELECT @Value, 1
SELECT 1/0
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0
BEGIN
ROLLBACK TRANSACTION
END;
END CATCH;
GO
DECLARE @Value SMALLINT = ISNULL((SELECT MAX([Col01]) FROM [dbo].[DataSource]), 1);
INSERT INTO [dbo].[DataSource] ([Col01], [Col02])
SELECT @Value, 1
SELECT *
FROM [dbo].[DataSource]
DROP TABLE [dbo].[DataSource];
But you must pay attention to your isolation level, as there are potential concurrency issues.
If you want to insert many rows at the same time, do the following (see the sketch after this list):
get the current max value
create a table in which to store the rows that are going to be inserted, generating a ranking (you can use an identity column or a ranking function) and adding the max value to it
insert the rows
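A minimal sketch of that idea, assuming the [dbo].[DataSource] table from the examples above still exists (recreate it if it was dropped); I skip the staging table and take the new rows from a VALUES list instead:
DECLARE @Max SMALLINT = ISNULL((SELECT MAX([Col01]) FROM [dbo].[DataSource]), 0);
INSERT INTO [dbo].[DataSource] ([Col01], [Col02])
SELECT @Max + ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS [Col01],  -- max value + ranking
       src.[Col02]
FROM (VALUES (CAST(10 AS TINYINT)), (20), (30)) AS src([Col02]);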

SQL Server - Auto-incrementation that allows UPDATE statements

When adding an item to my database, I need it to auto-determine the value for the DisplayOrder field. Identity (auto-increment) would be an ideal solution, but I need to be able to programmatically change (UPDATE) the values of the DisplayOrder column, and Identity doesn't seem to allow that. For the moment, I use this code:
CREATE PROCEDURE [dbo].[AddItem]
AS
DECLARE @DisplayOrder INT
SET @DisplayOrder = (SELECT MAX(DisplayOrder) FROM [dbo].[MyTable]) + 1
INSERT INTO [dbo].[MyTable] ( DisplayOrder ) VALUES ( @DisplayOrder )
Is this a good way to do it, or is there a better/simpler way?
A solution to this issue from "Inside Microsoft SQL Server 2008: T-SQL Querying"
CREATE TABLE dbo.Sequence(
val int IDENTITY (10000, 1) /*Seed this at whatever your current max value is*/
)
GO
CREATE PROC dbo.GetSequence
@val AS int OUTPUT
AS
BEGIN TRAN
SAVE TRAN S1
INSERT INTO dbo.Sequence DEFAULT VALUES
SET @val = SCOPE_IDENTITY()
ROLLBACK TRAN S1 /*Rolls back just as far as the save point to prevent the
sequence table filling up. The id allocated won't be reused*/
COMMIT TRAN
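A usage sketch for the procedure above (the variable name is mine):
DECLARE @NextVal int;
EXEC dbo.GetSequence @val = @NextVal OUTPUT;
SELECT @NextVal AS AllocatedValue;   -- 10000 on the first call, given the seed above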
Or another alternative from the same book that allocates ranges more easily. (You would need to consider whether to call this from inside or outside your transaction; inside would block other concurrent transactions until the first one commits.)
CREATE TABLE dbo.Sequence2(
val int
)
GO
INSERT INTO dbo.Sequence2 VALUES(10000);
GO
CREATE PROC dbo.GetSequence2
@val AS int OUTPUT,
@n AS int = 1
AS
UPDATE dbo.Sequence2
SET @val = val = val + @n;
SET @val = @val - @n + 1;
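And a usage sketch for the range version, allocating a block of five values at once (variable names are mine):
DECLARE @FirstInBlock int;
EXEC dbo.GetSequence2 @val = @FirstInBlock OUTPUT, @n = 5;
SELECT @FirstInBlock AS FirstValue, @FirstInBlock + 4 AS LastValue;  -- 10001 to 10005 on the first call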
You can set your incrementing column to use the identity property. Then, in processes that need to insert explicit values into the column, you can use the SET IDENTITY_INSERT command in your batch.
For inserts where you want to use the identity property, you exclude the identity column from the list of columns in your insert statement:
INSERT INTO [dbo].[MyTable] ( MyData ) VALUES ( @MyData )
When you want to insert rows where you are providing the value for the identity column, use the following:
SET IDENTITY_INSERT MyTable ON
INSERT INTO [dbo].[MyTable] ( DisplayOrder, MyData )
VALUES ( @DisplayOrder, @MyData )
SET IDENTITY_INSERT MyTable OFF
You should be able to UPDATE the column without any other steps.
You may also want to look into the DBCC CHECKIDENT command. This command will set your next identity value. If you are inserting rows where the next identity value might not be appropriate, you can use the command to set a new value.
DECLARE @DisplayOrder INT
SET @DisplayOrder = (SELECT MAX(DisplayOrder) FROM [dbo].[MyTable]) + 1
DBCC CHECKIDENT (MyTable, RESEED, @DisplayOrder)
Here's the solution that I kept:
CREATE PROCEDURE [dbo].[AddItem]
AS
DECLARE @DisplayOrder INT
BEGIN TRANSACTION
SET @DisplayOrder = (SELECT ISNULL(MAX(DisplayOrder), 0) FROM [dbo].[MyTable]) + 1
INSERT INTO [dbo].[MyTable] ( DisplayOrder ) VALUES ( @DisplayOrder )
COMMIT TRANSACTION
One thing you should do is add commands so that your procedure runs as a transaction; otherwise, two inserts running at the same time could produce two rows with the same value in DisplayOrder.
This is easy enough to achieve: add
begin transaction
at the start of your procedure, and
commit transaction
at the end.
Your way works fine (with a little modification) and is simple. I would wrap it in a transaction like @David Knell said. This would result in code like:
CREATE PROCEDURE [dbo].[AddItem]
AS
DECLARE @DisplayOrder INT
BEGIN TRANSACTION
SET @DisplayOrder = (SELECT MAX(DisplayOrder) FROM [dbo].[MyTable]) + 1
INSERT INTO [dbo].[MyTable] ( DisplayOrder ) VALUES ( @DisplayOrder )
COMMIT TRANSACTION
Wrapping your SELECT & INSERT in a transaction guards against your DisplayOrder values being duplicated by AddItem. If you are doing a lot of simultaneous adding (many per second), there may be contention on MyTable, but for occasional inserts it won't be a problem.
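If many sessions can call AddItem at the same time, a slightly more defensive variant (my own addition, not from the answers above) also takes an update/range lock while reading the MAX, so two callers cannot read the same value before either of them inserts; the procedure name is just for illustration:
CREATE PROCEDURE [dbo].[AddItem_Serialized]
AS
DECLARE @DisplayOrder INT
BEGIN TRANSACTION
-- UPDLOCK/HOLDLOCK keep a concurrent caller from reading the same MAX until we commit
SET @DisplayOrder = (SELECT ISNULL(MAX(DisplayOrder), 0)
FROM [dbo].[MyTable] WITH (UPDLOCK, HOLDLOCK)) + 1
INSERT INTO [dbo].[MyTable] ( DisplayOrder ) VALUES ( @DisplayOrder )
COMMIT TRANSACTION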
