Trigger reverses the changes made - SQL Server

I'm working on an E-commerce system where I have an order table that stores all the information regarding an order. The orders go through different stages: Open, Verified, In Process, etc. And I'm keeping counts of these orders at different stages e.g. Open Orders 95, Verified 5, In Process 3, etc.
When a new order is inserted in the table, I have a trigger that increments the Open Orders by 1. Similarly, I have a trigger for updates which checks the order's previous stage and the next to decrement and increment accordingly.
The INSERT trigger works fine as described above. But the UPDATE trigger has a weird behavior: it makes the desired changes to the counts, but then reverses them for some reason.
For instance, upon changing the status of an order from Open to Verified, the ideal behavior would be to decrement Open Orders by 1 and increment Verified Orders by 1. The trigger currently performs the desired action, but then for some reason restores the previous value.
Here's a snippet of my trigger where I check if the order previously belonged to the Open status and is now being updated to Verified status:
IF @@ROWCOUNT = 0 RETURN

DECLARE @orderID VARCHAR(MAX) -- orderID of the order that is being updated
DECLARE @storeID VARCHAR(MAX) -- storeID of the store the order belongs to

SELECT TOP 1
    @orderID = i.id,
    @storeID = i.storeID
FROM
    inserted AS i
    INNER JOIN deleted AS d
        ON i.id = d.id

-- IF from Open Order
IF EXISTS (
    SELECT *
    FROM deleted
    WHERE
        orderStatus = 'Open' AND
        id = @orderID
)
BEGIN
    -- IF to Verified Order
    IF EXISTS (
        SELECT *
        FROM inserted
        WHERE
            orderStatus = 'Verified' AND
            id = @orderID
    )
    BEGIN
        UPDATE order_counts
        SET
            open_orders = open_orders - @@ROWCOUNT,
            verified_orders = verified_orders + @@ROWCOUNT
        WHERE storeID = @storeID
    END
END
EDIT:
Here's some extra information which will be helpful in light of the first comment on the question:
I have a lot of records in the table, so running COUNT() again and again has a significant impact on overall performance. This is why I'm keeping the counts in a separate table. Also, I've written the trigger so that it handles both single-record and multi-record changes. I only check one row because I know that, in the case of multiple records, they will all be going through the same change of status. Hence the decrement/increment by @@ROWCOUNT.

If you can tolerate a slightly different representation of the order counts, I'd strongly suggest using an indexed view instead¹:
create table dbo.Orders (
ID int not null,
OrderStatus varchar(20) not null,
constraint PK_Orders PRIMARY KEY (ID)
)
go
create view dbo.OrderCounts
with schemabinding
as
select
OrderStatus,
COUNT_BIG(*) as Cnt
from
dbo.Orders
group by OrderStatus
go
create unique clustered index IX_OrderCounts on dbo.OrderCounts (OrderStatus)
go
insert into dbo.Orders (ID,OrderStatus) values
(1,'Open'),
(2,'Open'),
(3,'Verified')
go
update dbo.Orders set OrderStatus = 'Verified' where ID = 2
go
select * from dbo.OrderCounts
Results:
OrderStatus Cnt
-------------------- --------------------
Open 1
Verified 2
This has the advantage that, whilst behind the scenes SQL Server is doing something very similar to running triggers, this code has been debugged thoroughly and is correct.
In your current attempted trigger, a further reason the trigger is broken is that @@ROWCOUNT isn't "sticky": it doesn't remember the number of rows affected by the original UPDATE once you run other statements inside your trigger, because those statements also set @@ROWCOUNT.
¹ You can always stack a non-indexed view atop this view and perform a PIVOT if you really want the counts in a single row and in multiple columns.

The reason for this behavior is the use of @@ROWCOUNT multiple times: every statement, including the one that reads it, resets @@ROWCOUNT, so only the first statement that reads it sees the original value. Instead, capture the value into a variable and use that variable across the trigger. Check the scenario below:
CREATE DATABASE Test
USE Test
CREATE TABLE One
(
ID INT IDENTITY(1,1)
,Name NVARCHAR(MAX)
)
GO
CREATE TRIGGER TR_One ON One FOR INSERT,UPDATE
AS
BEGIN
    PRINT @@ROWCOUNT
    SELECT @@ROWCOUNT
END
UPDATE One
SET Name = 'Name4'
WHERE ID = 3
RESULTS :-
The PRINT @@ROWCOUNT statement gives a value of 1, whereas the SELECT @@ROWCOUNT gives a value of 0, because the PRINT statement itself has already reset @@ROWCOUNT.
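Following that advice, here is a minimal sketch of the corrected pattern (using the same One table from the scenario above; the variable name @rc is mine). The key point is that @@ROWCOUNT must be captured in the very first statement of the trigger:

```sql
CREATE TRIGGER TR_One ON One FOR INSERT, UPDATE
AS
BEGIN
    -- Capture @@ROWCOUNT immediately; every subsequent
    -- statement (even PRINT) resets it
    DECLARE @rc INT = @@ROWCOUNT;

    PRINT @rc;
    SELECT @rc;
END
```

With this change, both the PRINT and the SELECT report the number of rows affected by the triggering statement.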

Related

How to lock and access 25 records at a time

I have a console application which will be running on 3 PCs. I want console applications 2 and 3 to be unable to access the records that have already been selected by console application 1.
I found sample code that shows a way to use a table as a queue. The SQL below works with 1 record at a time, but my requirement is to work with 25 records, so please guide me on what to change in the SQL below.
I rarely work with SQL Server. I found code that is close to my requirement, but I'm not sure what I need to change to make it work, so I'm looking for suggestions.
Please see this code:
DECLARE @NextID INTEGER

BEGIN TRANSACTION

-- Find the next queued item that is waiting to be processed
SELECT TOP 1 @NextID = ID
FROM MyQueueTable WITH (UPDLOCK, READPAST)
WHERE StateField = 0
ORDER BY ID ASC

-- if we've found one, mark it as being processed
IF @NextID IS NOT NULL
    UPDATE MyQueueTable SET Status = 1 WHERE ID = @NextID

COMMIT TRANSACTION

-- If we've got an item from the queue, return to whatever is going to process it
IF @NextID IS NOT NULL
    SELECT * FROM MyQueueTable WHERE ID = @NextID
I need to process 25 records at a time, so what do I need to change in the above code?
Also, why is the last SELECT required?
IF @NextID IS NOT NULL
    SELECT * FROM MyQueueTable WHERE ID = @NextID
My first SELECT already returns data, so is the last SELECT needed at all? Please tell me the purpose of the last SELECT.
You can use UPDATE with an OUTPUT clause to output the data into a temp table in one statement. This does not require a transaction.
DROP TABLE IF EXISTS #process;
CREATE TABLE #process (ID INT);
-- Find the next queued item that is waiting to be processed
UPDATE mqt
SET Status = 1
OUTPUT inserted.ID
INTO #process (ID)
FROM (
SELECT TOP (25) *
FROM MyQueueTable WITH (READPAST)
WHERE StateField = 0
ORDER BY ID ASC
) AS mqt;
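The caller can then fetch the full rows for the claimed IDs by joining back to the queue table (a sketch, assuming the same #process temp table populated by the UPDATE above):

```sql
-- Return the full rows for the 25 claimed IDs
SELECT mqt.*
FROM MyQueueTable AS mqt
    INNER JOIN #process AS p
        ON mqt.ID = p.ID;
```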
Rather than using a scalar variable, you could create a temp table that you INSERT the top 25 records into.
-- Find the next queued item that is waiting to be processed
DROP TABLE IF EXISTS #process;
CREATE TABLE #process (ID INT);
INSERT INTO #process
SELECT TOP (25) ID
FROM MyQueueTable WITH (UPDLOCK, READPAST)
WHERE StateField = 0
ORDER BY ID ASC;
-- if we've found one, mark it as being processed
IF EXISTS (SELECT TOP (1) ID FROM #process)
BEGIN
UPDATE MyQueueTable
SET Status = 1
FROM MyQueueTable AS mqt
JOIN #process AS p
ON mqt.ID = p.ID;
END;
And a couple of tips on coding practice, if I may:
be sure to end your statements with semicolons (;) to separate them and improve readability (also it will one day be a requirement)
avoid using SELECT * in deployed code. Instead, specify the columns you want to SELECT

Creating CTE vs selecting criteria in SQL

I am working on a school database project that requires a trigger-based solution for some of the optional restrictions my database has. My database models an online video viewing service where users have access to a large number of videos (a similar principle to that of YouTube). The History table stores up to 100 viewed videos for every user. What my trigger is supposed to do is:
Delete previous entries for the same video and user (or change the date of viewing to the current time)
Insert the new entry (or update an existing one, i.e. possible step 1)
Delete any entries older than the 100th one
Here is the code I wrote:
CREATE TRIGGER [History_Update] ON dbo.History INSTEAD OF INSERT AS
BEGIN
    DECLARE @user_id bigint, @video_id bigint, @history_count smallint

    SELECT @user_id = user_id, @video_id = video_id FROM inserted

    DELETE FROM History WHERE user_id = @user_id AND video_id = @video_id

    INSERT INTO History (user_id, video_id) VALUES (@user_id, @video_id)

    SET @history_count = (SELECT COUNT(*) FROM History WHERE user_id = @user_id AND video_id = @video_id)

    IF (@history_count >= 100)
    BEGIN
        WITH temp AS (SELECT TOP 1 * FROM History WHERE user_id = @user_id AND video_id = @video_id ORDER BY viewtime ASC)
        DELETE FROM temp
    END
END
Now, I have few questions regarding this:
Is it better to use the CTE as written above, or something like this:
SET @viewtime = (SELECT TOP 1 viewtime FROM History WHERE user_id = @user_id AND video_id = @video_id ORDER BY viewtime ASC)
DELETE FROM History WHERE user_id = @user_id AND video_id = @video_id AND viewtime = @viewtime
Also, would it be better to check whether a specific user-video entry exists in History and then update the viewtime attribute? And since I am using an INSTEAD OF trigger, would this violate any rule regarding the use of this kind of trigger? I'm not sure I understood it well; from what I read online, INSTEAD OF triggers must perform the specified action within the body of the trigger.
Thanks!
Given the choice between your expression and the SET, I would choose the CTE. I find using SET with a subquery somewhat icky. After all, the following does the same thing:
SELECT TOP 1 @viewtime = viewtime
FROM History
WHERE user_id = @user_id AND video_id = @video_id
ORDER BY viewtime ASC;
In other words, the set is redundant.
In addition, separating the set from the delete introduces an opportunity for a race condition. Perhaps another query might insert a row or delete the one you are trying to delete.
As for the CTE itself, it is okay. You need the CTE (or subquery) if you are going to delete rows in a particular order.
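As a side note, the trigger as written also assumes a single-row insert. A set-based sketch of the trimming step (the CTE name ranked is mine, and I'm assuming the History columns shown in the question) could use ROW_NUMBER to delete everything past the 100 newest rows per affected user in one statement:

```sql
-- Keep only the 100 most recent history rows per affected user;
-- deleting from the CTE deletes the underlying History rows
WITH ranked AS (
    SELECT ROW_NUMBER() OVER (
               PARTITION BY user_id
               ORDER BY viewtime DESC) AS rn
    FROM History
    WHERE user_id IN (SELECT user_id FROM inserted)
)
DELETE FROM ranked
WHERE rn > 100;
```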

How to prevent updates to a table, with an exception for one situation

I have a table that contains records that can become part of a bill. I can tell which ones are already part of a bill because the table has a BillId column that gets updated by the application code when that happens. I want to prevent updates to any record that has a non-null BillId. I'm thinking that the following should take care of that:
CREATE TRIGGER [Item_Update_AnyBilled]
ON [dbo].[Item]
FOR UPDATE
AS
BEGIN
SET NOCOUNT ON;
DECLARE @AnyBilled BIT;
SELECT TOP(1) @AnyBilled = 1
FROM inserted i
JOIN deleted d ON i.ItemId = d.ItemId
WHERE d.BillId IS NOT NULL;
IF COALESCE(@AnyBilled, 0) = 1 BEGIN
RAISERROR(2870486, 16, 1); -- Cannot update a record that is part of a bill.
ROLLBACK TRANSACTION;
END;
END;
However, there is one more wrinkle. The Item table also has a DATETIME Modified column, and a trigger that updates it.
CREATE TRIGGER [dbo].Item_Update_Modified
ON [dbo].[Item]
AFTER UPDATE
AS
BEGIN
SET NOCOUNT ON;
UPDATE a
SET Modified = getdate()
FROM Item a JOIN inserted i ON i.ItemId = a.ItemId
END
With these triggers in place, adding an Item to a Bill always causes the RAISERROR to fire. Presumably because when the BillId is populated, Item_Update_AnyBilled lets it through because the deleted.BillId is NULL, but the Item_Update_Modified then gets executed, and that secondary change causes Item_Update_AnyBilled to get executed again, and this time deleted.BillId is no longer NULL.
How can I prevent updates to the Item table except in the case where the BillId is being populated or when the only change is to the Modified column?
I'd prefer a solution that didn't require me to compare the inserted and deleted values of every column (or use COLUMNS_UPDATED()) as that would create a maintenance issue (someone would have to remember to update the trigger any time a new column is added to or deleted from the table). I am using SQL Server 2005.
Why not use an INSTEAD OF trigger? It requires a bit more work (namely a repeated UPDATE statement) but any time you can prevent work, instead of letting it happen and then rolling it back, you're going to be better off.
CREATE TRIGGER [dbo].[Item_BeforeUpdate_AnyBilled]
ON [dbo].[Item]
INSTEAD OF UPDATE
AS
BEGIN
SET NOCOUNT ON;
IF EXISTS
(
SELECT 1 FROM inserted i
JOIN deleted AS d ON i.ItemId = d.ItemId
WHERE d.BillId IS NULL -- it was NULL before, may not be NULL now
)
BEGIN
UPDATE src
SET col1 = i.col1, -- ... other columns
    ModifiedDate = CURRENT_TIMESTAMP -- this eliminates need for other trigger
FROM dbo.Item AS src
INNER JOIN inserted AS i
ON i.ItemId = src.ItemId
AND (criteria to determine if at least one column has changed);
END
ELSE
BEGIN
RAISERROR(...);
END
END
GO
This doesn't fit perfectly. The criteria I've left out is left out for a reason: it can be complex to determine if a column value has changed, as it depends on the datatype, whether the column can be NULL, etc. AFAIK the built-in trigger functions can only tell if a certain column was specified, not whether the value actually changed from before.
EDIT considering that you're only concerned about the other columns that are updated due to the after trigger, I think the following INSTEAD OF trigger can replace both of your existing triggers and also deal with multiple rows updated at once (some without meeting your criteria):
CREATE TRIGGER [dbo].[Item_BeforeUpdate_AnyBilled]
ON [dbo].[Item]
INSTEAD OF UPDATE
AS
BEGIN
SET NOCOUNT ON;
UPDATE src SET col1 = i.col1, -- ... other columns
    ModifiedDate = CURRENT_TIMESTAMP
FROM dbo.Item AS src
INNER JOIN inserted AS i
ON src.ItemID = i.ItemID
INNER JOIN deleted AS d
ON i.ItemID = d.ItemID
WHERE d.BillID IS NULL;
IF @@ROWCOUNT = 0
BEGIN
RAISERROR(...);
END
END
GO

SQL - Inserting and Updating Multiple Records at Once

I have a stored procedure that is responsible for inserting or updating multiple records at once. I want to perform this in my stored procedure for the sake of performance.
This stored procedure takes in a comma-delimited list of permit IDs and a status. The permit IDs are stored in a variable called @PermitIDs. The status is stored in a variable called @Status. I have a user-defined function that converts this comma-delimited list of permit IDs into a table. I need to go through each of these IDs and do either an insert or update into a table called PermitStatus.
If a record with the permit ID does not exist, I want to add a record. If it does exist, I want to update the record with the given @Status value. I know how to do this for a single ID, but I do not know how to do it for multiple IDs. For single IDs, I do the following:
-- Determine whether to add or edit the PermitStatus
DECLARE @count int
SET @count = (SELECT COUNT(ID) FROM PermitStatus WHERE [PermitID] = @PermitID)

-- If no records were found, insert the record, otherwise update
IF @count = 0
BEGIN
    INSERT INTO PermitStatus
    (
        [PermitID],
        [UpdatedOn],
        [Status]
    )
    VALUES
    (
        @PermitID,
        GETUTCDATE(),
        1
    )
END
ELSE
    UPDATE PermitStatus
    SET
        [UpdatedOn] = GETUTCDATE(),
        [Status] = @Status
    WHERE
        [PermitID] = @PermitID
How do I loop through the records in the Table returned by my user-defined function to dynamically insert or update the records as needed?
create a split function, and use it like:
SELECT
    *
FROM YourTable y
    INNER JOIN dbo.splitFunction(@Parameter) s ON y.ID = s.Value
I prefer the number table approach
For this method to work, you need to do this one time table setup:
SELECT TOP 10000 IDENTITY(int,1,1) AS Number
INTO Numbers
FROM sys.objects s1
CROSS JOIN sys.objects s2
ALTER TABLE Numbers ADD CONSTRAINT PK_Numbers PRIMARY KEY CLUSTERED (Number)
Once the Numbers table is set up, create this function:
CREATE FUNCTION [dbo].[FN_ListToTableAll]
(
     @SplitOn char(1)     --REQUIRED, the character to split the @List string on
    ,@List varchar(8000)  --REQUIRED, the list to split apart
)
RETURNS TABLE
AS
RETURN
(
    ----------------
    --SINGLE QUERY-- --this WILL return empty rows
    ----------------
    SELECT
        ROW_NUMBER() OVER (ORDER BY number) AS RowNumber
        ,LTRIM(RTRIM(SUBSTRING(ListValue, number + 1, CHARINDEX(@SplitOn, ListValue, number + 1) - number - 1))) AS ListValue
    FROM (
        SELECT @SplitOn + @List + @SplitOn AS ListValue
    ) AS InnerQuery
        INNER JOIN Numbers n ON n.Number < LEN(InnerQuery.ListValue)
    WHERE SUBSTRING(ListValue, number, 1) = @SplitOn
);
GO
GO
You can now easily split a CSV string into a table and join on it:
select * from dbo.FN_ListToTableAll(',','1,2,3,,,4,5,6777,,,')
OUTPUT:
RowNumber ListValue
----------- ----------
1 1
2 2
3 3
4
5
6 4
7 5
8 6777
9
10
11
(11 row(s) affected)
To make what you need work, do the following:
--this would be the existing table
DECLARE @OldData table (RowID int, RowStatus char(1))
INSERT INTO @OldData VALUES (10,'z')
INSERT INTO @OldData VALUES (20,'z')
INSERT INTO @OldData VALUES (30,'z')
INSERT INTO @OldData VALUES (70,'z')
INSERT INTO @OldData VALUES (80,'z')
INSERT INTO @OldData VALUES (90,'z')

--these would be the stored procedure input parameters
DECLARE @IDList varchar(500)
       ,@StatusList varchar(500)
SELECT @IDList = '10,20,30,40,50,60'
      ,@StatusList = 'A,B,C,D,E,F'

--stored procedure local variable
DECLARE @InputList table (RowID int, RowStatus char(1))

--convert input parameters into a table
INSERT INTO @InputList
        (RowID, RowStatus)
    SELECT
        i.ListValue, s.ListValue
    FROM dbo.FN_ListToTableAll(',', @IDList) i
        INNER JOIN dbo.FN_ListToTableAll(',', @StatusList) s ON i.RowNumber = s.RowNumber

--update all old existing rows
UPDATE o
SET RowStatus = i.RowStatus
FROM @OldData o WITH (UPDLOCK, HOLDLOCK) --to avoid race condition when there is high concurrency, as per @Emtucifor
    INNER JOIN @InputList i ON o.RowID = i.RowID

--insert only the new rows
INSERT INTO @OldData
        (RowID, RowStatus)
    SELECT
        i.RowID, i.RowStatus
    FROM @InputList i
        LEFT OUTER JOIN @OldData o ON i.RowID = o.RowID
    WHERE o.RowID IS NULL

--display the old table
SELECT * FROM @OldData ORDER BY RowID
OUTPUT:
RowID RowStatus
----------- ---------
10 A
20 B
30 C
40 D
50 E
60 F
70 z
80 z
90 z
(9 row(s) affected)
EDIT: thanks to @Emtucifor for the tip about the race condition, I have included the locking hints in my answer, to prevent race condition problems when there is high concurrency.
There are various methods to accomplish the parts you are asking about.
Passing Values
There are dozens of ways to do this. Here are a few ideas to get you started:
Pass in a string of identifiers and parse it into a table, then join.
SQL 2008: Join to a table-valued parameter
Expect data to exist in a predefined temp table and join to it
Use a session-keyed permanent table
Put the code in a trigger and join to the INSERTED and DELETED tables in it.
Erland Sommarskog provides a wonderful comprehensive discussion of lists in sql server. In my opinion, the table-valued parameter in SQL 2008 is the most elegant solution for this.
Upsert/Merge
Perform a separate UPDATE and INSERT (two queries, one for each set, not row-by-row).
SQL 2008: MERGE.
An Important Gotcha
However, one thing that no one else has mentioned is that almost all upsert code, including SQL 2008 MERGE, suffers from race condition problems when there is high concurrency. Unless you use HOLDLOCK and other locking hints depending on what's being done, you will eventually run into conflicts. So you either need to lock, or respond to errors appropriately (some systems with huge transactions per second have used the error-response method successfully, instead of using locks).
One thing to realize is that different combinations of lock hints implicitly change the transaction isolation level, which affects what type of locks are acquired. This changes everything: which other locks are granted (such as a simple read), the timing of when a lock is escalated to update from update intent, and so on.
I strongly encourage you to read more detail on these race condition problems. You need to get this right.
Conditional Insert/Update Race Condition
“UPSERT” Race Condition With MERGE
Example Code
CREATE PROCEDURE dbo.PermitStatusUpdate
    @PermitIDs varchar(8000), -- or (max)
    @Status int
AS
SET NOCOUNT, XACT_ABORT ON -- see note below

BEGIN TRAN

DECLARE @Permits TABLE (
    PermitID int NOT NULL PRIMARY KEY CLUSTERED
)

INSERT @Permits
SELECT Value FROM dbo.Split(@PermitIDs) -- split function of your choice

UPDATE S
SET
    UpdatedOn = GETUTCDATE(),
    Status = @Status
FROM
    PermitStatus S WITH (UPDLOCK, HOLDLOCK)
    INNER JOIN @Permits P ON S.PermitID = P.PermitID

INSERT PermitStatus (
    PermitID,
    UpdatedOn,
    Status
)
SELECT
    P.PermitID,
    GETUTCDATE(),
    @Status
FROM @Permits P
WHERE NOT EXISTS (
    SELECT 1
    FROM PermitStatus S
    WHERE P.PermitID = S.PermitID
)

COMMIT TRAN

RETURN @@ERROR;
Note: XACT_ABORT helps guarantee the explicit transaction is closed following a timeout or unexpected error.
To confirm that this handles the locking problem, open several query windows and execute an identical batch like so:
WAITFOR TIME '11:00:00' -- use a time in the near future
EXEC dbo.PermitStatusUpdate @PermitIDs = '123,124,125,126', @Status = 1
All of these different sessions will execute the stored procedure in nearly the same instant. Check each session for errors. If none exist, try the same test a few times more (since it's possible to not always have the race condition occur, especially with MERGE).
The writeups at the links I gave above give even more detail than I did here, and also describe what to do for the SQL 2008 MERGE statement as well. Please read those thoroughly to truly understand the issue.
Briefly, with MERGE, no explicit transaction is needed, but you do need to use SET XACT_ABORT ON and use a locking hint:
SET NOCOUNT, XACT_ABORT ON;
MERGE dbo.Table WITH (HOLDLOCK) AS TableAlias
...
This will prevent concurrency race conditions causing errors.
I also recommend that you do error handling after each data modification statement.
If you're using SQL Server 2008, you can use table valued parameters - you pass in a table of records into a stored procedure and then you can do a MERGE.
Passing in a table valued parameter would remove the need to parse CSV strings.
Edit:
ErikE has raised the point about race conditions, please refer to his answer and linked articles.
If you have SQL Server 2008, you can use MERGE. Here's an article describing this.
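A minimal sketch of that approach (assuming the PermitStatus table from the question and a @Permits table variable already populated from the ID list, as in the earlier example), including the HOLDLOCK hint discussed above:

```sql
-- Upsert in a single atomic statement; HOLDLOCK guards
-- against the upsert race condition discussed above
MERGE dbo.PermitStatus WITH (HOLDLOCK) AS S
USING @Permits AS P
    ON S.PermitID = P.PermitID
WHEN MATCHED THEN
    UPDATE SET UpdatedOn = GETUTCDATE(), Status = @Status
WHEN NOT MATCHED THEN
    INSERT (PermitID, UpdatedOn, Status)
    VALUES (P.PermitID, GETUTCDATE(), @Status);
```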
You should be able to do your insert and your update as two set based queries.
The code below was based on a data load procedure that I wrote a while ago that took data from a staging table and inserted or updated it into the main table.
I've tried to make it match your example, but you may need to tweak this (and create a table valued UDF to parse your CSV into a table of ids).
-- Update where the join on PermitStatus matches
Update
    status
Set
    [UpdatedOn] = GETUTCDATE(),
    [Status] = staging.Status
From
    PermitStatus status
Join
    StagingTable staging
On
    staging.PermitId = status.PermitId

-- Insert the new records, based on the Where Not Exists
Insert
    PermitStatus (UpdatedOn, Status, PermitId)
Select
    GETUTCDATE(), staging.Status, staging.PermitId
From
    StagingTable staging
Where Not Exists
(
    Select 1 From PermitStatus status
    Where status.PermitId = staging.PermitId
)
Essentially you have an upsert stored procedure (e.g. UpsertSinglePermit, like the code you have given above) for dealing with one row.
So the steps I see are to create a new stored procedure (UpsertNPermits) which does:
a) Parse the input string into n record entries (each record contains permit id and status)
b) For each entry in the above, invoke UpsertSinglePermit

Efficient transaction, record locking

I've got a stored procedure which selects 1 record back. The stored procedure could be called from several different applications on different PCs. The idea is that the stored procedure brings back the next record that needs to be processed, and if two applications call the stored proc at the same time, the same record should not be brought back. My query is below; I'm trying to write the query as efficiently as possible (SQL 2008). Can it be done more efficiently than this?
CREATE PROCEDURE GetNextUnprocessedRecord
AS
BEGIN
SET NOCOUNT ON;
--ID of record we want to select back
DECLARE @iID BIGINT
-- Find the next processable record, and mark it as dispatched
-- Must be done in a transaction to ensure no other query can get
-- this record between the read and update
BEGIN TRAN
SELECT TOP 1
@iID = [ID]
FROM
--Don't read locked records, only lock the specific record
[MyRecords] WITH (READPAST, ROWLOCK)
WHERE
[Dispatched] is null
ORDER BY
[Received]
--Mark record as picked up for processing
UPDATE
[MyRecords]
SET
[Dispatched] = GETDATE()
WHERE
[ID] = @iID
COMMIT TRAN
--Select back the specific record
SELECT
[ID],
[Data]
FROM
[MyRecords] WITH (NOLOCK, READPAST)
WHERE
[ID] = @iID
END
Using the READPAST locking hint is correct and your SQL looks OK.
I'd add XLOCK too, though:
...
[MyRecords] WITH (READPAST, ROWLOCK, XLOCK)
...
This means you get the ID, and exclusively lock that row while you carry on and update it.
Edit: add an index on the Dispatched and Received columns to make it quicker. If [ID] (I assume it's the PK) is not clustered, INCLUDE [ID]. And make it a filtered index too (e.g. WHERE Dispatched IS NULL), since you're on SQL 2008.
You could also use this construct which does it all in one go without XLOCK or HOLDLOCK
UPDATE
    MyRecords
SET
    --record the row ID
    @id = [ID],
    --flag doing stuff
    [Dispatched] = GETDATE()
WHERE
    [ID] = (SELECT TOP 1 [ID] FROM MyRecords WITH (ROWLOCK, READPAST) WHERE Dispatched IS NULL ORDER BY Received)
UPDATE, assign, set in one
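Alternatively (this is my sketch, not part of the original answer), an OUTPUT clause can return the claimed row directly, which removes both the variable assignment and the final SELECT:

```sql
-- Claim the next unprocessed record and return it in one statement
UPDATE MyRecords
SET [Dispatched] = GETDATE()
OUTPUT inserted.[ID], inserted.[Data]
WHERE [ID] = (SELECT TOP 1 [ID]
              FROM MyRecords WITH (ROWLOCK, READPAST)
              WHERE [Dispatched] IS NULL
              ORDER BY [Received]);
```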
You can assign each picker process a unique id, and add columns pickerproc and pickstate to your records. Then
UPDATE MyRecords
SET pickerproc = myproc,
pickstate = 'I' -- for 'I'n process
WHERE Id = (SELECT MAX(Id) FROM MyRecords WHERE pickstate = 'A') -- 'A'vailable
That gets you your record in one atomic step, and you can do the rest of your processing at your leisure. Then you can set pickstate to 'C'omplete', 'E'rror, or whatever when it's resolved.
I think Mitch is referring to another good technique where you create a message-queue table and insert the Ids there. There are several SO threads - search for 'message queue table'.
You can keep MyRecords on a "MEMORY" table for faster processing.