SQL Server SELECT/UPDATE Stored Procedure Weirdness

I have a table I'm using as a work queue. Essentially, it consists of a primary key, a piece of data, and a status flag (processed/unprocessed). I have multiple processes trying to grab the next unprocessed row, so I need to make sure that they observe proper lock and update semantics to avoid race condition nastiness. To that end, I've defined a stored procedure they can call:
CREATE PROCEDURE get_from_q
AS
DECLARE @queueid INT;
BEGIN TRANSACTION TRAN1;
SELECT TOP 1
@queueid = id
FROM
MSG_Q WITH (updlock, readpast)
WHERE
MSG_Q.status=0;
SELECT TOP 1 *
FROM
MSG_Q
WHERE
MSG_Q.id=@queueid;
UPDATE MSG_Q
SET status=1
WHERE id=@queueid;
COMMIT TRANSACTION TRAN1;
Note the use of "WITH (updlock, readpast)" to make sure that I lock the target row and ignore rows that are similarly locked already.
Now, the procedure works as listed above, which is great. While I was putting this together, however, I found that if the second SELECT and the UPDATE are reversed in order (i.e. UPDATE first then SELECT), I got no data back at all. And no, it didn't matter whether the second SELECT was before or after the final COMMIT.
My question is thus why the order of the second SELECT and UPDATE makes a difference. I suspect that there is something subtle going on there that I don't understand, and I'm worried that it's going to bite me later on.
Any hints?

By default, transactions are READ COMMITTED:
"Specifies that shared locks are held while the data is being read to avoid dirty reads, but the data can be changed before the end of the transaction, resulting in nonrepeatable reads or phantom data. This option is the SQL Server default."
http://msdn.microsoft.com/en-us/library/aa259216.aspx
I think you are getting nothing in the SELECT because the record is still marked as dirty. You'd have to change the transaction isolation level, OR do what I do: run the UPDATE first and then read the record back. To do this you have to flag the record with a unique value (I use GETDATE() for batches, but a GUID is probably what you want to use).
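A minimal sketch of that update-first approach, using the asker's MSG_Q table and assuming a hypothetical batch_token column added to carry the unique value:
DECLARE @batch UNIQUEIDENTIFIER
SET @batch = NEWID()
-- Claim one unprocessed row by stamping it with our token first...
UPDATE TOP (1) MSG_Q
SET status = 1, batch_token = @batch -- batch_token is a hypothetical extra column
WHERE status = 0
-- ...then read back exactly the row we claimed.
SELECT * FROM MSG_Q WHERE batch_token = @batch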

Although not directly answering your question here, rather than reinventing the wheel and making life difficult for yourself, unless you enjoy it of course ;-), may I suggest that you look at using SQL Server Service Broker.
It provides an existing framework for using queues etc.
To find out more, visit:
Service Broker Link
Now back to the question: I am not able to replicate your problem. As you will see if you execute the code below, data is returned regardless of the order of the SELECT/UPDATE statements.
So your example above then.
create table #MSG_Q
(id int identity(1,1) primary key,status int)
insert into #MSG_Q select 0
DECLARE @queueid INT
BEGIN TRANSACTION TRAN1
SELECT TOP 1 @queueid = id FROM #MSG_Q WITH (updlock, readpast) WHERE #MSG_Q.status=0
UPDATE #MSG_Q SET status=1 WHERE id=@queueid
SELECT TOP 1 * FROM #MSG_Q WHERE #MSG_Q.id=@queueid
COMMIT TRANSACTION TRAN1
select * from #MSG_Q
drop table #MSG_Q
Returns the Results (1,1) and (1,1)
Now swapping the statement order.
create table #MSG_Q
(id int identity(1,1) primary key,status int)
insert into #MSG_Q select 0
DECLARE @queueid INT
BEGIN TRANSACTION TRAN1
SELECT TOP 1 @queueid = id FROM #MSG_Q WITH (updlock, readpast) WHERE #MSG_Q.status=0
SELECT TOP 1 * FROM #MSG_Q WHERE #MSG_Q.id=@queueid
UPDATE #MSG_Q SET status=1 WHERE id=@queueid
COMMIT TRANSACTION TRAN1
select * from #MSG_Q
drop table #MSG_Q
Results in: (1,0), (1,1) as expected.
Perhaps you could qualify your issue further?

More experimentation leads me to conclude that I was chasing a red herring, brought about by the tools I was using to exec my stored procedure. I was initially using DBVisualizer (free edition) and Netbeans, and they both appear to be confused by something about the format of the results. DBVisualizer suggests that I'm getting multiple result sets back, and that the free edition doesn't handle that.
Since then, I grabbed the free MS SQL Server Management Studio Express and things work perfectly. For those interested, the URL to SMSE is here:
MS SQL Server SMSE
Don't forget to install the MSXML6 service pack, too:
MSXML Service Pack 1
So, totally my bad in this case. :-(
Major thanks and kudos to you guys for your answers though. You helped me confirm that what I was doing should work, which led me to the change I had to make to actually "solve" the issue. Thanks ever so much!

One more point: including a "SET NOCOUNT ON" in the stored procedure fixed things for all ODBC clients. Apparently the row count from the first SELECT was confusing the ODBC clients, and telling SQL Server not to return that value makes things work perfectly...
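For reference, here is the original procedure with that fix applied:
CREATE PROCEDURE get_from_q
AS
SET NOCOUNT ON; -- suppress the "N rows affected" counts that confused the ODBC clients
DECLARE @queueid INT;
BEGIN TRANSACTION TRAN1;
SELECT TOP 1 @queueid = id
FROM MSG_Q WITH (updlock, readpast)
WHERE MSG_Q.status = 0;
SELECT TOP 1 * FROM MSG_Q WHERE MSG_Q.id = @queueid;
UPDATE MSG_Q SET status = 1 WHERE id = @queueid;
COMMIT TRANSACTION TRAN1;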

Related

ROWLOCK in Stored Procedure with Composite Key in SQL Server

EDITED: I have a table with a composite key which is being used by multiple Windows services deployed on multiple servers.
Columns:
UserId (int) [CompositeKey],
CheckinTimestamp (bigint) [CompositeKey],
Status (tinyint)
There will be continuous insertion into this table. I want my Windows service to select the top 10000 rows and do some processing while locking only those 10000 rows. I am using ROWLOCK for this in the stored procedure below:
ALTER PROCEDURE LockMonitoringSession
AS
BEGIN
BEGIN TRANSACTION
SELECT TOP 10000 * INTO #TempMonitoringSession FROM dbo.MonitoringSession WITH (ROWLOCK) WHERE [Status] = 0 ORDER BY UserId
DECLARE @UserId INT
DECLARE @CheckinTimestamp BIGINT
DECLARE SessionCursor CURSOR FOR SELECT UserId, CheckinTimestamp FROM #TempMonitoringSession
OPEN SessionCursor
FETCH NEXT FROM SessionCursor INTO @UserId, @CheckinTimestamp
WHILE @@FETCH_STATUS = 0
BEGIN
UPDATE dbo.MonitoringSession SET [Status] = 1 WHERE UserId = @UserId AND CheckinTimestamp = @CheckinTimestamp
FETCH NEXT FROM SessionCursor INTO @UserId, @CheckinTimestamp
END
END
CLOSE SessionCursor
DEALLOCATE SessionCursor
SELECT * FROM #TempMonitoringSession
DROP TABLE #TempMonitoringSession
COMMIT TRANSACTION
END
But by doing so, dbo.MonitoringSession remains locked until the stored procedure ends. I am not sure what I am doing wrong here.
The only purpose of this stored procedure is to select and update 10000 recent rows, without any primary key, while ensuring that the whole table is not locked, because multiple Windows services are accessing this table.
Thanks in advance for any help.
(not an answer but too long for comment)
The purpose description should explain why, and what for, you are updating the whole table. Your SP updates rows with Status = 0 to set Status = 1, so when one of your services decides to run it, all of those rows become non-relevant. I mean, logically, the event that causes the status change has already occurred; you just need some time to physically record it in the database. So why do you want other services to read non-relevant rows? OK, probably you need to read the rows that are still available to read (not yet changed), but again that isn't clear, because you are updating the whole table.
You may use the READPAST hint to skip locked rows, and you need row locks for that.
OK, but even when processing only the top N rows, updating those N rows with one statement would be much faster than looping through them one at a time; you are doing the same job, just manually. See the sketch after the link below.
Check out this example of combining UPDLOCK + READPAST to process the same queue with parallel processes: https://www.mssqltips.com/sqlservertip/1257/processing-data-queues-in-sql-server-with-readpast-and-updlock/
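As a sketch (untested against your schema, column names taken from the question), the cursor and temp table could collapse into a single claim-and-return statement:
ALTER PROCEDURE LockMonitoringSession
AS
BEGIN
SET NOCOUNT ON;
-- TOP + ORDER BY in the CTE keeps the "10000 oldest by UserId" semantics;
-- UPDLOCK claims the rows, READPAST skips rows other services already hold.
;WITH NextBatch AS
(
SELECT TOP (10000) UserId, CheckinTimestamp, [Status]
FROM dbo.MonitoringSession WITH (UPDLOCK, READPAST, ROWLOCK)
WHERE [Status] = 0
ORDER BY UserId
)
UPDATE NextBatch
SET [Status] = 1
OUTPUT inserted.UserId, inserted.CheckinTimestamp; -- returns the claimed rows to the caller
END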
Small hint: a CURSOR declared STATIC, READ_ONLY, FORWARD_ONLY would do the same thing as storing to a temp table. Review the STATIC option:
https://msdn.microsoft.com/en-us/library/ms180169.aspx
Another suggestion is to think about RCSI (read committed snapshot isolation). This will certainly avoid other services being blocked, but it is a database-level option, so you'll have to test all your functionality. Most of it will work the same as before, but some scenarios need testing (concurrent transactions won't be blocked in situations where they were blocked before).
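Enabling it is a single statement, though it needs exclusive access to the database (the database name below is a placeholder):
-- Readers then see the last committed row version instead of blocking on writers' locks.
ALTER DATABASE YourDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;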
Not clear to me:
what is the percentage of 10000 out of the total number of rows?
is there a clustered index, or is this a heap?
what is actual execution plan for select and update?
what are concurrent transactions: inserts or selects?
By the way, I discovered a similar question:
why the entire table is locked while "with (rowlock)" is used in an update statement

How to refresh a full text index from within a SQL Server transaction

I am having an issue similar to the one described here:
How do I force a refresh of a fulltext index within a transaction in mssql?
However, the recommended solution posted there does not work. I tried posting a follow up to the same thread, but it was deleted by a moderator. So I am starting a new question.
Similar to the original query I am also attempting to implement a unit test within a transaction. I would like to insert data into a full text indexed column, query the data to check its validity, and then roll back the insert afterward.
The problem is that the index does not seem to update until after I have committed the transaction. I have tried the "WAITFOR DELAY" approach, but no matter how long I wait, the index does not update until after the transaction is committed.
Here's a sample of what I'm trying to do:
BEGIN TRAN
INSERT INTO AMMS.Content
(
ContentTypeId,
Name,
ImportDate,
IsDeleted,
LastModifiedBy,
LastModifiedAt,
DisplayInPortal,
StatusId
)
VALUES
(
4,
'my unit test content',
GETUTCDATE(),
0,
1,
GETUTCDATE(),
1,
2
)
declare @count int
set @count = 0
while @count < 10
begin
SELECT FULLTEXTCATALOGPROPERTY('PRIMARY', 'PopulateStatus') AS Status
select * from amms.Content where contains(Name, 'unit')
waitfor delay '00:00:01'
set @count = @count + 1
end
The populate status stays at 9 and the select returns no rows as long as the transaction is pending. Once I commit, the populate status returns 0 and the select returns a single row, as expected.
Am I missing something? Is there another way to accomplish this? Is this behavior different under different versions of SQL Server? (I'm currently testing using 2008)
I think you're out of luck. Trying to start a manual refresh inside a transaction...
begin tran
alter fulltext index on dbo.FTS_Table start full population
...gives me this message:
Msg 574, Level 16, State 0, Line 1 ALTER FULLTEXT INDEX statement
cannot be used inside a user transaction.
What alternatives you have probably depends on how you manage your unit tests; perhaps you can just DELETE the data again, or DROP the table completely if it's only used for this test.
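For example, instead of rolling back, the test could commit and then clean up the row it knows it inserted (name value from the sample insert above):
-- Cleanup after a committed test insert.
DELETE FROM AMMS.Content WHERE Name = 'my unit test content';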
Essentially, SQL Server won't let you carry out certain actions inside a transaction (or inside a snapshot).
The Indexing service in particular quite deliberately runs externally to user connections due to the risks of contention and the huge overhead text indexing imposes.
If you step back and consider the implications, for other database connections and for the current one, of a transaction rewriting the full-text indexes, I'm sure you'll appreciate that in this case Microsoft have a point.

Why does SQL Server stall after some of the rows have been returned during a select statement?

I have a simple procedure that selects 1000 rows from a table. For the sake of clarity, here's what the stored procedure looks like:
alter procedure dbo.GetDomainForIndexing
@Amount int=1,
@LastID bigint,
@LastFetchDate datetime
as
begin
select top (@Amount) *
from DomainName with(readuncommitted)
where LastUpdated > @LastFetchDate
and ID > @LastID
and ContainsAdultWords is not null
order by ID
end
I've been having issues where it would run this particular procedure fine a bunch of times, the only difference being that a different @LastID value was being passed in each time. As soon as I get to a specific ID though, the procedure will return the first 880 rows almost instantly (this is happening in Management Studio) and then sit there and literally stall for the next 6 minutes before returning the remaining 120 rows.
What on earth could cause behaviour like this? There are no transactions associated with the connection and there are no connection pool issues. The (readuncommitted) bit does not affect the issue. The issue occurs both from within my application and when I copy the command text into SQL Management Studio for testing; indeed it is there that I discovered this weird stalling behaviour. Initially I was just trying to work out why this procedure would work fine a bunch of times and then suddenly start stalling for no apparent reason.
Any ideas?
UPDATE
The issue also occurs (stalling after 880 rows have been returned) when asking for 883 rows, but not when asking for 882 rows.
UPDATE 2
Selecting from sys.sysprocesses and sys.dm_exec_requests indicates a lastwaittype of PAGEIOLATCH_SH. What should I do?
Quite likely "parameter sniffin"g and an suboptimal cached plan for the offending #LastID value
Try this:
DECLARE @ILastID bigint
SET @ILastID = @LastID
select top (@Amount) *
from DomainName with(readuncommitted)
where LastUpdated > @LastFetchDate and ID > @ILastID and ContainsAdultWords is not null
order by ID
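If you'd rather not copy the parameter into a local variable, OPTION (RECOMPILE) is another way to get a plan suited to the actual values, at the cost of a compile on every execution:
select top (@Amount) *
from DomainName with(readuncommitted)
where LastUpdated > @LastFetchDate and ID > @LastID and ContainsAdultWords is not null
order by ID
option (recompile) -- fresh plan each run, so no stale sniffed parameter values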
Another option:
What does sysprocesses show as the LastWaittype? ASYNC_NETWORK_IO?
If so, then the client can't deal with the SQL Server output
Have you tried rebuilding the indexes on DomainName?
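If not, a minimal maintenance sketch (assuming the default dbo schema; SQL Server 2005 and later):
-- Rebuild every index on the table; PAGEIOLATCH_SH waits point at physical
-- reads, which fragmentation can make dramatically worse.
ALTER INDEX ALL ON dbo.DomainName REBUILD;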

SQL Server "AFTER INSERT" trigger doesn't see the just-inserted row

Consider this trigger:
ALTER TRIGGER myTrigger
ON someTable
AFTER INSERT
AS BEGIN
DELETE FROM someTable
WHERE ISNUMERIC(someField) = 1
END
I've got a table, someTable, and I'm trying to prevent people from inserting bad records. For the purpose of this question, a bad record has a field "someField" that is all numeric.
Of course, the right way to do this is NOT with a trigger, but I don't control the source code... just the SQL database. So I can't really prevent the insertion of the bad row, but I can delete it right away, which is good enough for my needs.
The trigger works, with one problem... when it fires, it never seems to delete the just-inserted bad record... it deletes any OLD bad records, but it doesn't delete the just-inserted bad record. So there's often one bad record floating around that isn't deleted until somebody else comes along and does another INSERT.
Is this a problem in my understanding of triggers? Are newly-inserted rows not yet committed while the trigger is running?
Triggers cannot modify the changed data (Inserted or Deleted); otherwise you could get infinite recursion as the changes invoked the trigger again. One option would be for the trigger to roll back the transaction.
Edit: The reason for this is that the SQL standard says inserted and deleted rows cannot be modified by the trigger. The underlying reason is that the modifications could cause infinite recursion. In the general case, this evaluation could involve multiple triggers in a mutually recursive cascade. Having a system intelligently decide whether to allow such updates is computationally intractable, essentially a variation on the halting problem.
The accepted solution to this is not to permit the trigger to alter the changing data, although it can roll back the transaction.
create table Foo (
FooID int
,SomeField varchar (10)
)
go
create trigger FooInsert
on Foo after insert as
begin
delete inserted
where isnumeric (SomeField) = 1
end
go
Msg 286, Level 16, State 1, Procedure FooInsert, Line 5
The logical tables INSERTED and DELETED cannot be updated.
Something like this will roll back the transaction.
create table Foo (
FooID int
,SomeField varchar (10)
)
go
create trigger FooInsert
on Foo for insert as
if exists (
select 1
from inserted
where isnumeric (SomeField) = 1) begin
rollback transaction
end
go
insert Foo values (1, '1')
Msg 3609, Level 16, State 1, Line 1
The transaction ended in the trigger. The batch has been aborted.
You can reverse the logic. Instead of deleting an invalid row after it has been inserted, write an INSTEAD OF trigger to insert only if you verify the row is valid.
CREATE TRIGGER mytrigger ON sometable
INSTEAD OF INSERT
AS BEGIN
DECLARE @isnum TINYINT;
SELECT @isnum = ISNUMERIC(somefield) FROM inserted;
IF (@isnum = 1)
INSERT INTO sometable SELECT * FROM inserted;
ELSE
RAISERROR('somefield must be numeric', 16, 1)
WITH SETERROR;
END
If your application doesn't want to handle errors (as Joel says is the case in his app), then don't RAISERROR. Just make the trigger silently not do an insert that isn't valid.
I ran this on SQL Server Express 2005 and it works. Note that INSTEAD OF triggers do not cause recursion if you insert into the same table for which the trigger is defined.
Here's my modified version of Bill's code:
CREATE TRIGGER mytrigger ON sometable
INSTEAD OF INSERT
AS BEGIN
INSERT INTO sometable SELECT * FROM inserted WHERE ISNUMERIC(somefield) = 1;
INSERT INTO sometableRejects SELECT * FROM inserted WHERE ISNUMERIC(somefield) = 0;
END
This lets the insert always succeed, and any bogus records get thrown into your sometableRejects where you can handle them later. It's important to make your rejects table use nvarchar fields for everything - not ints, tinyints, etc - because if they're getting rejected, it's because the data isn't what you expected it to be.
This also solves the multiple-record insert problem, which will cause Bill's trigger to fail. If you insert ten records simultaneously (like if you do a select-insert-into) and just one of them is bogus, Bill's trigger would have flagged all of them as bad. This handles any number of good and bad records.
I used this trick on a data warehousing project where the inserting application had no idea whether the business logic was any good, and we did the business logic in triggers instead. Truly nasty for performance, but if you can't let the insert fail, it does work.
I think you can use a CHECK constraint; this is exactly what it was invented for.
ALTER TABLE someTable
ADD CONSTRAINT someField_check CHECK (ISNUMERIC(someField) = 0); -- rejects the all-numeric "bad" rows
My previous answer (also right, but maybe a bit of an overkill):
I think the right way is to use an INSTEAD OF trigger to prevent the wrong data from being inserted (rather than deleting it after the fact).
UPDATE: DELETE from a trigger works on both MSSql 7 and MSSql 2008.
I'm no relational guru, nor a SQL standards wonk. However - contrary to the accepted answer - MSSQL deals just fine with both recursive and nested trigger evaluation. I don't know about other RDBMSs.
The relevant options are 'recursive triggers' and 'nested triggers'. Nested triggers are limited to 32 levels, and default to 1. Recursive triggers are off by default, and there's no talk of a limit - but frankly, I've never turned them on, so I don't know what happens with the inevitable stack overflow. I suspect MSSQL would just kill your spid (or there is a recursive limit).
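For reference, a sketch of where those two switches live (the database name is a placeholder):
-- 'nested triggers' is a server-wide sp_configure option (on by default).
EXEC sp_configure 'nested triggers', 1;
RECONFIGURE;
-- Recursive triggers are a per-database option, off by default.
ALTER DATABASE YourDb SET RECURSIVE_TRIGGERS ON;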
Of course, that just shows that the accepted answer has the wrong reason, not that it's incorrect. However, prior to INSTEAD OF triggers, I recall writing ON INSERT triggers that would merrily UPDATE the just inserted rows. This all worked fine, and as expected.
A quick test of DELETEing the just inserted row also works:
CREATE TABLE Test ( Id int IDENTITY(1,1), Column1 varchar(10) )
GO
CREATE TRIGGER trTest ON Test
FOR INSERT
AS
SET NOCOUNT ON
DELETE FROM Test WHERE Column1 = 'ABCDEF'
GO
INSERT INTO Test (Column1) VALUES ('ABCDEF')
--SCOPE_IDENTITY() should be the same, but doesn't exist in SQL 7
PRINT @@IDENTITY --Will print 1. Run it again, and it'll print 2, 3, etc.
GO
SELECT * FROM Test --No rows
GO
You have something else going on here.
From the CREATE TRIGGER documentation:
deleted and inserted are logical (conceptual) tables. They are structurally similar to the table on which the trigger is defined, that is, the table on which the user action is attempted, and hold the old values or new values of the rows that may be changed by the user action. For example, to retrieve all values in the deleted table, use: SELECT * FROM deleted
So that at least gives you a way of seeing the new data.
I can't see anything in the docs which specifies that you won't see the inserted data when querying the normal table though...
I found this reference:
create trigger myTrigger
on SomeTable
for insert
as
if (select count(*)
from inserted
where IsNumeric(inserted.SomeField) = 1) <> 0
/* Cancel the insert and print a message. */
begin
rollback transaction
print 'You can''t do that!'
end
/* Otherwise, allow it. */
else
print 'Added successfully.'
I haven't tested it, but logically it looks like it should do what you're after: rather than deleting the inserted data, it prevents the insertion completely, thus not requiring you to undo the insert. It should perform better and should therefore ultimately handle a higher load with more ease.
Edit: Of course, there is the potential that if the insert happened inside an otherwise valid transaction, the whole transaction could be rolled back, so you would need to take that scenario into account and determine whether the insertion of an invalid data row would constitute a completely invalid transaction...
Is it possible the INSERT is valid, but that a separate UPDATE is done afterwards that is invalid but wouldn't fire the trigger?
The techniques outlined above describe your options pretty well. But what are the users seeing? I can't imagine how a basic conflict like this between you and whoever is responsible for the software can't end up in confusion and antagonism with the users.
I'd do everything I could to find some other way out of the impasse - because other people could easily see any change you make as escalating the problem.
EDIT:
I'll score my first "undelete" and admit to posting the above when this question first appeared. I of course chickened out when I saw that it was from JOEL SPOLSKY. But it looks like it landed somewhere near. Don't need votes, but I'll put it on the record.
IME, triggers are so seldom the right answer for anything other than fine-grained integrity constraints outside the realm of business rules.
MS-SQL has a setting to prevent recursive trigger firing. This is configured via the sp_configure stored procedure, where you can turn recursive or nested triggers on or off.
In this case, if you turn off recursive triggers, it would be possible to join to the record from the inserted table via the primary key and make changes to the record.
In the specific case in the question, it is not really a problem, because the result is to delete the record, which won't refire this particular trigger, but in general that could be a valid approach. We implemented optimistic concurrency this way.
The code for your trigger that could be used in this way would be:
ALTER TRIGGER myTrigger
ON someTable
AFTER INSERT
AS BEGIN
DELETE someTable
FROM someTable
INNER JOIN inserted ON inserted.primarykey = someTable.primarykey
WHERE ISNUMERIC(inserted.someField) = 1
END
Your "trigger" is doing something that a "trigger" is not suppose to be doing. You can simple have your Sql Server Agent run
DELETE FROM someTable
WHERE ISNUMERIC(someField) = 1
every second or so. While you're at it, how about writing a nice little SP to stop the programming folk from inserting errors into your table? One good thing about SPs is that the parameters are type-safe.
I stumbled across this question looking for details on the sequence of events during an insert statement & trigger. I ended up coding some brief tests to confirm how SQL 2016 (EXPRESS) behaves - and thought it would be appropriate to share as it might help others searching for similar information.
Based on my test, it is possible to select data from the "inserted" table and use that to update the inserted data itself. And, of interest to me, the inserted data is not visible to other queries until the trigger completes, at which point the final result is visible (at least as best I could test). I didn't test this for recursive triggers, etc. (I would expect the nested trigger would have full visibility of the inserted data in the table, but that's just a guess).
For example - assuming we have the table "table" with an integer field "field" and primary key field "pk" and the following code in our insert trigger:
declare @value int, @pk int -- declarations added for completeness
select @value = field, @pk = pk from inserted
update [table] set field = @value + 1 where pk = @pk
waitfor delay '00:00:15'
If we insert a row with the value 1 for "field", the row will end up with the value 2. Furthermore, if I open another window in SSMS and try:
select * from [table] where pk = @pk
where @pk is the primary key I originally inserted, the query will be empty until the 15 seconds expire and will then show the updated value (field=2).
I was interested in what data is visible to other queries while the trigger is executing (apparently no new data). I tested with an added delete as well:
select @value = field, @pk = pk from inserted
update [table] set field = @value + 1 where pk = @pk
delete from [table] where pk = @pk
waitfor delay '00:00:15'
Again, the insert took 15sec to execute. A query executing in a different session showed no new data - during or after execution of the insert + trigger (although I would expect any identity would increment even if no data appears to be inserted).

How do you lock tables in SQL Server 2005, and should I even do it?

This one will take some explaining. What I've done is create a specific custom message queue in SQL Server 2005. I have a table with messages that contain timestamps for both acknowledgment and completion. The stored procedure that callers execute to obtain the next message in their queue also acknowledges the message. So far so good. Well, if the system is experiencing a massive amount of transactions (thousands per minute), isn't it possible for a message to be acknowledged by one execution of the stored procedure while another is preparing to do so itself? Let me help by showing my SQL code in the stored proc:
--Grab the next message id
declare @MessageId uniqueidentifier
set @MessageId = (select top(1) ActionMessageId from UnacknowledgedDemands);
--Acknowledge the message
update ActionMessages
set AcknowledgedTime = getdate()
where ActionMessageId = @MessageId
--Select the entire message
...
...
In the above code, couldn't another stored procedure running at the same time obtain the same id and attempt to acknowledge it at the same time? Could I (or should I) implement some sort of locking to prevent another stored proc from acknowledging messages that another stored proc is querying?
Wow, did any of this even make sense? It's a bit difficult to put to words...
Something like this
--Grab the next message id
begin tran
declare @MessageId uniqueidentifier
select top 1 @MessageId = ActionMessageId from UnacknowledgedDemands with(holdlock, updlock);
--Acknowledge the message
update ActionMessages
set AcknowledgedTime = getdate()
where ActionMessageId = @MessageId
-- some error checking
commit tran
--Select the entire message
...
...
This seems like the kind of situation where OUTPUT can be useful:
-- Acknowledge and grab the next message
declare @message table (
-- ...your `ActionMessages` columns here...
)
update ActionMessages
set AcknowledgedTime = getdate()
output INSERTED.* into @message
where ActionMessageId in (select top(1) ActionMessageId from UnacknowledgedDemands)
and AcknowledgedTime is null
-- Use the data in @message, which will have zero or one rows assuming
-- `ActionMessageId` uniquely identifies a row (strongly implied in your question)
...
...
There, we update and grab the row in the same operation, which tells the query optimizer exactly what we're doing, allowing it to choose the most granular lock it can and maintain it for the briefest possible time. (Although the column prefix is INSERTED, OUTPUT is like triggers, expressed in terms of the UPDATE being like deleting the row and inserting the new one.)
I'd need more information about your ActionMessages and UnacknowledgedDemands tables (views/TVFs/whatever), not to mention a greater knowledge of SQL Server's automatic locking, to say whether that and AcknowledgedTime is null clause is necessary. It's there to defend against a race condition between the sub-select and the update. I'm certain it wouldn't be necessary if we were selecting from ActionMessages itself (e.g., where AcknowledgedTime is null with a top on the update, instead of the sub-select on UnacknowledgedDemands). I expect even if it's unnecessary, it's harmless.
Note that OUTPUT is in SQL Server 2005 and above. That's what you said you were using, but if compatibility with geriatric SQL Server 2000 installs were required, you'd want to go another way.
@Kilhoffer:
The whole SQL batch is parsed before execution, so SQL knows that you're going to do an update to the table as well as select from it.
Edit: Also, SQL Server will not necessarily lock the whole table; it could just lock the necessary rows. See here for an overview of locking in SQL Server.
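If you want to see which locks a statement actually takes, one way (SQL Server 2005 and later) is to query sys.dm_tran_locks from the same session while the transaction is still open:
SELECT resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID;
-- Row locks show up as KEY (index) or RID (heap) resources;
-- escalation to a table lock shows up as an OBJECT resource.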
Instead of explicit locking, which is often escalated by SQL Server to higher granularity than desired, why not just try this approach:
declare @MessageId uniqueidentifier
select top 1 @MessageId = ActionMessageId from UnacknowledgedDemands
update ActionMessages
set AcknowledgedTime = getdate()
where ActionMessageId = @MessageId and AcknowledgedTime is null
if @@rowcount > 0
/* acknowledge succeeded */
else
/* concurrent query acknowledged message before us,
go back and try another one */
The less you lock, the higher concurrency you have.
Should you really be processing things one by one? Shouldn't you just have SQL Server acknowledge all unacknowledged messages with today's date and return them? (All in a transaction, of course.)
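A sketch of that batch approach, assuming (as the UnacknowledgedDemands view implies) that an unacknowledged message is simply one with a NULL AcknowledgedTime:
begin tran
declare @acknowledged table (ActionMessageId uniqueidentifier primary key)
-- Acknowledge every outstanding message in one statement...
update ActionMessages
set AcknowledgedTime = getdate()
output INSERTED.ActionMessageId into @acknowledged
where AcknowledgedTime is null
-- ...and hand them all back to the caller.
select m.*
from ActionMessages m
join @acknowledged a on a.ActionMessageId = m.ActionMessageId
commit tran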
Read more about SQL Server Select Locking here and here. SQL Server has the ability to invoke a table lock on a select. Nothing will happen to the table during the transaction. When the transaction completes, any inserts or updates will then resolve themselves.
You want to wrap your code in a transaction; then SQL Server will handle locking the appropriate rows or tables.
begin transaction
--Grab the next message id
declare @MessageId uniqueidentifier
set @MessageId = (select top(1) ActionMessageId from UnacknowledgedDemands);
--Acknowledge the message
update ActionMessages
set AcknowledgedTime = getdate()
where ActionMessageId = @MessageId
commit transaction
--Select the entire message
...
