When a SQL job starts, what is the value of @@ROWCOUNT? - sql-server

I have been looking for an answer to this question, but I believe it may be a useful piece of information for others as well.
I am working with T-SQL in SQL Server Management Studio. Due to the way our system processes information, it's desirable to do updates in smaller batches. A trick we use is to wrap updates in a while loop, like this:
while (@@Rowcount <> 0)
begin
update top (800) etc etc
end
I have created a job to do this update regularly, and while this works in a query window, it does not seem to work in a job. Is @@ROWCOUNT populated when a job begins?

@@ROWCOUNT is 0 at the beginning of a batch. What is happening for you is that when SSMS first opens up a connection, it executes a series of queries behind the scenes (you can capture the specific queries with a trace), so you get a residual value for @@ROWCOUNT of 1.
When doing batch updates like this, I tend to take a slightly different approach:
WHILE 1 = 1
BEGIN
UPDATE TOP (100) ....
SET ...
IF ##ROWCOUNT = 0
BREAK;
END
I don't think this has any benefit whatsoever over doing something like:
SELECT 1;
WHILE @@ROWCOUNT > 0
BEGIN
...
END
And it is more long-winded, but doing a pointless select or assignment just feels odd to me, perhaps owing to some minor OCD.

@@ROWCOUNT is a system function that returns int NOT NULL, and the default value for int is 0.
You can get it by:
if @@rowcount = 0
print 1
else
print 0
But don't try to select @@ROWCOUNT on the first line: statements that make a simple assignment always set the @@ROWCOUNT value to 1. :)
So the solution is to add select 0 before your while. It works because the select 0 returns one row, so @@ROWCOUNT becomes 1.
select 0
while (@@Rowcount <> 0)
begin
update top (800) etc etc
end

Saving the value of @@ROWCOUNT in a variable allows you to avoid any assumption about the initial value and any problems with losing the value when another statement is executed.
declare @Samples as Table ( Sample int );
insert into @Samples ( Sample ) values ( 1 ), ( 2 ), ( 5 );
declare @RowCount as int;
-- If there is any work to do then initialize @RowCount to 1, otherwise 0.
set @RowCount = case when exists ( select 42 from @Samples where Sample < 10 ) then 1 else 0 end;
declare @NewSample as int, @OldSample as int;
-- Loop through, updating one row at a time.
while @RowCount > 0
begin
    update Ph
    set @OldSample = Sample, @NewSample = Sample *= 2
    from ( select top 1 Sample from @Samples where Sample < 10 order by Sample ) as Ph;
    set @RowCount = @@RowCount;
    -- Do what you will without losing the count of rows updated.
    select @RowCount as 'RowsProcessed', @OldSample as 'OldSample', @NewSample as 'NewSample';
end

Related

How to update large table with millions of rows in SQL Server?

I have an UPDATE statement which can update more than a million records. I want to update them in batches of 1000 or 10000. I tried with @@ROWCOUNT but I am unable to get the desired result.
Just for testing purposes, I selected a table with 14 records and set a row count of 5. This query is supposed to update the records in batches of 5, 5, and 4, but it just updates the first 5 records.
Query - 1:
SET ROWCOUNT 5
UPDATE TableName
SET Value = 'abc1'
WHERE Parameter1 = 'abc' AND Parameter2 = 123
WHILE @@ROWCOUNT > 0
BEGIN
SET rowcount 5
UPDATE TableName
SET Value = 'abc1'
WHERE Parameter1 = 'abc' AND Parameter2 = 123
PRINT (@@ROWCOUNT)
END
SET rowcount 0
Query - 2:
SET ROWCOUNT 5
WHILE (@@ROWCOUNT > 0)
BEGIN
BEGIN TRANSACTION
UPDATE TableName
SET Value = 'abc1'
WHERE Parameter1 = 'abc' AND Parameter2 = 123
PRINT (@@ROWCOUNT)
IF @@ROWCOUNT = 0
BEGIN
COMMIT TRANSACTION
BREAK
END
COMMIT TRANSACTION
END
SET ROWCOUNT 0
What am I missing here?
You should not be updating 10k rows in a set unless you are certain that the operation is getting Page Locks (due to multiple rows per page being part of the UPDATE operation). The issue is that Lock Escalation (from either Row or Page to Table locks) occurs at 5000 locks. So it is safest to keep it just below 5000, just in case the operation is using Row Locks.
You should not be using SET ROWCOUNT to limit the number of rows that will be modified. There are two issues here:
It has been deprecated since SQL Server 2005 was released (11 years ago):
Using SET ROWCOUNT will not affect DELETE, INSERT, and UPDATE statements in a future release of SQL Server. Avoid using SET ROWCOUNT with DELETE, INSERT, and UPDATE statements in new development work, and plan to modify applications that currently use it. For a similar behavior, use the TOP syntax
It can affect more than just the statement you are dealing with:
Setting the SET ROWCOUNT option causes most Transact-SQL statements to stop processing when they have been affected by the specified number of rows. This includes triggers. The ROWCOUNT option does not affect dynamic cursors, but it does limit the rowset of keyset and insensitive cursors. This option should be used with caution.
Instead, use the TOP () clause.
There is no purpose in having an explicit transaction here. It complicates the code and you have no handling for a ROLLBACK, which isn't even needed since each statement is its own transaction (i.e. auto-commit).
Assuming you find a reason to keep the explicit transaction, then you do not have a TRY / CATCH structure. Please see my answer on DBA.StackExchange for a TRY / CATCH template that handles transactions:
Are we required to handle Transaction in C# Code as well as in Store procedure
I suspect that the real WHERE clause is not being shown in the example code in the Question, so simply relying upon what has been shown, a better model (please see note below regarding performance) would be:
DECLARE @Rows INT,
        @BatchSize INT; -- keep below 5000 to be safe
SET @BatchSize = 2000;
SET @Rows = @BatchSize; -- initialize just to enter the loop
BEGIN TRY
    WHILE (@Rows = @BatchSize)
    BEGIN
        UPDATE TOP (@BatchSize) tab
        SET    tab.Value = 'abc1'
        FROM   TableName tab
        WHERE  tab.Parameter1 = 'abc'
        AND    tab.Parameter2 = 123
        AND    tab.Value <> 'abc1' COLLATE Latin1_General_100_BIN2;
        -- Use a binary Collation (ending in _BIN2, not _BIN) to make sure
        -- that you don't skip differences that compare the same due to
        -- insensitivity of case, accent, etc, or linguistic equivalence.
        SET @Rows = @@ROWCOUNT;
    END;
END TRY
BEGIN CATCH
    RAISERROR(stuff);
    RETURN;
END CATCH;
By testing @Rows against @BatchSize, you can avoid that final UPDATE query (in most cases) because the final set is typically some number of rows less than @BatchSize, in which case we know that there are no more to process (which is what you see in the output shown in your answer). Only in those cases where the final set of rows is equal to @BatchSize will this code run a final UPDATE affecting 0 rows.
I also added a condition to the WHERE clause to prevent rows that have already been updated from being updated again.
NOTE REGARDING PERFORMANCE
I emphasized "better" above (as in, "this is a better model") because this has several improvements over the O.P.'s original code, and works fine in many cases, but is not perfect for all cases. For tables of at least a certain size (which varies due to several factors so I can't be more specific), performance will degrade as there are fewer rows to fix if either:
there is no index to support the query, or
there is an index, but at least one column in the WHERE clause is a string data type that does not use a binary collation, hence a COLLATE clause is added to the query here to force the binary collation, and doing so invalidates the index (for this particular query).
This is the situation that @mikesigs encountered, thus requiring a different approach. The updated method copies the IDs for all rows to be updated into a temporary table, then uses that temp table to INNER JOIN to the table being updated on the clustered index key column(s). (It's important to capture and join on the clustered index columns, whether or not those are the primary key columns!)
Please see @mikesigs' answer below for details. The approach shown in that answer is a very effective pattern that I have used myself on many occasions. The only changes I would make are:
Explicitly create the #targetIds table rather than using SELECT INTO...
For the #targetIds table, declare a clustered primary key on the column(s).
For the #batchIds table, declare a clustered primary key on the column(s).
For inserting into #targetIds, use INSERT INTO #targetIds (column_name(s)) SELECT and remove the ORDER BY as it's unnecessary. (A sketch of these adjustments follows this list.)
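A minimal sketch of those adjustments, reusing the table and column names from the answer below, and assuming an integer Id clustered key (that answer uses UNIQUEIDENTIFIER for its batch table, so adjust the column type to match your actual clustered index):
-- Explicitly create the target-IDs table with a clustered primary key,
-- instead of relying on SELECT INTO.
CREATE TABLE #targetIds (Id INT NOT NULL PRIMARY KEY CLUSTERED);

-- Same for the per-batch table.
CREATE TABLE #batchIds (Id INT NOT NULL PRIMARY KEY CLUSTERED);

-- Load the IDs of all rows to be updated; no ORDER BY needed here.
INSERT INTO #targetIds (Id)
SELECT Id
FROM TheTable
WHERE Foo IS NULL;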
So, if you don't have an index that can be used for this operation, and can't temporarily create one that will actually work (a filtered index might work, depending on your WHERE clause for the UPDATE query), then try the approach shown in @mikesigs' answer (and if you use that solution, please up-vote it).
WHILE EXISTS (SELECT * FROM TableName WHERE Value <> 'abc1' AND Parameter1 = 'abc' AND Parameter2 = 123)
BEGIN
UPDATE TOP (1000) TableName
SET Value = 'abc1'
WHERE Parameter1 = 'abc' AND Parameter2 = 123 AND Value <> 'abc1'
END
I encountered this thread yesterday and wrote a script based on the accepted answer. It turned out to perform very slowly, taking 12 hours to process 25M of 33M rows. I wound up cancelling it this morning and working with a DBA to improve it.
The DBA pointed out that the IS NULL check in my UPDATE query was using a Clustered Index Scan on the PK, and it was the scan that was slowing the query down. Basically, the longer the query runs, the further it needs to look through the index for the right rows.
The approach he came up with was obvious in hindsight. Essentially, you load the IDs of the rows you want to update into a temp table, then join that onto the target table in the update statement. This uses an Index Seek instead of a Scan. And ho boy does it speed things up! It took 2 minutes to update the last 8M records.
Batching Using a Temp Table
SET NOCOUNT ON

DECLARE @Rows INT,
        @BatchSize INT,
        @Completed INT,
        @Total INT,
        @Message nvarchar(max)

SET @BatchSize = 4000
SET @Rows = @BatchSize
SET @Completed = 0

-- #targetIds table holds the IDs of ALL the rows you want to update
SELECT Id INTO #targetIds
FROM TheTable
WHERE Foo IS NULL
ORDER BY Id

-- Used for printing out the progress
SELECT @Total = @@ROWCOUNT

-- #batchIds table holds just the records updated in the current batch
CREATE TABLE #batchIds (Id UNIQUEIDENTIFIER);

-- Loop until #targetIds is empty
WHILE EXISTS (SELECT 1 FROM #targetIds)
BEGIN
    -- Remove a batch of rows from the top of #targetIds and put them into #batchIds
    DELETE TOP (@BatchSize)
    FROM #targetIds
    OUTPUT deleted.Id INTO #batchIds

    -- Update TheTable data
    UPDATE t
    SET Foo = 'bar'
    FROM TheTable t
    JOIN #batchIds tmp ON t.Id = tmp.Id
    WHERE t.Foo IS NULL

    -- Get the # of rows updated
    SET @Rows = @@ROWCOUNT

    -- Increment our @Completed counter, for progress display purposes
    SET @Completed = @Completed + @Rows

    -- Print progress using RAISERROR to avoid SQL buffering issue
    SELECT @Message = 'Completed ' + cast(@Completed as varchar(10)) + '/' + cast(@Total as varchar(10))
    RAISERROR(@Message, 0, 1) WITH NOWAIT

    -- Quick operation to delete all the rows from our batch table
    TRUNCATE TABLE #batchIds;
END

-- Clean up
DROP TABLE IF EXISTS #batchIds;
DROP TABLE IF EXISTS #targetIds;
Batching the slow way, do not use!
For reference, here is the original slower performing query:
SET NOCOUNT ON

DECLARE @Rows INT,
        @BatchSize INT,
        @Completed INT,
        @Total INT

SET @BatchSize = 4000
SET @Rows = @BatchSize
SET @Completed = 0

SELECT @Total = COUNT(*) FROM TheTable WHERE Foo IS NULL

WHILE (@Rows = @BatchSize)
BEGIN
    UPDATE TOP (@BatchSize) t
    SET Foo = 'bar'
    FROM TheTable t
    WHERE t.Foo IS NULL

    SET @Rows = @@ROWCOUNT
    SET @Completed = @Completed + @Rows
    PRINT 'Completed ' + cast(@Completed as varchar(10)) + '/' + cast(@Total as varchar(10))
END
This is a more efficient version of the solution from @Kramb. The existence check is redundant, as the update's WHERE clause already handles it. Instead you just grab the row count and compare it to the batch size.
Also note that @Kramb's solution didn't filter out already-updated rows from the next iteration, hence it would have been an infinite loop.
It also uses the modern TOP (n) batch-size syntax instead of SET ROWCOUNT.
DECLARE @batchSize INT, @rowsUpdated INT
SET @batchSize = 1000;
SET @rowsUpdated = @batchSize; -- Initialise for the while loop entry

WHILE (@batchSize = @rowsUpdated)
BEGIN
    UPDATE TOP (@batchSize) TableName
    SET Value = 'abc1'
    WHERE Parameter1 = 'abc' AND Parameter2 = 123 AND Value <> 'abc1';

    SET @rowsUpdated = @@ROWCOUNT;
END
I want to share my experience. A few days ago I had to update 21 million records in a table with 76 million records. My colleague suggested the following approach.
For example, we have the following table 'Persons':
Id | FirstName | LastName | Email        | JobTitle
1  | John      | Doe      | abc1@abc.com | Software Developer
2  | John1     | Doe1     | abc2@abc.com | Software Developer
3  | John2     | Doe2     | abc3@abc.com | Web Designer
Task: Update persons to the new Job Title: 'Software Developer' -> 'Web Developer'.
1. Create Temporary Table 'Persons_SoftwareDeveloper_To_WebDeveloper (Id INT Primary Key)'
2. Select into temporary table persons which you want to update with the new Job Title:
INSERT INTO Persons_SoftwareDeveloper_To_WebDeveloper SELECT Id FROM
Persons WITH(NOLOCK) --avoid lock
WHERE JobTitle = 'Software Developer'
OPTION(MAXDOP 1) -- use only one core
Depending on the row count, this statement will take some time to fill your temporary table, but it avoids locks. In my situation it took about 5 minutes (21 million rows).
3. The main idea is to generate micro SQL statements to update the database. So, let's print them:
DECLARE @i INT, @pagesize INT, @totalPersons INT
SET @i = 0
SET @pagesize = 2000
SELECT @totalPersons = MAX(Id) FROM Persons

WHILE @i <= @totalPersons
BEGIN
PRINT '
UPDATE persons
SET persons.JobTitle = ''Web Developer''
FROM Persons_SoftwareDeveloper_To_WebDeveloper tmp
JOIN Persons persons ON tmp.Id = persons.Id
WHERE persons.Id BETWEEN ' + cast(@i as varchar(20)) + ' AND ' + cast(@i + @pagesize as varchar(20)) + '
PRINT ''Page ' + cast((@i / @pagesize) as varchar(20)) + ' of ' + cast(@totalPersons / @pagesize as varchar(20)) + '''
GO
'
SET @i = @i + @pagesize
END
After executing this script you will receive hundreds of batches which you can execute in one tab of MS SQL Management Studio.
4. Run the printed SQL statements and check for locks on the table. You can always stop the process and play with @pagesize to speed the updating up or down (don't forget to change @i after you pause the script).
5. Drop Persons_SoftwareDeveloper_To_WebDeveloper. Remove the temporary table.
Minor note: this migration could take time, and new rows with invalid data could be inserted during the migration. So, first fix the places where your rows are added. In my situation I fixed the UI: 'Software Developer' -> 'Web Developer'.
More about this method on my blog https://yarkul.com/how-smoothly-insert-millions-of-rows-in-sql-server/
Your PRINT is messing things up, because it resets @@ROWCOUNT. Whenever you use @@ROWCOUNT, my advice is to always set it immediately to a variable. So:
DECLARE @RC int;

WHILE @RC > 0 or @RC IS NULL
BEGIN
    SET ROWCOUNT 5;

    UPDATE TableName
    SET Value = 'abc1'
    WHERE Parameter1 = 'abc' AND Parameter2 = 123 AND Value <> 'abc1';

    SET @RC = @@ROWCOUNT;
    PRINT(@RC);
END;

SET ROWCOUNT 0;
And, another nice feature is that you don't need to repeat the update code.
First of all, thank you all for your input. I tweaked my Query 1 and got my desired result. Gordon Linoff is right, PRINT was messing up my query, so I modified it as follows:
Modified Query - 1:
SET ROWCOUNT 5
WHILE (1 = 1)
BEGIN
BEGIN TRANSACTION
UPDATE TableName
SET Value = 'abc1'
WHERE Parameter1 = 'abc' AND Parameter2 = 123
IF @@ROWCOUNT = 0
BEGIN
COMMIT TRANSACTION
BREAK
END
COMMIT TRANSACTION
END
SET ROWCOUNT 0
Output:
(5 row(s) affected)
(5 row(s) affected)
(4 row(s) affected)
(0 row(s) affected)

SQL Server : Query not running as needed

I am working with Sage Evolution and do a lot of the back end stuff to customize it for our company.
I need to write a query so that when a user enters a negative quantity, the system does not allow the transaction; however, when the user enters a negative quantity and the product belongs to the 'chemicals' group, it needs to process the transaction.
Here is my code I have written so far.
DECLARE
    @iAfterfQuantity Int,
    @iAfteriStockCodeID Int,
    @iAfterStockItemGroup VarChar

SELECT
    @iAfterfQuantity = fQuantity,
    @iAfteriStockCodeID = iStockCodeID
FROM
    INSERTED

SELECT
    @iAfterStockItemGroup = ItemGroup
FROM
    dbo.stkItem
WHERE
    StockLink = @iAfteriStockCodeID

BEGIN
    IF @iAfterfQuantity < 0 AND @iAfterStockItemGroup <> 'chemicals'
    BEGIN
        RAISERROR ('', 16, 1)
        ROLLBACK TRANSACTION
    END
END
This is a task better suited for a check constraint than for a trigger, especially considering the fact that you are raising an error.
First, create the check function:
CREATE FUNCTION fn_FunctionName
(
    @iAfterfQuantity Int,
    @iAfteriStockCodeID Int
)
RETURNS bit
AS
BEGIN
    DECLARE @iAfterStockItemGroup VarChar(150) -- Must specify length!

    SELECT @iAfterStockItemGroup = ItemGroup FROM dbo.stkItem WHERE StockLink = @iAfteriStockCodeID

    IF @iAfterfQuantity < 0 AND @iAfterStockItemGroup <> 'chemicals'
        RETURN 0

    RETURN 1 -- will be executed only if the condition is false...
END
Then, alter your table to add the check constraint:
ALTER TABLE YourTableName
ADD CONSTRAINT ck_ConstraintName
CHECK (dbo.fn_FunctionName(fQuantity, iStockCodeID) = 1)
GO
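A quick, hypothetical way to verify the constraint once it is in place (the inserted values and column list here are illustrative, assuming fQuantity and iStockCodeID are the relevant columns):
-- Should fail with a check-constraint violation:
-- a negative quantity for an item that is not in the 'chemicals' group.
INSERT INTO YourTableName (fQuantity, iStockCodeID)
VALUES (-5, 42);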

Variable not reinitialized inside a WHILE loop

I have created a stored procedure in SQL Server 2008 and it contains the following code snippet:
WHILE (@COUNTER <= @COUNT_ROWSTOBEPROCESSED)
BEGIN
    DECLARE @INFID AS INT = 0;
    DECLARE @TPID AS INT = 0;

    SELECT @INFID = InfID FROM #TEMPTABLE WHERE @@ROWCOUNT = @COUNTER;
    PRINT @INFID;

    SET @COUNTER = @COUNTER + 1;
END
But it gives me the following result:
(106 row(s) affected)
0
-26
0
0
0
0
0
0
0
0 ... (all zeros)
I already have values in the temp table, but they are not assigned to the @INFID variable.
Why is this happening? I don't understand.
Please help me if anyone has a solution.
----Comment---
Yes, I have looked at my code, but I have already added the counter increment, and I still have the same problem.
When I ran the query below inside the loop, I found that @@ROWCOUNT is always 1.
SELECT InfID, @@ROWCOUNT FROM #TEMPTABLE WHERE @@ROWCOUNT = @COUNTER;
Can you tell me how to iterate over each row of a temp table using a WHILE loop or FOR loop in SQL?
WHERE @@ROWCOUNT = @COUNTER is incorrect.
@@ROWCOUNT returns the number of rows affected by the previous statement.
It will only evaluate to true once (when @COUNTER = 1), so the rest of the time no assignment is made to @INFID from the SELECT operation.
What are you trying to do? My best guess is that you need to add an IDENTITY column to your table and reference that instead of @@ROWCOUNT, or use a cursor. A sketch of the IDENTITY approach is below.
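A minimal sketch of that IDENTITY-column approach, assuming #TEMPTABLE holds just an InfID column (the #TEMPTABLE2 staging table is illustrative, not from the original question):
-- Copy the rows into a table that has an IDENTITY column to act as a row index.
CREATE TABLE #TEMPTABLE2 (RowNum INT IDENTITY(1,1), InfID INT);
INSERT INTO #TEMPTABLE2 (InfID)
SELECT InfID FROM #TEMPTABLE;

DECLARE @COUNTER INT = 1, @ROWS INT, @INFID INT;
SELECT @ROWS = COUNT(*) FROM #TEMPTABLE2;

WHILE @COUNTER <= @ROWS
BEGIN
    -- Look each row up by its index instead of by @@ROWCOUNT.
    SELECT @INFID = InfID FROM #TEMPTABLE2 WHERE RowNum = @COUNTER;
    PRINT @INFID;
    SET @COUNTER = @COUNTER + 1;
END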
But if you explain your end goal we may be able to tell you a set based way of achieving whatever your goal is without RBAR processing.

select TOP (all)

declare @t int
set @t = 10
if (o = 'mmm') set @t = -1
select top(@t) * from table
Generally I want it to return 10 rows, but occasionally I want all of them. I know I can do this through SET ROWCOUNT, but is there some special value, like -1, that causes TOP to return all rows?
The largest possible value that can be passed to TOP is 9223372036854775807, so you could just pass that.
Below I use the hexadecimal form of the maximum signed bigint, as it is easier to remember as long as you know the basic pattern and that bigint is 8 bytes.
declare @t bigint = case when some_condition then 10 else 0x7fffffffffffffff end;

select top(@t) *
from table
If you don't have an ORDER BY clause, the top 10 will just be any 10 rows, and is optimisation-dependent.
If you do have an ORDER BY clause to define the top 10 and an index to support it, then the plan for the query above should be fine for either possible value.
If you don't have a supporting index and the plan shows a sort, you should consider splitting it into two queries.
I'm not sure I understand your question, but if you sometimes want TOP and other times don't, just use an if / else construct:
if (condition)
    -- send TOP
    SELECT TOP 10 Blah FROM...
else
    SELECT blah1, blah2 FROM...
You can use dynamic SQL (but I, personally, try to avoid dynamic SQL), where you create a string of the statement you want to run from conditions or parameters.
There's also some good information here on how to do it without dynamic SQL:
https://web.archive.org/web/20150520123828/http://sqlserver2000.databases.aspfaq.com:80/how-do-i-use-a-variable-in-a-top-clause-in-sql-server.html
declare @top bigint = NULL
declare @top_max_value bigint = 9223372036854775807

if (@top IS NULL)
begin
    set @top = @top_max_value
end

select top(@top) *
from [YourTableName]
A dynamic SQL version isn't that hard to do.
CREATE PROCEDURE [dbo].[VariableTopSelect]
    -- Add the parameters for the stored procedure here
    @t int
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    declare @sql nvarchar(max)

    if (@t = 10)
    begin
        set @sql = 'select top (10) * from table'
    end
    else
    begin
        set @sql = 'select * from table'
    end

    exec sp_executesql @sql
END
with this sp, if they send 10 to the sp, it'll select the top 10, otherwise it'll select all.
The best solution I've found is to select the needed columns with all of your conditions into a temporary table, then do your conditional top:
DECLARE @TempTable TABLE(cols...)

INSERT INTO @TempTable
SELECT blah FROM ...

if (condition)
    SELECT TOP 10 * FROM @TempTable
else
    SELECT * FROM @TempTable
This way you follow DRY, get your conditional TOP, and the result is just as easy to read.
Cheers.
It is also possible with a UNION and a parameter
SELECT DISTINCT TOP 10
    Column1, Column2
FROM Table
WHERE @ShowAllResults = 0

UNION

SELECT DISTINCT
    Column1, Column2
FROM Table
WHERE @ShowAllResults = 1
I might be too late now, or getting too old, but I solved this by using TOP(100) PERCENT, which gets around all the complexity:
SELECT TOP(100) PERCENT * FROM tablename;
Use the statement SET ROWCOUNT @recordCount at the beginning of the query. The variable @recordCount can be any positive integer; use 0 to return all records.
That means SET ROWCOUNT 0 will return all records, and SET ROWCOUNT 15 will return only the TOP 15 rows of the result set.
The drawback can be a performance hit when dealing with a large number of records. Also, SET ROWCOUNT is effective throughout the scope of execution of the whole query. See the sketch below.
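A minimal sketch of that approach (the table name is illustrative):
DECLARE @recordCount int = 15; -- use 0 to return all rows

SET ROWCOUNT @recordCount;     -- limits rows returned by subsequent statements
SELECT * FROM dbo.SomeTable;
SET ROWCOUNT 0;                -- reset so later statements are unaffected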

"Subquery returned more than 1 value. This is not permitted" error in AFTER INSERT, UPDATE trigger

Once again: I have the trigger below, whose function is to keep/set the value in column esb to 0 for at most 1 row (in each row the value cycles through Q -> 0 -> R -> 1).
When I insert more than 1 row, the trigger fails with a "Subquery returned more than 1 value. This is not permitted when the subquery follows..." error on row 38, the "IF ((SELECT esb FROM INSERTED) in ('1','Q'))" statement.
I understand that 'SELECT esb FROM INSERTED' will return all rows of the insert, but I do not know how to process one row at a time. I also tried creating a temporary table and iterating through the result set, but subsequently found out that temporary tables based on the INSERTED table are not allowed.
Any suggestions are welcome (again).
set ANSI_NULLS ON
set QUOTED_IDENTIFIER ON
go

ALTER TRIGGER [TR_PHOTO_AIU]
ON [SOA].[dbo].[photos_TEST]
AFTER INSERT,UPDATE
AS
DECLARE @MAXCONC INT -- Maximum concurrent processes
DECLARE @CONC INT    -- Actual concurrent processes
SET @MAXCONC = 1     -- 1 concurrent process

-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON

-- If column esb is involved in the update, does not necessarily mean
-- that the column itself is updated
If ( Update(ESB) )
BEGIN
    -- If column esb has been changed to 1 or Q
    IF ((SELECT esb FROM INSERTED) in ('1','Q'))
    BEGIN
        -- count the number of (imminent) active processes
        SET @CONC = (SELECT COUNT(*)
                     FROM SOA.dbo.photos_TEST pc
                     WHERE pc.esb in ('0','R'))

        -- if maximum has not been reached
        IF NOT ( @CONC >= @MAXCONC )
        BEGIN
            -- set additional rows esb to '0' to match @MAXCONC
            UPDATE TOP(@MAXCONC-@CONC) p2
            SET p2.esb = '0'
            FROM ( SELECT TOP(@MAXCONC-@CONC) p1.esb
                   FROM SOA.dbo.photos_TEST p1
                   INNER JOIN INSERTED i ON i.Value = p1.Value
                                        AND i.ArrivalDateTime > p1.ArrivalDateTime
                   WHERE p1.esb = 'Q'
                   ORDER BY p1.arrivaldatetime ASC
                 ) p2
        END
    END
END
Try to rewrite your IF as:
IF EXISTS(SELECT 1 FROM INSERTED WHERE esb IN ('1','Q'))
...
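In context, the check inside the trigger would become something like this (a sketch only; the rest of the trigger body is unchanged):
If ( Update(ESB) )
BEGIN
    -- Safe for multi-row inserts: true if ANY inserted row has esb '1' or 'Q'.
    IF EXISTS (SELECT 1 FROM INSERTED WHERE esb IN ('1','Q'))
    BEGIN
        -- ... original counting and UPDATE TOP logic ...
    END
END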
