Optimistic concurrency on multi-table complex entity - sql-server

I have a complex entity (let's call it Thing) which is represented in SQL Server as many tables: one parent table dbo.Thing with several child tables dbo.ThingBodyPart, dbo.ThingThought, etc. We've implemented optimistic concurrency using a single rowversion column on dbo.Thing, using the UPDATE OUTPUT INTO technique. This has been working great, until we added a trigger to dbo.Thing. I'm looking for advice in choosing a different approach, because I'm fairly convinced that my current approach cannot be fixed.
Here is our current code:
CREATE PROCEDURE dbo.UpdateThing
    @id uniqueidentifier,
    -- ...
    -- ... other parameters describing what to update...
    -- ...
    @rowVersion binary(8) OUTPUT
AS
BEGIN TRANSACTION;
BEGIN TRY
    -- ...
    -- ... update lots of Thing's child rows...
    -- ...
    DECLARE @t TABLE (
        [RowVersion] binary(8) NOT NULL
    );

    UPDATE dbo.Thing
    SET ModifiedUtc = sysutcdatetime()
    OUTPUT INSERTED.[RowVersion] INTO @t
    WHERE
        Id = @id
        AND [RowVersion] = @rowVersion;

    IF @@ROWCOUNT = 0 RAISERROR('Thing has been updated by another user.', 16, 1);

    COMMIT;

    SELECT @rowVersion = [RowVersion] FROM @t;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK;
    EXEC usp_Rethrow_Error;
END CATCH
This worked absolutely beautifully, until we added an INSTEAD OF UPDATE trigger to dbo.Thing. Now the stored procedure no longer returns the new @rowVersion value, but returns the old unmodified value. I'm at a loss. Are there other ways to approach optimistic concurrency that would be as effective and easy as the one above, but would also work with triggers?
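The question doesn't include the trigger itself, so the following is a hypothetical minimal sketch of the kind of INSTEAD OF UPDATE trigger that reproduces the behavior (any such trigger that re-issues the update will do):

CREATE TRIGGER dbo.Thing_InsteadOfUpdate
ON dbo.Thing
INSTEAD OF UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Re-issue the update against the base table. A real trigger would
    -- forward every updated column; ModifiedUtc stands in for them here.
    UPDATE t
    SET ModifiedUtc = i.ModifiedUtc
    FROM dbo.Thing AS t
    INNER JOIN inserted AS i ON i.Id = t.Id;
END;

With such a trigger in place, the outer UPDATE's OUTPUT clause captures the row before the trigger's own UPDATE runs, which is exactly the stale rowversion demonstrated below.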
To illustrate what exactly goes wrong with this code, consider this test code:
DECLARE
    @id uniqueidentifier = 'b0442c71-dbcb-4e0c-a178-1a01b9efaf0f',
    @oldRowVersion binary(8),
    @newRowVersion binary(8),
    @expected binary(8);

SELECT @oldRowVersion = [RowVersion]
FROM dbo.Thing
WHERE Id = @id;

PRINT '@oldRowVersion = ' + convert(char(18), @oldRowVersion, 1);

DECLARE @t TABLE (
    [RowVersion] binary(8) NOT NULL
);

UPDATE dbo.Thing
SET ModifiedUtc = sysutcdatetime()
OUTPUT INSERTED.[RowVersion] INTO @t
WHERE
    Id = @id
    AND [RowVersion] = @oldRowVersion;

PRINT '@@ROWCOUNT = ' + convert(varchar(10), @@ROWCOUNT);

SELECT @newRowVersion = [RowVersion] FROM @t;

PRINT '@newRowVersion = ' + convert(char(18), @newRowVersion, 1);

SELECT @expected = [RowVersion]
FROM dbo.Thing
WHERE Id = @id;

PRINT '@expected = ' + convert(char(18), @expected, 1);

IF @newRowVersion = @expected PRINT 'Pass!'
ELSE PRINT 'Fail. :('
When the trigger is not present, this code correctly outputs:
@oldRowVersion = 0x0000000000016CDC
(1 row(s) affected)
@@ROWCOUNT = 1
@newRowVersion = 0x000000000004E9D1
@expected = 0x000000000004E9D1
Pass!
When the trigger is present, we do not receive the expected value:
@oldRowVersion = 0x0000000000016CDC
(1 row(s) affected)
(1 row(s) affected)
@@ROWCOUNT = 1
@newRowVersion = 0x0000000000016CDC
@expected = 0x000000000004E9D1
Fail. :(
Any ideas for a different approach?
I was assuming that an UPDATE is an atomic operation, which it is, except apparently when there are triggers. Am I wrong? This seems really bad, in my opinion, with potential concurrency bugs lurking behind every statement. If the trigger really is INSTEAD OF, shouldn't I get back the correct rowversion, as though the trigger's UPDATE were the one I actually executed? Is this a SQL Server bug?

One of my esteemed co-workers, Jonathan MacCollum, pointed me to this bit of documentation:
INSERTED
Is a column prefix that specifies the value added by the insert or update operation.
Columns prefixed with INSERTED reflect the value after the UPDATE, INSERT, or MERGE statement is completed but before triggers are executed.
From this, I presume that I need to modify my stored procedure, splitting the one UPDATE into an UPDATE followed by a SELECT [RowVersion] ....
UPDATE dbo.Thing
SET ModifiedUtc = sysutcdatetime()
WHERE
    Id = @id
    AND [RowVersion] = @rowVersion;

IF @@ROWCOUNT = 0 RAISERROR('Thing has been updated by another user.', 16, 1);

COMMIT;

SELECT @rowVersion = [RowVersion]
FROM dbo.Thing
WHERE Id = @id;
I think I can still rest assured that my stored procedure is not accidentally overwriting anybody else's changes, but I should no longer assume that the data that the caller of the stored procedure holds is still up-to-date. There's a chance that the new @rowVersion value returned by the stored procedure is actually the result of someone else's update, not mine. So actually, there's no point in returning the @rowVersion at all. After executing this stored procedure, the caller should re-fetch the Thing and all of its child records in order to be sure its picture of the data is consistent.
... which further leads me to conclude that rowversion columns are not the best choice for implementing optimistic locking, which sadly is their sole purpose. I would be much better off using a manually incremented int column, with a query like:
UPDATE dbo.Thing
SET Version = @version + 1
WHERE
    Id = @id
    AND Version = @version;
The Version column is checked and incremented in a single atomic operation, so there's no chance for other statements to slip in-between. I don't have to ask the database what the new value is, because I told it what the new value is. As long as the Version column contains the value I'm expecting (and assuming all other people updating this row are also playing by the rules - correctly incrementing Version), I can know that the Thing is still exactly as I left it. At least, I think...
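For concreteness, here is a sketch of the core of UpdateThing under that scheme (assuming a Version int NOT NULL column on dbo.Thing):

UPDATE dbo.Thing
SET ModifiedUtc = sysutcdatetime(),
    Version = @version + 1
WHERE
    Id = @id
    AND Version = @version;

IF @@ROWCOUNT = 0 RAISERROR('Thing has been updated by another user.', 16, 1);

-- No OUTPUT clause is needed: if the update succeeded, the new version
-- is @version + 1 by definition, trigger or no trigger.
SET @version = @version + 1;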

Related

How to read updated data in a stored procedure called multiple times simultaneously

There are 2 tables:
Wallets;
Transactions.
There is a stored procedure that, every time it is called, handles (in what I thought was an ACID operation):
updating the Wallets table;
inserting one row into the Transactions table.
The issue occurs when there are many calls to the SP at the same time: the value of PreviousBalance is not correct (sequentially wrong), because the SP reads an old value while another call is still running.
To understand better, consider the following example (taken from a screenshot of the Transactions table).
There are 3 transactions with the same DT (IDs 1289, 1288, 1287), and in all of them PreviousBalance is equal, which is not correct, because:
Trx ID 1288 should be 180,78, the Balance of the previous row;
Trx ID 1289 should be 168,07 = 180,78 - 12,08.
I think the issue is in the SET of the @OLDBalance variable; all 3 threads read the same value at the same time, so when the SP gets to the INSERT it loads the same value for PreviousBalance.
How can I make sure @OLDBalance is read correctly after the commit of each operation?
I tried setting several isolation levels in the SP; the result was the same, and sometimes it failed with a deadlock.
I have the following stored procedure:
Stored Procedure
ALTER PROCEDURE [dbo].[upsMovimenta_internal]
    @AccountID int,
    @Amount money,
    @TypeTransactionID int,
    @ProductID int,
    @notes nvarchar(max)
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @OLDBalance MONEY;
    DECLARE @PreviousBalance MONEY;
    DECLARE @CurrentBalance MONEY;
    DECLARE @Molt float;

    BEGIN TRANSACTION;

    IF NOT EXISTS( SELECT * FROM Accounts WHERE AccountID = @AccountID)
    BEGIN
        RAISERROR(N'Account not found ', 10, 1, N'number', 3);
        RETURN (1)
    END

    SELECT @Molt = Moltiplicatore
    FROM TypeTransactions
    WHERE TypeTransactionID = @TypeTransactionID;

    IF (@Molt is null)
    BEGIN
        RAISERROR(N'Error transaction', 10, 1, N'number', 3);
        RETURN (1)
    END

    SET @Amount = @Amount * @Molt;

    --SELECT * FROM Wallets
    SELECT TOP 1 @OLDBalance = TotalAmount
    FROM Wallets
    WHERE AccountID = @AccountID;

    SET @CurrentBalance = @OLDBalance + @Amount;

    IF (@ProductID = 1)
    BEGIN
        UPDATE Wallets
        SET TotalAmount += @Amount,
            Cash += @Amount
        WHERE AccountID = @AccountID;
    END

    IF (@ProductID = 2)
    BEGIN
        UPDATE Wallets
        SET TotalAmount += @Amount,
            Fun += @Amount
        WHERE AccountID = @AccountID;
    END

    INSERT INTO Transactions
        ( AccountID, ProductID, DT, TypeTransactionID, Amout, Balance, PreviousBalance, Notes )
    VALUES
        ( @AccountID, @ProductID, GETDATE(), @TypeTransactionID, @Amount, @CurrentBalance, @OLDBalance, @notes);

    COMMIT TRANSACTION;
    RETURN (0)
END
Thank you so much guys
Generally, one way of managing locks on records is to apply a dummy update on the rows you want to work on, right after starting the transaction.
In that case SQL Server guarantees that those rows are locked and no other transaction can access them. So you can change your design to something like this:
begin tran
update myTable
set Field1 = Field1
where someKeyField = 212
-- do the same for other tables that you want to protect against change
-- at this moment all working rows will be locked, other procedure calls will be on hold
-- do your main operations here
commit tran
The issue with this is that other proc calls will wait, and this may degrade performance or even time out if traffic is high and the operation in this proc is lengthy.
If you are working in a high-transaction environment, you need to change your design.
Update: Design Suggestion
I don't get why you have PreviousBalance and Balance in your Transactions table (it is against the design rules, though you can override a rule in a special case).
Probably you have them to speed up your calculations or to make your queries simpler. But it is not good practice in an OLTP database.
The rules say you keep the Amount column and calculate PreviousBalance and Balance somewhere else.
You should drop PreviousBalance but keep the Balance column, and every time you insert a transaction, you update (increase/decrease) the Balance column. Plus you need to initialize the Balance column at the first transaction.
This is what I can think of. If I knew your whole system, I might be able to offer better ideas.
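For illustration only, here is a sketch of a different technique from the dummy-update lock above: fold the read into the UPDATE itself with an OUTPUT clause, so the old and new balances are captured by the same atomic statement that changes them (column names are taken from the question; the Cash/Fun updates are omitted for brevity):

DECLARE @bal TABLE (OldBalance money, NewBalance money);

-- One atomic statement: there is no window between reading the old
-- balance and writing the new one for a concurrent caller to slip into.
UPDATE Wallets
SET TotalAmount += @Amount
OUTPUT deleted.TotalAmount, inserted.TotalAmount INTO @bal (OldBalance, NewBalance)
WHERE AccountID = @AccountID;

INSERT INTO Transactions
    (AccountID, ProductID, DT, TypeTransactionID, Amout, Balance, PreviousBalance, Notes)
SELECT @AccountID, @ProductID, GETDATE(), @TypeTransactionID,
       @Amount, NewBalance, OldBalance, @notes
FROM @bal;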

How to update large table with millions of rows in SQL Server?

I have an UPDATE statement which can update more than a million records. I want to update them in batches of 1000 or 10000. I tried with @@ROWCOUNT but I was unable to get the desired result.
Just for testing purposes, I selected a table with 14 records and set a row count of 5. This query is supposed to update records in batches of 5, 5, and 4, but it only updates the first 5 records.
Query - 1:
SET ROWCOUNT 5

UPDATE TableName
SET Value = 'abc1'
WHERE Parameter1 = 'abc' AND Parameter2 = 123

WHILE @@ROWCOUNT > 0
BEGIN
    SET ROWCOUNT 5

    UPDATE TableName
    SET Value = 'abc1'
    WHERE Parameter1 = 'abc' AND Parameter2 = 123

    PRINT (@@ROWCOUNT)
END

SET ROWCOUNT 0
Query - 2:
SET ROWCOUNT 5

WHILE (@@ROWCOUNT > 0)
BEGIN
    BEGIN TRANSACTION

    UPDATE TableName
    SET Value = 'abc1'
    WHERE Parameter1 = 'abc' AND Parameter2 = 123

    PRINT (@@ROWCOUNT)

    IF @@ROWCOUNT = 0
    BEGIN
        COMMIT TRANSACTION
        BREAK
    END

    COMMIT TRANSACTION
END

SET ROWCOUNT 0
What am I missing here?
You should not be updating 10k rows in a set unless you are certain that the operation is getting Page Locks (due to multiple rows per page being part of the UPDATE operation). The issue is that Lock Escalation (from either Row or Page to Table locks) occurs at 5000 locks. So it is safest to keep it just below 5000, just in case the operation is using Row Locks.
You should not be using SET ROWCOUNT to limit the number of rows that will be modified. There are two issues here:
It has been deprecated since SQL Server 2005 was released (11 years ago):
Using SET ROWCOUNT will not affect DELETE, INSERT, and UPDATE statements in a future release of SQL Server. Avoid using SET ROWCOUNT with DELETE, INSERT, and UPDATE statements in new development work, and plan to modify applications that currently use it. For a similar behavior, use the TOP syntax
It can affect more than just the statement you are dealing with:
Setting the SET ROWCOUNT option causes most Transact-SQL statements to stop processing when they have been affected by the specified number of rows. This includes triggers. The ROWCOUNT option does not affect dynamic cursors, but it does limit the rowset of keyset and insensitive cursors. This option should be used with caution.
Instead, use the TOP () clause.
There is no purpose in having an explicit transaction here. It complicates the code and you have no handling for a ROLLBACK, which isn't even needed since each statement is its own transaction (i.e. auto-commit).
Even if you find a reason to keep the explicit transaction, you do not have a TRY / CATCH structure. Please see my answer on DBA.StackExchange for a TRY / CATCH template that handles transactions:
Are we required to handle Transaction in C# Code as well as in Store procedure
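For reference, the minimal shape of that pattern looks something like this (the linked template is more thorough):

BEGIN TRY
    BEGIN TRAN;
    -- ... the batched work ...
    COMMIT TRAN;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRAN;
    THROW; -- re-raise the original error to the caller (SQL Server 2012+)
END CATCH;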
I suspect that the real WHERE clause is not being shown in the example code in the Question, so simply relying upon what has been shown, a better model (please see note below regarding performance) would be:
DECLARE @Rows INT,
        @BatchSize INT; -- keep below 5000 to be safe

SET @BatchSize = 2000;
SET @Rows = @BatchSize; -- initialize just to enter the loop

BEGIN TRY
    WHILE (@Rows = @BatchSize)
    BEGIN
        UPDATE TOP (@BatchSize) tab
        SET tab.Value = 'abc1'
        FROM TableName tab
        WHERE tab.Parameter1 = 'abc'
        AND   tab.Parameter2 = 123
        AND   tab.Value <> 'abc1' COLLATE Latin1_General_100_BIN2;
        -- Use a binary Collation (ending in _BIN2, not _BIN) to make sure
        -- that you don't skip differences that compare the same due to
        -- insensitivity of case, accent, etc, or linguistic equivalence.

        SET @Rows = @@ROWCOUNT;
    END;
END TRY
BEGIN CATCH
    RAISERROR(stuff);
    RETURN;
END CATCH;
By testing @Rows against @BatchSize, you can avoid that final UPDATE query (in most cases) because the final set is typically some number of rows less than @BatchSize, in which case we know that there are no more to process (which is what you see in the output shown in your answer). Only in those cases where the final set of rows is equal to @BatchSize will this code run a final UPDATE affecting 0 rows.
I also added a condition to the WHERE clause to prevent rows that have already been updated from being updated again.
NOTE REGARDING PERFORMANCE
I emphasized "better" above (as in, "this is a better model") because this has several improvements over the O.P.'s original code, and works fine in many cases, but is not perfect for all cases. For tables of at least a certain size (which varies due to several factors so I can't be more specific), performance will degrade as there are fewer rows to fix if either:
there is no index to support the query, or
there is an index, but at least one column in the WHERE clause is a string data type that does not use a binary collation, hence a COLLATE clause is added to the query here to force the binary collation, and doing so invalidates the index (for this particular query).
This is the situation that @mikesigs encountered, thus requiring a different approach. The updated method copies the IDs for all rows to be updated into a temporary table, then uses that temp table to INNER JOIN to the table being updated on the clustered index key column(s). (It's important to capture and join on the clustered index columns, whether or not those are the primary key columns!)
Please see @mikesigs' answer below for details. The approach shown in that answer is a very effective pattern that I have used myself on many occasions. The only changes I would make are:
Explicitly create the #targetIds table rather than using SELECT INTO...
For the #targetIds table, declare a clustered primary key on the column(s).
For the #batchIds table, declare a clustered primary key on the column(s).
For inserting into #targetIds, use INSERT INTO #targetIds (column_name(s)) SELECT and remove the ORDER BY as it's unnecessary.
So, if you don't have an index that can be used for this operation, and can't temporarily create one that will actually work (a filtered index might work, depending on your WHERE clause for the UPDATE query), then try the approach shown in @mikesigs' answer, sketched below (and if you use that solution, please up-vote it).
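Sketched out, those adjustments might look like this (table and column names follow @mikesigs' example below):

-- Explicitly created temp tables with clustered primary keys,
-- instead of SELECT ... INTO
CREATE TABLE #targetIds (Id UNIQUEIDENTIFIER NOT NULL PRIMARY KEY CLUSTERED);
CREATE TABLE #batchIds  (Id UNIQUEIDENTIFIER NOT NULL PRIMARY KEY CLUSTERED);

-- Explicit column list, and no ORDER BY (it is unnecessary here)
INSERT INTO #targetIds (Id)
SELECT Id
FROM TheTable
WHERE Foo IS NULL;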
WHILE EXISTS (SELECT * FROM TableName WHERE Value <> 'abc1' AND Parameter1 = 'abc' AND Parameter2 = 123)
BEGIN
    UPDATE TOP (1000) TableName
    SET Value = 'abc1'
    WHERE Parameter1 = 'abc' AND Parameter2 = 123 AND Value <> 'abc1'
END
I encountered this thread yesterday and wrote a script based on the accepted answer. It turned out to perform very slowly, taking 12 hours to process 25M of 33M rows. I wound up cancelling it this morning and working with a DBA to improve it.
The DBA pointed out that the IS NULL check in my UPDATE query was using a Clustered Index Scan on the PK, and it was the scan that was slowing the query down. Basically, the longer the query runs, the further it needs to look through the index for the right rows.
The approach he came up with was obvious in hindsight. Essentially, you load the IDs of the rows you want to update into a temp table, then join that onto the target table in the update statement. This uses an Index Seek instead of a Scan. And ho boy does it speed things up! It took 2 minutes to update the last 8M records.
Batching Using a Temp Table
SET NOCOUNT ON

DECLARE @Rows INT,
        @BatchSize INT,
        @Completed INT,
        @Total INT,
        @Message nvarchar(max)

SET @BatchSize = 4000
SET @Rows = @BatchSize
SET @Completed = 0

-- #targetIds table holds the IDs of ALL the rows you want to update
SELECT Id into #targetIds
FROM TheTable
WHERE Foo IS NULL
ORDER BY Id

-- Used for printing out the progress
SELECT @Total = @@ROWCOUNT

-- #batchIds table holds just the records updated in the current batch
CREATE TABLE #batchIds (Id UNIQUEIDENTIFIER);

-- Loop until #targetIds is empty
WHILE EXISTS (SELECT 1 FROM #targetIds)
BEGIN
    -- Remove a batch of rows from the top of #targetIds and put them into #batchIds
    DELETE TOP (@BatchSize)
    FROM #targetIds
    OUTPUT deleted.Id INTO #batchIds

    -- Update TheTable data
    UPDATE t
    SET Foo = 'bar'
    FROM TheTable t
    JOIN #batchIds tmp ON t.Id = tmp.Id
    WHERE t.Foo IS NULL

    -- Get the # of rows updated
    SET @Rows = @@ROWCOUNT

    -- Increment our @Completed counter, for progress display purposes
    SET @Completed = @Completed + @Rows

    -- Print progress using RAISERROR to avoid SQL buffering issue
    SELECT @Message = 'Completed ' + cast(@Completed as varchar(10)) + '/' + cast(@Total as varchar(10))
    RAISERROR(@Message, 0, 1) WITH NOWAIT

    -- Quick operation to delete all the rows from our batch table
    TRUNCATE TABLE #batchIds;
END

-- Clean up
DROP TABLE IF EXISTS #batchIds;
DROP TABLE IF EXISTS #targetIds;
Batching the slow way, do not use!
For reference, here is the original slower performing query:
SET NOCOUNT ON

DECLARE @Rows INT,
        @BatchSize INT,
        @Completed INT,
        @Total INT

SET @BatchSize = 4000
SET @Rows = @BatchSize
SET @Completed = 0

SELECT @Total = COUNT(*) FROM TheTable WHERE Foo IS NULL

WHILE (@Rows = @BatchSize)
BEGIN
    UPDATE TOP (@BatchSize) t
    SET Foo = 'bar'
    FROM TheTable t
    WHERE t.Foo IS NULL

    SET @Rows = @@ROWCOUNT
    SET @Completed = @Completed + @Rows

    PRINT 'Completed ' + cast(@Completed as varchar(10)) + '/' + cast(@Total as varchar(10))
END
This is a more efficient version of the solution from @Kramb. The existence check is redundant, as the UPDATE's WHERE clause already handles it; instead you just grab the row count and compare it to the batch size.
Also note that @Kramb's solution didn't filter already-updated rows out of the next iteration, which would make it an infinite loop.
It also uses the modern TOP batch-size syntax instead of SET ROWCOUNT.
DECLARE @batchSize INT, @rowsUpdated INT
SET @batchSize = 1000;
SET @rowsUpdated = @batchSize; -- Initialise for the while loop entry

WHILE (@batchSize = @rowsUpdated)
BEGIN
    UPDATE TOP (@batchSize) TableName
    SET Value = 'abc1'
    WHERE Parameter1 = 'abc' AND Parameter2 = 123 and Value <> 'abc1';

    SET @rowsUpdated = @@ROWCOUNT;
END
I want to share my experience. A few days ago I had to update 21 million records in a table with 76 million records. My colleague suggested the following variant.
For example, we have the following table 'Persons':
Id | FirstName | LastName | Email        | JobTitle
1  | John      | Doe      | abc1@abc.com | Software Developer
2  | John1     | Doe1     | abc2@abc.com | Software Developer
3  | John2     | Doe2     | abc3@abc.com | Web Designer
Task: Update persons to the new Job Title: 'Software Developer' -> 'Web Developer'.
1. Create Temporary Table 'Persons_SoftwareDeveloper_To_WebDeveloper (Id INT Primary Key)'
2. Select into the temporary table the persons you want to update with the new Job Title:
INSERT INTO Persons_SoftwareDeveloper_To_WebDeveloper
SELECT Id FROM Persons WITH(NOLOCK) --avoid lock
WHERE JobTitle = 'Software Developer'
OPTION(MAXDOP 1) -- use only one core
Depending on the row count, this statement will take some time to fill your temporary table, but it avoids locks. In my situation it took about 5 minutes (21 million rows).
3. The main idea is to generate micro SQL statements to update the database. So, let's print them:
DECLARE @i INT, @pagesize INT, @totalPersons INT
SET @i = 0
SET @pagesize = 2000

SELECT @totalPersons = MAX(Id) FROM Persons

WHILE @i <= @totalPersons
BEGIN
    PRINT '
UPDATE persons
SET persons.JobTitle = ''Web Developer''
FROM Persons_SoftwareDeveloper_To_WebDeveloper tmp
JOIN Persons persons ON tmp.Id = persons.Id
WHERE persons.Id BETWEEN ' + cast(@i as varchar(20)) + ' AND ' + cast(@i + @pagesize as varchar(20)) + '
PRINT ''Page ' + cast((@i / @pagesize) as varchar(20)) + ' of ' + cast(@totalPersons / @pagesize as varchar(20)) + '
GO
'
    SET @i = @i + @pagesize
END
After executing this script you will receive hundreds of batches which you can execute in one tab of MS SQL Management Studio.
4. Run the printed SQL statements and check for locks on the table. You can always stop the process and play with @pagesize to speed up or slow down the updating (don't forget to change @i after you pause the script).
5. Drop Persons_SoftwareDeveloper_To_WebDeveloper. Remove the temporary table.
Minor note: this migration can take time, and new rows with stale data could be inserted during the migration. So first fix the places where your rows are added. In my situation I fixed the UI: 'Software Developer' -> 'Web Developer'.
More about this method on my blog https://yarkul.com/how-smoothly-insert-millions-of-rows-in-sql-server/
Your PRINT is messing things up, because it resets @@ROWCOUNT. Whenever you use @@ROWCOUNT, my advice is to always set it immediately to a variable. So:
DECLARE @RC int;

WHILE @RC > 0 or @RC IS NULL
BEGIN
    SET ROWCOUNT 5;

    UPDATE TableName
    SET Value = 'abc1'
    WHERE Parameter1 = 'abc' AND Parameter2 = 123 AND Value <> 'abc1';

    SET @RC = @@ROWCOUNT;
    PRINT @RC;
END;

SET ROWCOUNT 0;
And, another nice feature is that you don't need to repeat the update code.
First of all, thank you all for your input. I tweaked my Query - 1 and got the desired result. Gordon Linoff is right: PRINT was messing up my query, so I modified it as follows:
Modified Query - 1:
SET ROWCOUNT 5

WHILE (1 = 1)
BEGIN
    BEGIN TRANSACTION

    UPDATE TableName
    SET Value = 'abc1'
    WHERE Parameter1 = 'abc' AND Parameter2 = 123

    IF @@ROWCOUNT = 0
    BEGIN
        COMMIT TRANSACTION
        BREAK
    END

    COMMIT TRANSACTION
END

SET ROWCOUNT 0
Output:
(5 row(s) affected)
(5 row(s) affected)
(4 row(s) affected)
(0 row(s) affected)

Duplicate Auto Numbers generated in SQL Server

Be gentle, I'm a SQL newbie. I have a table named autonumber_settings like this:
Prefix | AutoNumber
SO | 112320
CA | 3542
Whenever a new sales line is created, a stored procedure is called that reads the current autonumber value from the 'SO' row, increments the number, updates that same row, and returns the number. The stored procedure is below:
ALTER PROCEDURE [dbo].[GetAutoNumber]
(
    @type nvarchar(50),
    @out nvarchar(50) = '' OUTPUT
)
as
    set nocount on

    declare @currentvalue nvarchar(50)
    declare @prefix nvarchar(10)

    if exists (select * from autonumber_settings where lower(autonumber_type) = lower(@type))
    begin
        select @prefix = isnull(autonumber_prefix, ''), @currentvalue = autonumber_currentvalue
        from autonumber_settings
        where lower(autonumber_type) = lower(@type)

        set @currentvalue = @currentvalue + 1

        update dbo.autonumber_settings
        set autonumber_currentvalue = @currentvalue
        where lower(autonumber_type) = lower(@type)

        set @out = cast(@prefix as nvarchar(10)) + cast(@currentvalue as nvarchar(50))

        select @out as value
    end
    else
        select '' as value
Now, there is another procedure that accesses the same table that duplicates orders, copying both the header and the lines. On occasion, the duplication results in duplicate line numbers. Here is a piece of that procedure:
BEGIN TRAN

IF exists
(
    SELECT *
    FROM autonumber_settings
    WHERE autonumber_type = 'SalesOrderDetail'
)
BEGIN
    SELECT @prefix = ISNULL(autonumber_prefix, ''),
           @current_value = CAST(autonumber_currentvalue AS INTEGER)
    FROM autonumber_settings
    WHERE autonumber_type = 'SalesOrderDetail'

    SET @new_auto_number = @current_value + @number_of_lines

    UPDATE dbo.autonumber_settings
    SET autonumber_currentvalue = @new_auto_number
    WHERE autonumber_type = 'SalesOrderDetail'
END

COMMIT TRAN
Any ideas on why the two procedures don't seem to play well together, occasionally giving lines created from scratch the same numbers as lines created by duplication?
This is a race condition in your autonumber assignment. Two executions have the potential to read out the same value before a new one is written back to the database.
The best way to fix this is to use an identity column and let SQL Server handle the autonumber assignments.
Barring that, you could use sp_getapplock to serialize access to autonumber_settings.
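A minimal sketch of the sp_getapplock route (the resource name, timeout, and error handling are illustrative):

DECLARE @rc int;
BEGIN TRAN;

-- Only one session at a time can hold this application lock;
-- all other callers wait in line.
EXEC @rc = sp_getapplock
    @Resource    = 'autonumber_settings',
    @LockMode    = 'Exclusive',
    @LockOwner   = 'Transaction',  -- released automatically at COMMIT/ROLLBACK
    @LockTimeout = 10000;          -- ms; a negative @rc means the lock was not acquired

IF @rc >= 0
BEGIN
    -- the read-increment-update sequence is safe here
END

COMMIT TRAN;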
You could use repeatable read on the selects. That will hold the row lock until you update the value and commit.
Add WITH (REPEATABLEREAD, ROWLOCK) after the FROM clause of each SELECT.
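Applied to the SELECT in GetAutoNumber, one possible shape is sketched below. Note that the hint only holds the lock to the end of an explicit transaction, and UPDLOCK is a common addition (a caveat, not something stated in the answer above) so that two concurrent readers cannot both take shared locks and then deadlock on the update:

BEGIN TRAN;

SELECT @prefix       = isnull(autonumber_prefix, ''),
       @currentvalue = autonumber_currentvalue
FROM autonumber_settings WITH (REPEATABLEREAD, ROWLOCK, UPDLOCK)
WHERE lower(autonumber_type) = lower(@type);

-- ... increment and UPDATE as in the procedure above ...

COMMIT TRAN; -- the row lock is held until here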

TSQL Trigger Not Saving Variables and/or not Executing Properly

I'm having trouble getting a T-SQL trigger to even work correctly. I've run it through the debugger, and according to SQL Server Management Studio it's not setting any of the variables. The damnedest thing is that the trigger itself executes without errors (it just says 'execution successful').
The code is as follows (it's a work in progress... just getting myself familiar):
USE TestDb

IF EXISTS (SELECT name FROM sysobjects
           WHERE name = 'OfficeSalesQuotaUpdate' AND type = 'TR')
    DROP TRIGGER OfficeSalesQuotaUpdate
GO

CREATE TRIGGER OfficeSalesQuotaUpdate
ON SalesReps
AFTER UPDATE, DELETE, INSERT
AS
    DECLARE @sales_difference int, @quota_difference int
    DECLARE @sales_original int, @quota_original int
    DECLARE @sales_new int, @quota_new int
    DECLARE @officeid int
    DECLARE @salesrepid int

    --UPDATE(Sales) returns true for INSERT and UPDATE.
    --Not for DELETE though.
    IF ((SELECT COUNT(*) FROM inserted) = 0)
        SET @salesrepid = (SELECT SalesRep FROM deleted)
    ELSE
        SET @salesrepid = (SELECT SalesRep FROM inserted)

    --If you include the @salesrepid variable, it does not work. Doesn't even
    --print out the 'This should work' line.
    PRINT 'This should work...' --+ convert(char(30), @salesrepid)

    IF (@salesrepid = NULL)
        PRINT 'SalesRepId is null'
    ELSE
        PRINT 'SalesRepId is not null'

    PRINT convert(char(50), @salesrepid)

    SET @officeid = (SELECT RepOffice
                     FROM SalesReps
                     WHERE SalesRep = @salesrepid)

    SELECT @sales_original = (SELECT Sales FROM deleted)
    SELECT @sales_new = (SELECT Sales FROM inserted)

    --Sales can not be null, so we'll remove this later.
    --Use this as a template for quota though, since that can be null.
    IF (@sales_new = null)
    BEGIN
        SET @sales_new = 0
    END
    IF (@sales_original = 0)
    BEGIN
        SET @sales_original = 0
    END

    SET @sales_difference = @sales_new - @sales_original

    UPDATE Offices
    SET Sales = Sales + @sales_difference
    WHERE Offices.Office = @officeid
GO
So, any tips? I'm completely stumped on this one. Thanks in advance.
Your main problem seems to be that there is a difference between @foo = NULL and @foo IS NULL:

declare @i int
set @i = null -- redundant, but explicit

if @i = null print 'equals'
if @i is null print 'is'
The 'This should work' PRINT statement doesn't work because concatenating a NULL with a string gives a NULL, and PRINT NULL doesn't print anything.
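To see it with the names from the question:

DECLARE @salesrepid int;  -- NULL until assigned
PRINT 'This should work...' + convert(char(30), @salesrepid);
-- string + NULL yields NULL, and PRINT NULL prints nothing at all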
As for actually setting the value of @salesrepid, it seems most likely that the inserted and/or deleted table is in fact empty. What statements are you using to test the trigger? And have you printed out the COUNT(*) value?
You should also consider (if you haven't already) what happens if someone changes more than one row at once. Your current code assumes that only one row is changed at a time, which may be a reasonable assumption in your environment, but it can easily break if someone bulk loads data or does other 'batch processing'.
Finally, you should always mention your MSSQL version and edition; it can be relevant for some syntax questions.
You should replace the body of the trigger with something like this:
;WITH Totals AS (
SELECT RepOffice,SUM(Sales) as Sales FROM inserted GROUP BY RepOffice
UNION ALL
SELECT RepOffice,-SUM(Sales) FROM deleted GROUP BY RepOffice
), SalesDelta AS (
SELECT RepOffice,SUM(Sales) as Delta FROM Totals GROUP BY RepOffice
)
UPDATE o
SET Sales = Sales + sd.Delta
FROM
Offices o
inner join
SalesDelta sd
on
o.Office = sd.RepOffice
This will adequately cope with multiple rows in inserted and deleted. I'm assuming SalesRep is the primary key of the SalesReps table.
Updated above, to cope with an UPDATE changing the RepOffice of a particular sales rep (which the original doesn't, presumably, get correct either).
Just a suggestion... have you tried putting BEGIN and END around the body of the trigger, after the AS?

Getting a null value in a variable in SQL Server

Strange situation: in a trigger I assign a column value to a variable, but it gives an exception while inserting into another table using that variable. E.g.:

select @srNo = A.SrNo from A where id = 123;
insert into B (SRNO) values (@srNo) -- here it gives null

If I run the above select query in the query pane it works fine, but in the trigger it gives me null. Any suggestions?
ALTER PROCEDURE ProcessData
    @Id decimal(38,0),
    @XMLString varchar(1000),
    @Phone varchar(20)
AS
DECLARE
    @idoc int,
    @iError int,
    @Serial varchar(15),
    @PhoneNumber varchar(15)
BEGIN
    COMMIT TRAN
    EXEC sp_xml_preparedocument @idoc OUTPUT, @XMLString
    SELECT @iError = @@Error
    IF @iError = 0
    BEGIN
        SELECT @Serial = convert(text, [text]) FROM OPENXML (@idoc, '', 1) where nodetype = 3 and ParentId = 2
        IF @Serial = Valid
        BEGIN
            BEGIN TRAN INVALID
            begin try
                Declare @phoneId decimal(38,0);
                SELECT @phoneId = B.phoneId FROM A
                INNER JOIN B ON A.Id = B.Id WHERE A.PhoneNumber like '%' + @SenderPhone + '%'
                print @phoneId; -- gives null
            end try
            begin catch
                print Error_Message();
            end catch
You should work with sets of rows in triggers, so that if multiple rows are affected your code handles all of them. This will only INSERT when the value is not null:
INSERT INTO B (SRNO)
SELECT A.SrNo FROM A where id=123 AND A.SrNo IS NOT NULL
Neo, are you sure that SELECT SrNo FROM A WHERE id = 123 returns data?
I mean, the value of @srNo will not change (and will therefore remain NULL) if there are no records with id = 123.
When you eliminate the impossible, whatever remains, however improbable, must be the truth.
The obvious answer is that at the time the trigger fires, SrNo is null or Id 123 does not exist. Is this for an insert trigger and is it the case that you are trying to take something that was just inserted into table A and push it into table B? If so, you should query against the inserted table:
-- from an INSERT trigger on table A
Insert B( SRNO )
Select SRNO
From inserted
Where Id = 123
If this is not the case, then we'd need to see the details of the Trigger itself.
Solved it: there was an error in the XML string reading function, e.g. in the OPENXML pattern matching.
Thanks to all of you for the help... :)
