SQL Server 2008: Running trigger after INSERT, UPDATE locks original table

I have a serious performance problem.
I have a database with, as far as this problem is concerned, 2 tables.
The first table contains strings with some global information. The second table contains each string stripped down to its individual words, so every string is effectively indexed in the second table, word by word.
The validity of the data in the second table is less important than the validity of the data in the first table.
Since the first table can grow to around 1*10^6 records, and the second table, with an average of about 10 words per string, to around 1*10^7 records, I read the second table with NOLOCK; this leaves me free to insert new records without locking it (expect many reads on both tables).
I have a script which keeps adding and updating rows in the first table via a MERGE statement. On average about 20 strings are merged at a time, and the script runs roughly once every 5 seconds.
On the first table I have a trigger which is invoked on INSERT or UPDATE; it takes the newly inserted or updated data and calls a stored procedure on it which makes sure the data is indexed in the second table (this takes significant time).
The problem is that with the trigger disabled, reading the first table takes a few milliseconds. With the trigger enabled, however, if you are unlucky enough to read the first table while it is being updated, our web server gives you a timeout after 10 seconds (which is far too long anyway).
I can guess from this that while the trigger runs, the first table is kept (partially) locked until the trigger completes.
If I'm right, is there an easy way around this?
Thanks in advance!
As requested:
ALTER TRIGGER [dbo].[OnFeedItemsChanged]
ON [dbo].[FeedItems]
AFTER INSERT, UPDATE
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;
    DECLARE @id int;
    SELECT @id = ID FROM INSERTED;
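    -- NOTE (added comment): INSERTED holds one row per affected row of the
    -- statement; this SELECT keeps only an arbitrary one of them.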
    IF @id IS NOT NULL
    BEGIN
        DECLARE @title nvarchar(MAX);
        SELECT @title = Title FROM INSERTED;
        DECLARE @description nvarchar(MAX);
        SELECT @description = [Description] FROM INSERTED;
        SELECT @title = dbo.RemoveNonAlphaCharacters(@title);
        SELECT @description = dbo.RemoveNonAlphaCharacters(@description);
        -- Insert statements for trigger here
        EXEC dbo.usp_index_itemstring @id, @title;
        EXEC dbo.usp_index_itemstring @id, @description;
    END
END
The FeedItems table is populated by this query:
MERGE INTO FeedItems i
USING #newitems d ON i.Service = d.Service AND i.GUID = d.GUID
WHEN matched THEN UPDATE
SET i.Title = d.Title,
i.Description = d.Description,
i.Uri = d.Uri,
i.Readers = d.Readers
WHEN NOT matched THEN INSERT
(Service, Title, Uri, GUID, Description, Readers)
VALUES
(d.Service, d.Title, d.Uri, d.GUID, d.Description, d.Readers);
The sproc usp_index_itemstring populates the second table, and executing it does indeed take its time. The problem is that while this trigger executes, queries against the FeedItems table mostly time out (even queries that don't use the second table).
First table:
USE [ICI]
GO
/****** Object: Table [dbo].[FeedItems] Script Date: 04/09/2010 15:03:31 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[FeedItems](
[ID] [int] IDENTITY(1,1) NOT NULL,
[Service] [int] NOT NULL,
[Title] [nvarchar](max) NULL,
[Uri] [nvarchar](max) NULL,
[Description] [nvarchar](max) NULL,
[GUID] [nvarchar](255) NULL,
[Inserted] [smalldatetime] NOT NULL,
[Readers] [int] NOT NULL,
CONSTRAINT [PK_FeedItems] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[FeedItems] WITH CHECK ADD CONSTRAINT [FK_FeedItems_FeedServices] FOREIGN KEY([Service])
REFERENCES [dbo].[FeedServices] ([ID])
ON DELETE CASCADE
GO
ALTER TABLE [dbo].[FeedItems] CHECK CONSTRAINT [FK_FeedItems_FeedServices]
GO
ALTER TABLE [dbo].[FeedItems] ADD CONSTRAINT [DF_FeedItems_Inserted] DEFAULT (getdate()) FOR [Inserted]
GO
Second table:
USE [ICI]
GO
/****** Object: Table [dbo].[FeedItemPhrases] Script Date: 04/09/2010 15:04:47 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[FeedItemPhrases](
[FeedItem] [int] NOT NULL,
[Phrase] [int] NOT NULL,
[Count] [smallint] NOT NULL,
CONSTRAINT [PK_FeedItemPhrases] PRIMARY KEY CLUSTERED
(
[FeedItem] ASC,
[Phrase] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[FeedItemPhrases] WITH CHECK ADD CONSTRAINT [FK_FeedItemPhrases_FeedItems] FOREIGN KEY([FeedItem])
REFERENCES [dbo].[FeedItems] ([ID])
ON UPDATE CASCADE
ON DELETE CASCADE
GO
ALTER TABLE [dbo].[FeedItemPhrases] CHECK CONSTRAINT [FK_FeedItemPhrases_FeedItems]
GO
ALTER TABLE [dbo].[FeedItemPhrases] WITH CHECK ADD CONSTRAINT [FK_FeedItemPhrases_Phrases] FOREIGN KEY([Phrase])
REFERENCES [dbo].[Phrases] ([ID])
ON UPDATE CASCADE
ON DELETE CASCADE
GO
ALTER TABLE [dbo].[FeedItemPhrases] CHECK CONSTRAINT [FK_FeedItemPhrases_Phrases]
GO
And more:
ALTER PROCEDURE [dbo].[usp_index_itemstring]
    -- Add the parameters for the stored procedure here
    @item int,
    @text nvarchar(MAX)
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;
    -- DECLARE a table containing all words within the text
    DECLARE @tempPhrases TABLE
    (
        [Index] int,
        [Phrase] NVARCHAR(256)
    );
    -- extract each word from text and store it in the temp table
    WITH Pieces(pn, start, [stop]) AS
    (
        SELECT 1, 1, CHARINDEX(' ', @text)
        UNION ALL
        SELECT pn + 1, CAST([stop] + 1 AS INT), CHARINDEX(' ', @text, [stop] + 1)
        FROM Pieces
        WHERE [stop] > 0
    )
    INSERT INTO @tempPhrases
    SELECT pn, SUBSTRING(@text, start, CASE WHEN [stop] > 0 THEN [stop] - start ELSE LEN(@text) END) AS s
    FROM Pieces
    OPTION (MAXRECURSION 0);
    WITH CombinedPhrases ([Phrase]) AS
    (
        -- SELECT ALL 2-WORD COMBINATIONS
        SELECT w1.[Phrase] + ' ' + w2.[Phrase]
        FROM @tempPhrases w1
        JOIN @tempPhrases w2 ON w1.[Index] + 1 = w2.[Index]
        UNION ALL -- SELECT ALL 3-WORD COMBINATIONS
        SELECT w1.[Phrase] + ' ' + w2.[Phrase] + ' ' + w3.[Phrase]
        FROM @tempPhrases w1
        JOIN @tempPhrases w2 ON w1.[Index] + 1 = w2.[Index]
        JOIN @tempPhrases w3 ON w1.[Index] + 2 = w3.[Index]
        UNION ALL -- SELECT ALL 4-WORD COMBINATIONS
        SELECT w1.[Phrase] + ' ' + w2.[Phrase] + ' ' + w3.[Phrase] + ' ' + w4.[Phrase]
        FROM @tempPhrases w1
        JOIN @tempPhrases w2 ON w1.[Index] + 1 = w2.[Index]
        JOIN @tempPhrases w3 ON w1.[Index] + 2 = w3.[Index]
        JOIN @tempPhrases w4 ON w1.[Index] + 3 = w4.[Index]
    )
    -- ADD THE COMBINED PHRASES TO THE WORD LIST
    INSERT INTO @tempPhrases
    SELECT 0, [Phrase] FROM CombinedPhrases;
    -- DELETE PHRASES WHICH ARE EXCLUDED
    DELETE FROM @tempPhrases
    WHERE [Phrase] IN
    (
        SELECT [Text] FROM Phrases p
        JOIN ExcludedPhrases ex
            ON ex.ID = p.ID
    );
    -- ONLY INSERT THE NEW PHRASES IN THE Phrases TABLE
    MERGE INTO Phrases p
    USING
    (
        SELECT DISTINCT Phrase FROM @tempPhrases
    ) t
    ON p.[Text] = t.Phrase
    WHEN NOT MATCHED THEN
        INSERT VALUES (t.Phrase);
    -- Finally create relations between the phrases and feeditem,
    MERGE INTO FeedItemPhrases p
    USING
    (
        SELECT @item AS [Item], MIN(p.[ID]) AS Phrase, COUNT(t.[Phrase]) AS [Count]
        FROM Phrases p WITH (NOLOCK)
        JOIN @tempPhrases t ON p.[Text] = t.[Phrase]
        GROUP BY t.[Phrase]
    ) t
    ON p.FeedItem = t.Item
        AND p.Phrase = t.Phrase
    WHEN MATCHED THEN
        UPDATE SET p.[Count] = t.[Count]
    WHEN NOT MATCHED THEN
        INSERT VALUES (t.[Item], t.Phrase, t.[Count]);
END
and more:
ALTER FUNCTION [dbo].[RemoveNonAlphaCharacters](@Temp NVarChar(max))
RETURNS NVarChar(max)
AS
BEGIN
    SELECT @Temp = REPLACE(@Temp, '%20', ' ');
    WHILE PatIndex('%[^a-z ]%', @Temp) > 0
        SET @Temp = Stuff(@Temp, PatIndex('%[^a-z ]%', @Temp), 1, '')
    RETURN @Temp
END

I looked around on the internet and couldn't find any way to make the trigger run without claiming a lock. I therefore chose to do the inserts via a stored procedure, which in turn performs the logic previously found in the trigger. This allowed me to execute the content of the trigger in a transaction AFTER the actual data was inserted and the insertion lock was lifted.
Hope this helps!
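To illustrate the shape of that fix, here is a minimal sketch (the procedure name, the OUTPUT capture, and the cursor plumbing are my own illustration, not code from the original post; it assumes the #newitems temp table used by the MERGE above, and that the OnFeedItemsChanged trigger is dropped or disabled):

CREATE PROCEDURE dbo.usp_MergeAndIndexFeedItems -- hypothetical name
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @changed TABLE (ID int, Title nvarchar(MAX), [Description] nvarchar(MAX));
    -- The MERGE runs as its own autocommit transaction, so its locks on
    -- FeedItems are released as soon as the statement completes.
    MERGE INTO FeedItems i
    USING #newitems d ON i.Service = d.Service AND i.GUID = d.GUID
    WHEN MATCHED THEN UPDATE
        SET i.Title = d.Title, i.Description = d.Description,
            i.Uri = d.Uri, i.Readers = d.Readers
    WHEN NOT MATCHED THEN INSERT
        (Service, Title, Uri, GUID, Description, Readers)
        VALUES (d.Service, d.Title, d.Uri, d.GUID, d.Description, d.Readers)
    OUTPUT inserted.ID, inserted.Title, inserted.[Description] INTO @changed;
    -- Index the affected rows afterwards, outside the insert lock.
    DECLARE @id int, @title nvarchar(MAX), @description nvarchar(MAX);
    DECLARE c CURSOR LOCAL FAST_FORWARD FOR
        SELECT ID, Title, [Description] FROM @changed;
    OPEN c;
    FETCH NEXT FROM c INTO @id, @title, @description;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SELECT @title = dbo.RemoveNonAlphaCharacters(@title),
               @description = dbo.RemoveNonAlphaCharacters(@description);
        EXEC dbo.usp_index_itemstring @id, @title;
        EXEC dbo.usp_index_itemstring @id, @description;
        FETCH NEXT FROM c INTO @id, @title, @description;
    END
    CLOSE c;
    DEALLOCATE c;
END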

Related

Stored procedure that needs to return unique values in a custom format but seems to return duplicates

I have a stored procedure in Microsoft SQL Server that should return unique values based on a custom format: SSSSTT99999, where SSSS and TT are based on parameters and 99999 is a unique sequence based on the values of SSSS and TT. I need to store the last sequence for each SSSS and TT in a table so I can retrieve the next sequence on the next call. The problem with this code is that in a multi-user environment, at least two simultaneous calls may generate the same value. How can I make sure that each call to this stored procedure gets a unique value?
CREATE PROCEDURE GenRef
    @TT nvarchar(30),
    @SSSS nvarchar(50)
AS
declare @curseq as integer
set @curseq = (select sequence from DocSequence where
    docsequence.TT = @TT and
    DocSequence.SSSS = @SSSS)
if @curseq is null
begin
    set @curseq = 1
    insert docsequence (id, TT, SSSS, sequence) values
        (newid(), @TT, @SSSS, 1)
end
else
begin
    update DocSequence set Sequence = @curseq + 1 where
        docsequence.TT = @TT and
        DocSequence.SSSS = @SSSS
end
declare @curtr varchar(30)
set @curtr = RIGHT('0000' + @SSSS, 4)
    + @TT
    + RIGHT('00000' + convert(varchar(5), @curseq), 5)
select @curtr
GO
updated code with transactions:
ALTER PROCEDURE [dbo].[GenTRNum]
    @TRType nvarchar(50),
    @BranchCode nvarchar(50)
AS
declare @curseq as integer
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
begin transaction
if not exists (select top 1 sequence from DocSequence where
    docsequence.DocType = @TRType and
    DocSequence.BranchCode = @BranchCode)
begin
    insert docsequence (id, doctype, sequence, branchcode) values
        (newid(), @TRType, 1, @BranchCode)
end
else
begin
    update DocSequence set Sequence = sequence + 1 where
        docsequence.DocType = @TRType and
        DocSequence.BranchCode = @BranchCode
end
commit
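-- NOTE (added comment): reading the sequence back after this COMMIT
-- re-opens the race; another call can increment it before the SELECT below runs.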
set @curseq = (select top 1 sequence from DocSequence where
    docsequence.DocType = @TRType and
    DocSequence.BranchCode = @BranchCode)
declare @curtr varchar(30)
set @curtr = RIGHT('0000' + @BranchCode, 4)
    + @TRType
    + RIGHT('00000' + convert(varchar(5), @curseq), 5)
select @curtr
You can handle this at the application level by using threading, assuming you have a single application server.
Suppose you have a method GetUniqueValue which executes this SP.
What you should do is use threading: have that method use database transactions with READ COMMITTED. Now, if two users call GetUniqueValue at exactly the same time, say 2019-08-30 10:59:38.173, your application will create two threads, and each thread will try to open a transaction; only one will open the transaction around the SP while the other waits.
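If you would rather serialize inside the database than in application threads (my suggestion, not the answerer's), sp_getapplock can guard the whole read-increment-format sequence; a sketch:

BEGIN TRANSACTION;
-- Exclusive app lock scoped to the transaction; concurrent callers
-- block here until the current holder commits.
EXEC sp_getapplock @Resource = 'GenRef', @LockMode = 'Exclusive',
    @LockOwner = 'Transaction', @LockTimeout = 10000;
-- ... read and increment DocSequence, build the formatted value ...
COMMIT; -- releases the app lock with the transaction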
Here is how I would solve this task:
Table structure, unique indexes are important
--DROP TABLE IF EXISTS dbo.DocSequence;
CREATE TABLE dbo.DocSequence (
RowID INT NOT NULL IDENTITY(1,1)
CONSTRAINT PK_DocSequence PRIMARY KEY CLUSTERED,
BranchCode CHAR(4) NOT NULL,
DocType CHAR(2) NOT NULL,
SequenceID INT NOT NULL
CONSTRAINT DF_DocSequence_SequenceID DEFAULT(1)
CONSTRAINT CH_DocSequence_SequenceID CHECK (SequenceID BETWEEN 1 AND 999999)
)
CREATE UNIQUE INDEX UQ_DocSequence_BranchCode_DocType
ON dbo.DocSequence (BranchCode,DocType) INCLUDE(SequenceID);
GO
Procedure:
CREATE OR ALTER PROCEDURE dbo.GenTRNum
    @BranchCode VARCHAR(4),
    @DocType VARCHAR(2),
    --
    @curseq INT = NULL OUTPUT,
    @curtr VARCHAR(30) = NULL OUTPUT
AS
SELECT @curseq = NULL,
    @curtr = NULL,
    @BranchCode = RIGHT(CONCAT('0000', @BranchCode), 4),
    @DocType = RIGHT(CONCAT('00', @DocType), 2)
-- Atomic operation, no transaction needed
UPDATE dbo.DocSequence
SET @curseq = SequenceID += 1
WHERE DocType = @DocType
    AND BranchCode = @BranchCode;
IF @curseq IS NULL -- Not found, create new one
BEGIN
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    BEGIN TRAN
    INSERT dbo.docsequence (doctype, branchcode)
    SELECT @DocType, @BranchCode
    WHERE NOT EXISTS (SELECT 1 FROM dbo.DocSequence WHERE DocType = @DocType AND BranchCode = @BranchCode)
    IF @@ROWCOUNT = 1
    BEGIN
        COMMIT;
        SET @curseq = 1
    END
    ELSE
    BEGIN
        ROLLBACK;
        UPDATE dbo.DocSequence
        SET @curseq = SequenceID += 1
        WHERE DocType = @DocType
            AND BranchCode = @BranchCode;
    END
END
SET @curtr = @BranchCode + @DocType + RIGHT(CONCAT('00000', @curseq), 5)
RETURN
GO
I did some tests to make sure it works as described; you can use them if you need.
-- Log table just for test
-- DROP TABLE IF EXISTS dbo.GenTRNumLog;
CREATE TABLE dbo.GenTRNumLog(
RowID INT NOT NULL IDENTITY(1,1) PRIMARY KEY CLUSTERED,
SPID SMALLINT NOT NULL,
Cycle INT NULL,
dt DATETIME NULL,
sq INT NULL,
tr VARCHAR(30) NULL,
DurationMS INT NULL
)
This script should be opened in several separate SQL Server Management Studio windows and run in all of them almost simultaneously:
-- Competitive insertion test, run it in 5 threads simultaneously
SET NOCOUNT ON
DECLARE
    @dt DATETIME,
    @start DATETIME,
    @DurationMS INT,
    @Cycle INT,
    @BC VARCHAR(4),
    @DC VARCHAR(2),
    @SQ INT,
    @TR VARCHAR(30);
SELECT @Cycle = 0,
    @start = GETDATE();
WHILE DATEADD(SECOND, 60, @start) > GETDATE() -- one minute test, new @DocType every second
BEGIN
    SET @dt = GETDATE();
    SELECT @BC = FORMAT(@dt, 'HHmm'), -- hours + minutes as @BranchCode
        @DC = FORMAT(@dt, 'ss'),      -- seconds as @DocType
        @Cycle += 1
    EXEC dbo.GenTRNum @BranchCode = @BC, @DocType = @DC, @curseq = @SQ OUTPUT, @curtr = @TR OUTPUT
    SET @DurationMS = DATEDIFF(ms, @dt, GETDATE());
    INSERT INTO dbo.GenTRNumLog (SPID, Cycle, dt, sq, tr, DurationMS)
    SELECT SPID = @@SPID, Cycle = @Cycle, dt = @dt, sq = @SQ, tr = @TR, DurationMS = @DurationMS
END
/*
Check test results
SELECT *
FROM dbo.DocSequence
SELECT sq = MAX(sq), DurationMS = MAX(DurationMS)
FROM dbo.GenTRNumLog
SELECT * FROM dbo.GenTRNumLog
ORDER BY tr
*/

SQL Server trigger(s) to maintain one IsPrimary/IsDefault row per FK

We have a few tables with an IsPrimary column, e.g. many members belonging to an account. The requirement is if an account has one or more members, one and only one of them must have IsPrimary = 1. We want to achieve this using triggers for both maximum assurance of data integrity and ease of use from applications. But due to the batch nature of triggers, I'm struggling to accomplish it and in the most efficient way.
So far I have an insert/delete trigger (see below) that handles inserting a new primary record or deleting the primary record. Where I got stuck is ensuring the first record inserted has IsPrimary=1 and then realizing there could be multiple modifications to the same account in the batch...
Does anyone have any experience or and example with something like this?
ALTER TRIGGER dbo.trg_PrimaryTest_InsertDelete
ON dbo.PrimaryTest
AFTER INSERT,DELETE
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
--SET NOCOUNT ON;
PRINT 'executing trigger'
--If inserting a primary, set all others to 0
UPDATE PrimaryTest
SET IsPrimary = 0
FROM inserted
INNER JOIN PrimaryTest ON inserted.fk_ID = PrimaryTest.fk_ID
WHERE inserted.IsPrimary = 1
AND PrimaryTest.pk_ID <> inserted.pk_ID
AND PrimaryTest.IsPrimary = 1
--If deleting the primary, set most recent remaining phone to 1
UPDATE PrimaryTest
SET IsPrimary = 1
WHERE PrimaryTest.pk_ID IN (
SELECT TOP 1 PrimaryTest.pk_ID
FROM deleted
INNER JOIN PrimaryTest ON deleted.fk_ID = PrimaryTest.fk_ID
WHERE deleted.IsPrimary = 1
ORDER BY PrimaryTest.CreatedDate DESC
)
PRINT 'trigger executed'
END
GO
Table ddl:
CREATE TABLE [dbo].[PrimaryTest](
[pk_ID] [uniqueidentifier] NOT NULL,
[Value] [nvarchar](50) NULL,
[CreatedDate] [datetime2](7) NOT NULL,
[IsPrimary] [bit] NOT NULL,
[fk_ID] [int] NOT NULL,
CONSTRAINT [PK_PrimaryTest] PRIMARY KEY CLUSTERED
(
[pk_ID] ASC
)
)
GO
EDIT:
I think this might work, or is at least headed in the right direction. I think it may be reading more records than it needs to in the second CTE case, though. (Note I added UPDATE to the trigger events.)
ALTER TRIGGER dbo.trg_PrimaryTest_InsertDelete
ON dbo.PrimaryTest
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
--SET NOCOUNT ON;
PRINT 'executing trigger'
--If setting a new primary, set all others to 0
UPDATE PrimaryTest
SET IsPrimary = 0
FROM inserted
INNER JOIN PrimaryTest ON inserted.fk_ID = PrimaryTest.fk_ID
WHERE inserted.IsPrimary = 1
AND PrimaryTest.pk_ID <> inserted.pk_ID
AND PrimaryTest.IsPrimary = 1
--Set IsPrimary on any modified sets left without a primary
;WITH cte
AS
(
SELECT *, ROW_NUMBER() OVER(PARTITION BY fk_ID ORDER BY CreatedDate DESC) as RowNum
FROM PrimaryTest p1
WHERE fk_ID IN (SELECT FK_ID FROM inserted UNION SELECT FK_ID FROM deleted) --Only look at modified sets
AND NOT EXISTS (SELECT NULL FROM PrimaryTest p2 WHERE p2.fk_ID = p1.fk_ID AND p2.IsPrimary = 1) --Select rows in a set without an IsPrimary=1 record
)
UPDATE cte
SET IsPrimary = 1
WHERE RowNum = 1
PRINT 'trigger executed'
END
GO
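As an aside (my addition, not part of the original question): on SQL Server 2008 and later, a filtered unique index can declaratively enforce the at-most-one-primary half of the rule, so the trigger only has to handle electing a new primary:

-- At most one IsPrimary = 1 row per fk_ID; a second primary now fails
-- at the statement level instead of relying on trigger logic alone.
CREATE UNIQUE NONCLUSTERED INDEX UQ_PrimaryTest_OnePrimaryPerFK
ON dbo.PrimaryTest (fk_ID)
WHERE IsPrimary = 1;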

How to speedUp Random selecting in SQL Server

I have a table for phone numbers like this :
ID PhoneNumber Enabled GrupID CountryID
----------- -------------------- ------- ------ -----------
10444 ***001000999 1 NULL 1
10445 ***001000998 1 NULL 1
10446 ***001000994 1 NULL 1
10447 ***001000990 1 NULL 1
10448 ***001000989 1 NULL 1
This table has 68,992,507 rows.
I want to select some random phone numbers from it.
I can get my random numbers with the stored procedure below: it selects random numbers, inserts them into a table variable, and then marks the selected numbers.
CREATE proc [dbo].[Mysp_GetRandom]
    @countryid int,
    @count int
as
declare @tbl table([ID] [int],
    [PhoneNumber] [nchar](20) NOT NULL,
    [Enabled] [bit] NULL,
    [GrupID] [tinyint] NULL,
    [CountryID] [int] NULL)
INSERT INTO @tbl
SELECT TOP (@count) *
FROM tblPhoneNumber
WHERE CountryID = @countryid
    AND GrupID is null
ORDER BY binary_checksum(ID * rand())
UPDATE tblPhoneNumber
SET GrupID = 1
WHERE ID IN (SELECT ID FROM @tbl)
SELECT * FROM @tbl
The problem is that the query takes a long time to run. For example, this call takes 12:30 minutes:
DECLARE @return_value int
EXEC @return_value = [dbo].[Mysp_GetRandom]
    @countryid = 14, @count = 3
SELECT 'Return Value' = @return_value
and I have an index on this table:
CREATE NONCLUSTERED INDEX [NonClusteredIndex-20150415-172433]
ON [dbo].[tblPhoneNumber] ([CountryID] ASC)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF,
ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
The execution plan is as below (plan image not reproduced here):
Thanks ...
Add GrupID as a key column of your nonclustered index NonClusteredIndex-20150415-172433, and add the other required columns in its INCLUDE clause.
The execution plan is already giving you the same hint as a missing-index suggestion.
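A sketch of the reshaped index (the INCLUDE list is my guess at what the procedure's SELECT * needs; adjust it to the columns you actually return):

CREATE NONCLUSTERED INDEX [NonClusteredIndex-20150415-172433]
ON [dbo].[tblPhoneNumber] ([CountryID] ASC, [GrupID] ASC)
INCLUDE ([PhoneNumber], [Enabled])
WITH (DROP_EXISTING = ON);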
P.S Mark it as answer if it helped you.
Look at your query plan: the first INSERT statement takes almost 100% of the time, and 70% of that is sorting. There's not much you can do about it, since you're already using BINARY_CHECKSUM. Maybe the table was filled in a random enough manner to get away with taking consecutive rows starting from a random offset (note that OFFSET ... FETCH requires SQL Server 2012 or later), like this:
SELECT ID FROM tblPhoneNumber
WHERE CountryID = @countryid AND GrupID IS NULL
ORDER BY ID
OFFSET CONVERT(int, rand() * (SELECT count(*) FROM tblPhoneNumber) - @count - 1) ROWS
FETCH NEXT @count ROWS ONLY
You should replace your ORDER BY clause.
You could generate fairly random ids:
declare @count int = 100
;with ids(id, hex) as (
    select 1, convert(bigint, convert(varbinary, '0x' + right(convert(char(36), newid()), 6), 1))
    union all
    select id + 1, convert(bigint, convert(varbinary, '0x' + right(convert(char(36), newid()), 6), 1))
    from ids
    where id + 1 <= @count
)
select * from ids
option (MAXRECURSION 0)
Then you can join that list to your table on ID, as sketched below.
You should review your indexes (as mentioned by others) and add an index on the phone ID.
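For instance (my sketch, not the answerer's; it assumes the generated values land inside the table's ID range):

declare @count int = 100
;with ids(id, hex) as (
    select 1, convert(bigint, convert(varbinary, '0x' + right(convert(char(36), newid()), 6), 1))
    union all
    select id + 1, convert(bigint, convert(varbinary, '0x' + right(convert(char(36), newid()), 6), 1))
    from ids
    where id + 1 <= @count
)
select t.*
from ids i
join tblPhoneNumber t
    on t.ID = i.hex -- match each random value against a phone ID
option (maxrecursion 0)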

Need to speed up SQL Server SP that uses system metadata

Let me apologize in advance for the length of this question. I don't see how to ask it without giving all the definitions.
I've inherited a SQL Server 2005 database that includes a homegrown implementation of change tracking. Through triggers, changes to virtually every field in the database are stored in a set of three tables. In the application for this database, the user can request the history of various items, and what's returned is not just changes to the item itself, but also changes in related tables. The problem is that in some cases, it's painfully slow, and in some cases, the request eventually crashes the application. The client has also reported other users having problems when someone requests history.
The tables that store the change data are as follows:
CREATE TABLE [dbo].[tblSYSChangeHistory](
[id] [bigint] IDENTITY(1,1) NOT NULL,
[date] [datetime] NULL,
[obj_id] [int] NULL,
[uid] [varchar](50) NULL
This table tracks the tables that have been changed. Obj_id is the value that Object_ID() returns.
CREATE TABLE [dbo].[tblSYSChangeHistory_Items](
[id] [bigint] IDENTITY(1,1) NOT NULL,
[h_id] [bigint] NOT NULL,
[item_id] [int] NULL,
[action] [tinyint] NULL
This table tracks the items that have been changed. h_id is a foreign key to tblSYSChangeHistory. item_id is the PK of the changed item in the specified table. action indicates insert, delete or change.
CREATE TABLE [dbo].[tblSYSChangeHistory_Details](
[id] [bigint] IDENTITY(1,1) NOT NULL,
[i_id] [bigint] NOT NULL,
[col_id] [int] NOT NULL,
[prev_val] [varchar](max) NULL,
[new_val] [varchar](max) NULL
This table tracks the individual changes. i_id is a foreign key to tblSYSChangeHistory_Items. col_id indicates which column was changed, and prev_val and new_val indicate the original and new values for that field.
There's actually a fourth table that supports this architecture. tblSYSChangeHistory_Objects maps plain English descriptions of operations to particular tables in the database.
The code to look up the history for an item is incredibly convoluted. It's one branch of a very long SP. Relevant parameters are as follows:
@action varchar(50),
@obj_id bigint = 0,
@uid varchar(50) = '',
@prev_val varchar(MAX) = '',
@new_val varchar(MAX) = '',
@start_date datetime = '',
@end_date datetime = ''
I'm storing them to local variables right away (because I was able to significantly speed up another SP by doing so):
declare @iObj_id bigint,
    @cUID varchar(50),
    @cPrev_val varchar(max),
    @cNew_val varchar(max),
    @tStart_date datetime,
    @tEnd_date datetime
set @iObj_id = @obj_id
set @cUID = @uid
set @cPrev_val = @prev_val
set @cNew_val = @new_val
set @tStart_date = @start_date
set @tEnd_date = @end_date
And here's the code from that branch of the SP:
create table #r (obj_id int, item_id int, l tinyint)
create clustered index #ri on #r (obj_id, item_id)
insert into #r
select object_id(obj_name), @iObj_id, 0
from dbo.tblSYSChangeHistory_Objects
where obj_type = 'U' and descr = cast(@cPrev_val AS varchar(150))
declare @i tinyint, @cnt int
set @i = 1
while @i <= 4
begin
    insert into #r
    select obj_id, item_id, @i
    from dbo.vSYSChangeHistoryFK a with (nolock)
    where exists (select null from #r where obj_id = a.rel_obj_id and item_id = a.rel_item_id and l = @i - 1)
        and not exists (select null from #r where obj_id = a.obj_id and item_id = a.item_id)
    set @cnt = @@rowcount
    insert into #r
    select rel_obj_id, rel_item_id, @i
    from dbo.vSYSChangeHistoryFK a with (nolock)
    where object_name(obj_id) not in (<this is a list of particular tables in the database>)
        and exists (select null from #r where obj_id = a.obj_id and item_id = a.item_id and l between @i - 1 and @i)
        and not exists (select null from #r where obj_id = a.rel_obj_id and item_id = a.rel_item_id)
    set @i = case @cnt + @@rowcount when 0 then 100 else @i + 1 end
end
select date, obj_name, item, [uid], [action],
    pkey, item_id, id, key_obj_id into #tCH_R
from dbo.vSYSChangeHistory a with (nolock)
where exists (select null from #r where obj_id = a.obj_id and item_id = a.item_id)
    and (@cUID = '' or uid = @cUID)
    and (@cNew_val = '' or [action] = @cNew_val)
declare ch_item_cursor cursor for
    select distinct pkey, key_obj_id, item_id
    from #tCH_R
    where item = '' and pkey <> ''
open ch_item_cursor
fetch next from ch_item_cursor
    into @cPrev_val, @iObj_id, @iCol_id
while @@fetch_status = 0
begin
    set @SQLStr = 'select @val = ' + @cPrev_val +
        ' from ' + object_name(@iObj_id) + ' with (nolock)' +
        ' where id = @id'
    exec sp_executesql @SQLStr,
        N'@val varchar(max) output, @id int',
        @cNew_val output, @iCol_id
    update #tCH_R
    set item = @cNew_val
    where key_obj_id = @iObj_id
        and item_id = @iCol_id
    fetch next from ch_item_cursor
        into @cPrev_val, @iObj_id, @iCol_id
end
close ch_item_cursor
deallocate ch_item_cursor
select date, obj_name,
    cast(item AS varchar(254)) AS item,
    uid, [action],
    cast(id AS int) AS id
from #tCH_R
order by id
return
As you can see, the code uses a view. Here's that definition:
ALTER VIEW [dbo].[vSYSChangeHistoryFK]
AS
SELECT i.obj_id, i.item_id, c1.parent_object_id AS rel_obj_id, i2.item_id AS rel_item_id
FROM dbo.vSYSChangeHistoryItemsD AS i INNER JOIN
sys.foreign_key_columns AS c1 ON c1.referenced_object_id = i.obj_id AND c1.constraint_column_id = 1 INNER JOIN
dbo.vSYSChangeHistoryItemsD AS i2 ON c1.parent_object_id = i2.obj_id INNER JOIN
dbo.tblSYSChangeHistory_Details AS d1 ON d1.i_id = i.min_id AND d1.col_id = c1.referenced_column_id INNER JOIN
dbo.tblSYSChangeHistory_Details AS d1k ON d1k.i_id = i2.min_id AND d1k.col_id = c1.parent_column_id AND ISNULL(d1.new_val,
ISNULL(d1.prev_val, '')) = ISNULL(d1k.new_val, ISNULL(d1k.prev_val, '')) --LEFT OUTER JOIN
UNION ALL
SELECT i0.obj_id, i0.item_id, c01.parent_object_id AS rel_obj_id, i02.item_id AS rel_item_id
FROM dbo.vSYSChangeHistoryItemsD AS i0 INNER JOIN
sys.foreign_key_columns AS c01 ON c01.referenced_object_id = i0.obj_id AND c01.constraint_column_id = 1 AND col_name(c01.referenced_object_id,
c01.referenced_column_id) = 'ID' INNER JOIN
dbo.vSYSChangeHistoryItemsD AS i02 ON c01.parent_object_id = i02.obj_id INNER JOIN
dbo.tblSYSChangeHistory_Details AS d01k ON i02.min_id = d01k.i_id AND d01k.col_id = c01.parent_column_id AND ISNULL(d01k.new_val,
d01k.prev_val) = CAST(i0.item_id AS varchar(max))
And finally, that view uses one more view:
ALTER VIEW [dbo].[vSYSChangeHistoryItemsD]
AS
SELECT h.obj_id, m.item_id, MIN(m.id) AS min_id
FROM dbo.tblSYSChangeHistory AS h INNER JOIN
dbo.tblSYSChangeHistory_Items AS m ON h.id = m.h_id
GROUP BY h.obj_id, m.item_id
Working with the Profiler, it appears that view vSYSChangeHistoryFK is the big culprit, and my testing suggests that the particular problem is in the join between the two copies of vSYSChangeHistoryItemsD and the foreign_key_columns table.
I'm looking for any ideas on how to give acceptable performance here. The client reports sometimes waiting as much as 15 minutes without getting results. I've tested up to nearly 10 minutes with no result in at least one case.
If there were new language elements in 2008 or later that would solve this, I think the client would be willing to upgrade.
Thanks.
Wow, that's a mess. Your big gain should be in removing the cursor. I see WHERE EXISTS; that's nice and efficient because as soon as it finds one match it aborts. And I see WHERE NOT EXISTS; by definition that has to scan everything. Is it finding the top 4? You can do better using ROW_NUMBER() OVER (PARTITION BY [whatever makes it unique] ORDER BY [whatever your id is]). It's hard to tell; select object_id(obj_name), @iObj_id, 0 makes it seem like only the @i = 1 loop actually does anything (?)
If that is what it's doing, you could write it as
SELECT * from
(
    select ROW_NUMBER() OVER (PARTITION BY obj_id ORDER BY item_id desc) as Row,
        obj_id, item_id
    FROM dbo.vSYSChangeHistoryFK a with (nolock)
    where obj_type = 'U' and descr = cast(@cPrev_val AS varchar(150))
) paged
where Row between 1 and 4
ORDER BY Row
A DBA-level change that could help would be to set up a partitioning scheme based on date, rolling over to a new partition every so often and putting the old partitions on different disks. Most queries may only need to hit the recent partition, which will be, say, 1/5th the size it used to be, making them much faster without changing anything else; a sketch follows.
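A minimal sketch of such a scheme (all names, boundary dates, and filegroups here are illustrative; table partitioning also requires Enterprise Edition):

-- One partition per year of [date]
CREATE PARTITION FUNCTION pfChangeHistoryByDate (datetime)
AS RANGE RIGHT FOR VALUES ('2009-01-01', '2010-01-01');

-- Map partitions to filegroups, oldest data on the slower disks
CREATE PARTITION SCHEME psChangeHistoryByDate
AS PARTITION pfChangeHistoryByDate TO (fgArchive, fg2009, fg2010);

-- Recreate tblSYSChangeHistory's clustered index on the scheme
-- (after dropping or rebuilding the existing clustered PK) so rows
-- land in the partition for their [date].
CREATE CLUSTERED INDEX cix_tblSYSChangeHistory_date
ON dbo.tblSYSChangeHistory ([date], [id])
ON psChangeHistoryByDate ([date]);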
Not a full answer, sorry. That mess would take hours to parse

SQL-Server Trigger on update for Audit

I can't find an easy/generic way to record in an audit table which columns changed on some tables.
I tried to do it using a Trigger on after update in this way:
First of all the Audit Table definition:
CREATE TABLE [Audit](
[Id] [int] IDENTITY(1,1) NOT NULL,
[Date] [datetime] NOT NULL default GETDATE(),
[IdTypeAudit] [int] NOT NULL, --2 for Modify
[UserName] [varchar](50) NULL,
[TableName] [varchar](50) NOT NULL,
[ColumnName] [varchar](50) NULL,
[OldData] [varchar](50) NULL,
[NewData] [varchar](50) NULL )
Next a trigger on AFTER UPDATE in any table:
DECLARE
    @sql varchar(8000),
    @col int,
    @colcount int
select @colcount = count(*) from INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'MyTable'
set @col = 1
while (@col < @colcount)
begin
    set @sql =
        'INSERT INTO Audit
        SELECT 2, UserNameLastModif, ''MyTable'', COL_NAME(Object_id(''MyTable''), ' + convert(varchar, @col) + '), Deleted.'
        + COL_NAME(Object_id('MyTable'), @col) + ', Inserted.' + COL_NAME(Object_id('MyTable'), @col) + '
        FROM Inserted LEFT JOIN Deleted ON Inserted.[MyTableId] = Deleted.[MyTableId]
        WHERE COALESCE(Deleted.' + COL_NAME(Object_id('MyTable'), @col) + ', '''') <> COALESCE(Inserted.' + COL_NAME(Object_id('MyTable'), @col) + ', '''')'
    --UserNameLastModif is an optional column on MyTable
    exec(@sql)
    set @col = @col + 1
end
The problems:
Inserted and Deleted lose their context when I use the exec function.
It seems the column number isn't always consecutive: if you create a table with 20 columns, then delete one and create another, the last one has a number > @colcount.
I was looking for a solution all over the net, but I couldn't figure it out.
Any idea?
Thanks!
This highlights a greater problem with the structural choice. Try to write a set-based solution: remove the loop and the dynamic SQL and write a single statement that inserts the Audit rows. It is possible, but to make it easier, consider a different table layout, like keeping all columns in one row instead of splitting them.
In SQL 2000 use syscolumns; in SQL 2005+ use sys.columns, i.e.
SELECT column_id FROM sys.columns WHERE object_id = OBJECT_ID(DB_NAME()+'.dbo.Table');
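To make the set-based suggestion concrete, a sketch (my example, not the answerer's; Col1 and Col2 stand in for the real columns, and CROSS APPLY (VALUES ...) needs SQL Server 2008+, so on 2005 use SELECTs with UNION ALL instead):

-- Inside an AFTER UPDATE trigger on MyTable; no loop, no dynamic SQL.
-- Each (name, old, new) row is compared and audited in one statement.
INSERT INTO Audit (IdTypeAudit, UserName, TableName, ColumnName, OldData, NewData)
SELECT 2, i.UserNameLastModif, 'MyTable', v.ColumnName, v.OldData, v.NewData
FROM Inserted i
JOIN Deleted d ON d.MyTableId = i.MyTableId
CROSS APPLY (VALUES
    ('Col1', CONVERT(varchar(50), d.Col1), CONVERT(varchar(50), i.Col1)),
    ('Col2', CONVERT(varchar(50), d.Col2), CONVERT(varchar(50), i.Col2))
) v (ColumnName, OldData, NewData)
WHERE COALESCE(v.OldData, '') <> COALESCE(v.NewData, '');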
@Santiago: If you still want to write it in dynamic SQL, you should prepare all of the statements first, then execute them.
8000 characters may not be enough for all the statements. A good solution is to use a table to store them.
IF NOT OBJECT_ID('tempdb..#stmt') IS NULL
    DROP TABLE #stmt;
CREATE TABLE #stmt (ID int NOT NULL IDENTITY(1,1), SQL varchar(8000) NOT NULL);
Then replace the line exec(@sql) with INSERT INTO #stmt (SQL) VALUES (@sql);
Then exec each row (EXEC cannot take a SELECT directly, so read each statement into a variable first):
DECLARE @stmtSQL varchar(8000);
WHILE EXISTS (SELECT TOP 1 * FROM #stmt)
BEGIN
    BEGIN TRANSACTION;
    SELECT TOP 1 @stmtSQL = SQL FROM #stmt ORDER BY ID;
    EXEC (@stmtSQL);
    DELETE FROM #stmt WHERE ID = (SELECT MIN(ID) FROM #stmt);
    COMMIT TRANSACTION;
END
Remember to use sys.columns for the column loop (I shall assume you use SQL 2005/2008).
SET @col = 0;
WHILE EXISTS (SELECT TOP 1 * FROM sys.columns WHERE object_id = OBJECT_ID('MyTable') AND column_id > @col)
BEGIN
    SELECT TOP 1 @col = column_id FROM sys.columns
    WHERE object_id = OBJECT_ID('MyTable') AND column_id > @col ORDER BY column_id ASC;
    SET @sql ....
    INSERT INTO #stmt ....
END
Remove line 4 (@colcount int) and the preceding comma. Remove the INFORMATION_SCHEMA select.
Do not ever use any kind of looping in a trigger. Do not use dynamic SQL, call a stored proc, or send an email. All of these things are extremely inappropriate in a trigger.
If you want to use dynamic SQL, use it to generate the script that creates the trigger. And create an audit table for every table you want audited (we actually have two for every table) or you will have performance problems due to locking on the "one table to rule them all".
