Statistics update differently based on compatibility levels - sql-server

I'm trying to understand why statistics behave differently between compatibility levels 130-150 and 160.
Here are two test scenarios.
Test scenario 1:
USE master;
GO
IF EXISTS(SELECT * FROM sys.databases WHERE name = 'StatisticsUpdating')
BEGIN
ALTER DATABASE StatisticsUpdating SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE StatisticsUpdating;
END
GO
CREATE DATABASE StatisticsUpdating;
GO
ALTER DATABASE StatisticsUpdating SET
COMPATIBILITY_LEVEL = 160,
AUTO_UPDATE_STATISTICS ON,
AUTO_CREATE_STATISTICS ON;
GO
USE StatisticsUpdating;
CREATE TABLE StatMan
(
Id INT NOT NULL IDENTITY PRIMARY KEY,
StatName VARCHAR(100),
StatDescription VARCHAR(200)
) ON [PRIMARY];
GO
CREATE TABLE StatInformation
(
Id INT NOT NULL IDENTITY PRIMARY KEY,
StatManId INT NOT NULL,
StatLineItem VARCHAR(200)
) ON [PRIMARY];
GO
SET NOCOUNT ON;
INSERT INTO StatMan(StatName, StatDescription)
VALUES('Test3', 'Test Description');
GO 1000
SELECT *
FROM StatMan sm
WHERE sm.Id = 200
OPTION(RECOMPILE);
GO
SET NOCOUNT ON;
INSERT INTO StatMan(StatName, StatDescription)
VALUES('Test3', 'Test Description');
GO 60000
SELECT 'DONE';
When viewing the modification_counter with the below query:
USE StatisticsUpdating;
SELECT s.object_id, sp.modification_counter, s.name,
CONCAT('USE ', DB_NAME(), ';', ' DBCC SHOW_STATISTICS(', QUOTENAME(OBJECT_NAME(s.object_id)), ', ', QUOTENAME(s.name), ');') AS ViewHistogram
FROM sys.stats s
OUTER APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
WHERE s.object_id = OBJECT_ID('StatMan');
I get a modification_counter of 60,000, as expected.
After running the below query:
SELECT *
FROM StatMan sm
WHERE sm.Id = 200
OPTION(RECOMPILE);
modification_counter is 0, as expected.
Everything is working as expected so far.
Now running the same test but switching the COMPATIBILITY_LEVEL to either 130, 140, or 150.
When viewing the modification_counter with the below query:
USE StatisticsUpdating;
SELECT s.object_id, sp.modification_counter, s.name,
CONCAT('USE ', DB_NAME(), ';', ' DBCC SHOW_STATISTICS(', QUOTENAME(OBJECT_NAME(s.object_id)), ', ', QUOTENAME(s.name), ');') AS ViewHistogram
FROM sys.stats s
OUTER APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
WHERE s.object_id = OBJECT_ID('StatMan');
I get a modification_counter of NULL, which is not expected.
When I run
SELECT *
FROM StatMan sm
WHERE sm.Id = 200
OPTION(RECOMPILE);
Once again modification_counter is NULL, which is not expected.
So I'm really confused about why modification_counter is never getting any modifications, no matter how many rows are in the table. Even running
USE StatisticsUpdating;
DBCC SHOW_STATISTICS([StatMan], [PK__StatMan__3214EC078268D31F]);
Shows no stats.
Test scenario 2:
USE master;
GO
IF EXISTS(SELECT * FROM sys.databases WHERE name = 'StatisticsUpdating')
BEGIN
ALTER DATABASE StatisticsUpdating SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE StatisticsUpdating;
END
GO
CREATE DATABASE StatisticsUpdating;
GO
ALTER DATABASE StatisticsUpdating SET
COMPATIBILITY_LEVEL = 160,
AUTO_UPDATE_STATISTICS ON,
AUTO_CREATE_STATISTICS ON;
GO
USE StatisticsUpdating;
CREATE TABLE StatMan
(
Id INT NOT NULL IDENTITY PRIMARY KEY,
StatName VARCHAR(100),
StatDescription VARCHAR(200)
) ON [PRIMARY];
GO
CREATE TABLE StatInformation
(
Id INT NOT NULL IDENTITY PRIMARY KEY,
StatManId INT NOT NULL,
StatLineItem VARCHAR(200)
) ON [PRIMARY];
GO
SET NOCOUNT ON;
INSERT INTO StatMan(StatName, StatDescription)
VALUES('Test3', 'Test Description');
INSERT INTO StatInformation(StatManId, StatLineItem)
SELECT Id, CONCAT('Test Item', CAST(Id AS CHAR(200)))
FROM StatMan
WHERE Id NOT IN
(
SELECT StatManId
FROM StatInformation
);
GO 1000
SELECT *
FROM StatMan sm
WHERE sm.Id = 200
OPTION(RECOMPILE);
GO
SET NOCOUNT ON;
INSERT INTO StatMan(StatName, StatDescription)
VALUES('Test3', 'Test Description');
GO 60000
SELECT 'DONE';
When viewing the modification_counter with the below query:
USE StatisticsUpdating;
SELECT s.object_id, sp.modification_counter, s.name,
CONCAT('USE ', DB_NAME(), ';', ' DBCC SHOW_STATISTICS(', QUOTENAME(OBJECT_NAME(s.object_id)), ', ', QUOTENAME(s.name), ');') AS ViewHistogram
FROM sys.stats s
OUTER APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
WHERE s.object_id = OBJECT_ID('StatMan');
I get a modification_counter of 60,498, which is expected.
When I run
SELECT *
FROM StatMan sm
WHERE sm.Id = 200
OPTION(RECOMPILE);
modification_counter is 0, as expected.
Now running the same test but switching the COMPATIBILITY_LEVEL to either 130, 140, or 150.
When viewing the modification_counter with the below query:
USE StatisticsUpdating;
SELECT s.object_id, sp.modification_counter, s.name,
CONCAT('USE ', DB_NAME(), ';', ' DBCC SHOW_STATISTICS(', QUOTENAME(OBJECT_NAME(s.object_id)), ', ', QUOTENAME(s.name), ');') AS ViewHistogram
FROM sys.stats s
OUTER APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
WHERE s.object_id = OBJECT_ID('StatMan');
I get a modification_counter of 60,498, which is expected. Good so far!
When I run
SELECT *
FROM StatMan sm
WHERE sm.Id = 200
OPTION(RECOMPILE);
It's still at 60,498! It didn't do a stat update even though I was expecting it to.
So now I try
SELECT *
FROM StatMan sm
JOIN StatInformation si ON sm.Id = si.StatManId
WHERE sm.Id = 200
OPTION(RECOMPILE);
modification_counter is still at 60,498! Once again, still didn't update stats.
So to get stats to update, I ended up having to run
INSERT INTO StatInformation(StatManId, StatLineItem)
SELECT TOP 101 Id, CONCAT('Test Item', CAST(Id AS CHAR(200)))
FROM StatMan
WHERE Id NOT IN
(
SELECT StatManId
FROM StatInformation
);
where TOP 101 turned out to be the magic number of rows for the StatInformation table to get a stats update.
So I'm not exactly sure why it behaves differently between 130-150 and 160, especially when the Microsoft documentation says compatibility level 130 and above should behave the same.
Does anyone know what may be causing this?
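One hedged way to investigate: at compatibility level 130 and above, the documented dynamic auto-update threshold is roughly SQRT(1000 * rows), so comparing the modification counter against that value per statistic shows how far each one is from triggering. The query below is a sketch; approx_threshold is an illustrative alias, not a system column.

```sql
-- Sketch: compare modification_counter to the documented dynamic
-- auto-update threshold, SQRT(1000 * rows), used at compat level 130+.
SELECT OBJECT_NAME(s.object_id) AS TableName,
       s.name AS StatName,
       sp.rows,
       sp.modification_counter,
       SQRT(1000.0 * sp.rows) AS approx_threshold
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id IN (OBJECT_ID('StatMan'), OBJECT_ID('StatInformation'));
```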

Related

Executing Create Trigger throws sometimes an error depending on the query-structure including a variable

I have three T-SQL statements, each of which is supposed to create a trigger when executed.
Only the partly parameterised statement runs.
What I don't understand is why SSMS executes the one statement and throws an error with the two others. Any help is much appreciated.
This does NOT work: executing the statement without any variable
IF NOT EXISTS (SELECT * FROM sys.objects WHERE [type] = 'TR' AND SCHEMA_NAME(schema_id) = 'D365_del' AND [name] = 'trg_Table_del')
BEGIN
CREATE TRIGGER [D365].[trg_Table_del] ON [D365].[Table] AFTER DELETE AS INSERT INTO [D365_del].[Table] ([ID], [Action],[ModifiedDate])(SELECT [ID], 1,SYSDATETIME() from DELETED)
END
This works: putting part of it into a variable
declare @SQL nvarchar(4000)
set @SQL = 'CREATE TRIGGER [D365].[trg_Table_del] ON [D365].[Table] AFTER DELETE AS INSERT INTO [D365_del].[Table] ([ID], [Action],[ModifiedDate])(SELECT [ID], 1,SYSDATETIME() from DELETED) '
IF NOT EXISTS (SELECT * FROM sys.objects WHERE [type] = 'TR' AND SCHEMA_NAME(schema_id) = 'D365_del' AND [name] = 'trg_Table_del')
BEGIN
EXEC (@SQL)
END
This does not work: putting all of the statement into a variable
declare @SQL nvarchar(4000)
set @SQL = 'IF NOT EXISTS (SELECT * FROM sys.objects WHERE [type] = ''TR'' AND SCHEMA_NAME(schema_id) = ''D365_del'' AND [name] = ''trg_Table_del'') BEGIN CREATE TRIGGER [D365].[trg_Table_del] ON [D365].[Table] AFTER DELETE AS INSERT INTO [D365_del].[Table] ([ID], [Action],[ModifiedDate])(SELECT [ID], 1,SYSDATETIME() from DELETED) END'
EXEC (@SQL)
In both cases where it doesn't work i get the same error-message:
Msg 156, Level 15, State 1, Line 1
Incorrect syntax near the keyword 'TRIGGER'.
I am using:
SQL Server Management Studio 15.0.18424.0
Windows Operating System 10.0.22000
SQL-Server 12.0.2000.8
CREATE TRIGGER must be the only statement in the batch. To address the problem:
option 1: Add a conditional DROP followed by a GO batch separator and CREATE TRIGGER:
IF EXISTS (SELECT * FROM sys.objects WHERE [type] = 'TR' AND SCHEMA_NAME(schema_id) = 'D365_del' AND [name] = 'trg_Table_del')
BEGIN
DROP TRIGGER [D365].[trg_Table_del];
END
GO
CREATE TRIGGER [D365].[trg_Table_del] ON [D365].[Table]
AFTER DELETE AS
INSERT INTO [D365_del].[Table] ([ID], [Action],[ModifiedDate])
(SELECT [ID], 1,SYSDATETIME() from DELETED)
GO
option 2: Use CREATE OR ALTER:
CREATE OR ALTER TRIGGER [D365].[trg_Table_del] ON [D365].[Table]
AFTER DELETE AS
INSERT INTO [D365_del].[Table] ([ID], [Action],[ModifiedDate])
(SELECT [ID], 1,SYSDATETIME() from DELETED)
GO
option 3 (which you've already discovered): Use dynamic SQL so CREATE TRIGGER is in a separate batch.
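A minimal sketch of option 3, reusing the same object names as above: the IF runs in the outer batch, while CREATE TRIGGER runs alone inside the dynamic batch, satisfying the "only statement in the batch" rule.

```sql
IF NOT EXISTS (SELECT * FROM sys.objects
               WHERE [type] = 'TR'
                 AND SCHEMA_NAME(schema_id) = 'D365_del'
                 AND [name] = 'trg_Table_del')
BEGIN
    -- CREATE TRIGGER is the only statement in its own (dynamic) batch
    EXEC sp_executesql N'CREATE TRIGGER [D365].[trg_Table_del]
ON [D365].[Table] AFTER DELETE AS
INSERT INTO [D365_del].[Table] ([ID], [Action], [ModifiedDate])
SELECT [ID], 1, SYSDATETIME() FROM DELETED;';
END
```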

How to DELETE on a table using the table's name

I'm trying to write a trigger that checks whether a bit is true before deleting rows from a table, and sets them inactive instead if the bit is false. My trigger was this:
DELETE Contacts
FROM Contacts
INNER JOIN deleted ON Contacts.ContactID = deleted.ContactID
WHERE deleted.AllowDelete = 1
UPDATE Contacts
SET Active = 0
FROM Contacts
INNER JOIN deleted on Contacts.ContactID = deleted.ContactID
WHERE deleted.allowDelete = 0
Where Contacts is the table someone is trying to delete from. However, using this trigger on every table where it is necessary seemed inefficient, so I'm trying to normalize it with a stored procedure.
The idea is to exec the SP with the tablename as a variable, and the deleted table put into a temptable. Right now the trigger looks like this:
SELECT *
INTO #deleted
FROM deleted
DROP TABLE #deleted
And the SP looks like this:
ALTER PROCEDURE [dbo].[OnDeleteTrigger]
-- Add the parameters for the stored procedure here
@TableToDeleteFrom nvarchar(max) = ''
AS
BEGIN
SET NOCOUNT ON;
DELETE Contacts
FROM Contacts
INNER JOIN #deleted ON Contacts.ContactID = #deleted.ContactID
WHERE #deleted.AllowDelete = 1
UPDATE Contacts
SET Active = 0
FROM Contacts
INNER JOIN #deleted ON Contacts.ContactID = #deleted.ContactID
WHERE #deleted.AllowDelete = 0
END
The deleted temptable seems to work fine, although I can't test it yet as I can't find a way to get the table dbo from the table name, to replace all the 'Contacts'.
Hopefully this is enough information to get an answer, if not I'll edit it later.
Having a separate trigger on each table with static SQL is more efficient, although it doesn't lend itself well to code reuse. The other tables will have different primary key column names than ContactID, so you would need to pass that name as well as the table name.
With the desired tables all having the standard columns Active and AllowDelete, you could create the needed triggers dynamically during deployment rather than using dynamic SQL at run time. Below is an example of this technique, using MERGE instead of separate DELETE and UPDATE statements:
CREATE TABLE dbo.Contacts(
ContactID int NOT NULL CONSTRAINT PK_Contacts PRIMARY KEY
, Active bit NOT NULL
, AllowDelete bit NOT NULL
);
CREATE TABLE dbo.Users(
UserID int NOT NULL CONSTRAINT PK_Users PRIMARY KEY
, Active bit NOT NULL
, AllowDelete bit NOT NULL
);
INSERT INTO dbo.Contacts VALUES(1, 1, 0);
INSERT INTO dbo.Contacts VALUES(2, 1, 0);
INSERT INTO dbo.Contacts VALUES(3, 1, 1);
INSERT INTO dbo.Contacts VALUES(4, 1, 1);
INSERT INTO dbo.Users VALUES(1, 1, 0);
INSERT INTO dbo.Users VALUES(2, 1, 0);
INSERT INTO dbo.Users VALUES(3, 1, 1);
INSERT INTO dbo.Users VALUES(4, 1, 1);
GO
CREATE PROC dbo.CreateSoftDeleteTrigger
@SchemaName sysname
, @TableName sysname
, @JoinCriteria nvarchar(MAX)
AS
SET NOCOUNT ON;
DECLARE @SQL nvarchar(MAX);
SET @SQL = N'CREATE TRIGGER ' + QUOTENAME(N'TRD_' + @TableName) + N'
ON ' + QUOTENAME(@SchemaName) + N'.' + QUOTENAME(@TableName) + N'
INSTEAD OF DELETE
AS
SET NOCOUNT ON;
MERGE ' + QUOTENAME(@SchemaName) + N'.' + QUOTENAME(@TableName) + N' AS target
USING deleted ON ' + @JoinCriteria + N'
WHEN MATCHED AND deleted.AllowDelete = 1 THEN DELETE
WHEN MATCHED AND deleted.AllowDelete = 0 THEN UPDATE SET Active = 0;'
EXECUTE(@SQL);
GO
EXEC dbo.CreateSoftDeleteTrigger
@SchemaName = N'dbo'
, @TableName = N'Contacts'
, @JoinCriteria = N'target.ContactID = deleted.ContactID';
EXEC dbo.CreateSoftDeleteTrigger
@SchemaName = N'dbo'
, @TableName = N'Users'
, @JoinCriteria = N'target.UserID = deleted.UserID';
GO
--soft delete test
DELETE FROM dbo.Contacts WHERE ContactID = 1;
SELECT * FROM Contacts;
--hard delete test
DELETE FROM dbo.Users WHERE UserID = 4;
SELECT * FROM Users;
GO
I guess you want to use an "instead of" trigger:
https://technet.microsoft.com/en-us/library/ms175521(v=sql.105).aspx
CREATE TABLE Contacts
(
id int identity(1,1) primary key clustered,
[name] varchar(50),
isactive bit not null default(1),
SoftDeletion bit not null default(1)
)
insert into Contacts([name]) values ('my'),('myself'),('I');
insert into Contacts([name],SoftDeletion) values ('Killable', 0);
GO
CREATE TRIGGER trgInsteadOfDelete ON Contacts
INSTEAD OF DELETE
AS
BEGIN
UPDATE Contacts
set isactive = 0
where
SoftDeletion = 1
and
id in (SELECT ID from deleted);
DELETE FROM Contacts
WHERE
softdeletion = 0
AND
id in (SELECT ID from deleted);
END
GO
SELECT * from Contacts;
DELETE FROM Contacts;
SELECT * from Contacts;

how to get all the records added for 1 last minute

I am new to SQL.
I am using SQL Server 2008.
I am trying to get the records added to a table within the last minute.
I have an SP which gets called every minute (60 seconds), and I do not have any column which stores the modified date, so I have to select all the rows added within that one-minute interval.
It may happen that no rows or N rows were added within the last minute, so I need to get all of them, since the SP fires every minute.
To get all records in a table that have been inserted in the last minute, you would need a date field on your table that either has a DEFAULT GETDATE() constraint, or insert the date manually when you insert the record.
SELECT *
FROM MyTable
WHERE MyDateColumn >= DATEADD(mi, -1, GETDATE())
And to create a column with the default on an existing table:
ALTER TABLE MyTable
ADD MyDateColumn DATETIME2 NOT NULL DEFAULT GETDATE()
You might be able to get table / row metadata from the sys or INFORMATION_SCHEMA schemas, but I really wouldn't recommend it.
That said, maybe your application / database is suitable for SQL Server Auditing?
http://technet.microsoft.com/en-us/library/cc280386.aspx
UPDATE
This doesn't exactly answer your question, but it might be a viable solution. I wrote a cursor to create and apply audit triggers to all the tables in a database a while back. The triggers are set up to log to a separate table (which is currently hard coded). You can modify this for your needs, but I've included all the relevant code below. I imagine you will need to remove the UserId and its constraints:
-- Stores the types of audit entries for the "Logs" table, such as "RECORD CREATED", "RECORD MODIFIED", "RECORD DELETED" and other, miscellaneous log types
CREATE TABLE dbo.LogTypes (
LogTypeID INT NOT NULL IDENTITY PRIMARY KEY,
[Description] VARCHAR(100) NOT NULL UNIQUE
)
CREATE TABLE dbo.[Logs] (
LogID INT NOT NULL IDENTITY PRIMARY KEY,
LogTypeID INT NOT NULL FOREIGN KEY REFERENCES LogTypes(LogTypeID),
UserID INT NOT NULL FOREIGN KEY REFERENCES Users(UserID), -- User that created this log entry.
SysObjectID INT, -- The ID of the table in sysobjects (if this is a CRUD insert from a trigger)
Details VARCHAR(1000),
DateCreated DATETIME NOT NULL DEFAULT GETDATE()
)
-----------------------------------------------------------------------------
-- Log types
-----------------------------------------------------------------------------
SET IDENTITY_INSERT LogTypes ON
INSERT INTO LogTypes (LogTypeID, [Description])
VALUES(1, 'Record Insert')
INSERT INTO LogTypes (LogTypeID, [Description])
VALUES(2, 'Record Update')
INSERT INTO LogTypes (LogTypeID, [Description])
VALUES(3, 'Record Deletion')
INSERT INTO LogTypes (LogTypeID, [Description])
VALUES(4, 'User logged in')
INSERT INTO LogTypes (LogTypeID, [Description])
VALUES(5, 'User logged out')
SET IDENTITY_INSERT LogTypes OFF
Cursor code (run once):
DECLARE @table_name VARCHAR(500), @instruction VARCHAR(MAX)
DECLARE curTables CURSOR READ_ONLY FAST_FORWARD FOR
SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME <> 'Logs'
OPEN curTables
FETCH NEXT FROM curTables INTO @table_name
WHILE @@FETCH_STATUS = 0
BEGIN
-- Drop any existing trigger
SET @instruction = 'IF OBJECT_ID (''tr_' + @table_name + '_Audit'',''TR'') IS NOT NULL DROP TRIGGER tr_' + @table_name + '_Audit;'
exec sp_sqlexec @instruction
-- Create the new trigger
SET @instruction = 'CREATE TRIGGER tr_' + @table_name + '_Audit
ON ' + @table_name + '
AFTER INSERT, UPDATE, DELETE
AS
SET NOCOUNT ON
DECLARE @LogTypeID INT, @SystemUserID INT, @SysObjectID INT, @TableName VARCHAR(500)
SET @SystemUserID = 1 -- System account
SET @TableName = ''' + @table_name + '''
IF EXISTS(SELECT * FROM Inserted) AND EXISTS(SELECT * FROM Deleted)
SET @LogTypeID = 2 -- Update
ELSE IF EXISTS(SELECT * FROM Inserted)
SET @LogTypeID = 1 -- Insertion
ELSE IF EXISTS(SELECT * FROM Deleted)
SET @LogTypeID = 3 -- Deletion
SET @LogTypeID = ISNULL(@LogTypeID, 0)
IF @LogTypeID > 0
BEGIN
-- Only log if successful
SELECT
@SysObjectID = id
FROM sysobjects (nolock)
where [name] = @TableName
AND [type] = ''U''
INSERT INTO [Logs] (LogTypeID, UserID, SysObjectID, Details, DateCreated)
VALUES(@LogTypeID, @SystemUserID, @SysObjectID, NULL, GETDATE())
END'
exec sp_sqlexec @instruction
FETCH NEXT FROM curTables INTO @table_name
END
CLOSE curTables
DEALLOCATE curTables
Every table in your database will now log all INSERTs, UPDATEs and DELETEs to the Logs table. However, be aware that adding triggers to your tables increases I/O and memory usage, which decreases performance. It may not be a massive problem for you, though.

How to know the data type of a field in trigger?

How can I know the data type of a field inside a trigger? I am able to get the field name and its value inside an AFTER INSERT trigger as follows:
DECLARE @AfterInserted XML
SELECT @AfterInserted = (
SELECT *
FROM INSERTED
WHERE User_Key = User_Key
FOR XML RAW, ROOT
);
CREATE TABLE #XML(
FieldName nvarchar(250),
Value nvarchar(250));
Insert Into #XML(FieldName, Value)
select T.N.value('local-name(.)', 'nvarchar(100)'),
T.N.value('.', 'nvarchar(250)')
from @AfterInserted.nodes('/root/row/@*') as T(N)
I also need the data type of the field, something like T.N.value('Data-type')?
Thanks
Thanks
Not exactly sure if this will work for your purpose, but:
SELECT SQL_VARIANT_PROPERTY(your_column, 'BaseType')
FROM your_table
Will return a column's field type as NVARCHAR.
You can also use Precision, Scale, MaxLength, Collation and TotalBytes as the 2nd parameter for SQL_VARIANT_PROPERTY.
There is no need to use XML to get meta-data related to the table that the Trigger is on. You can query the system catalog to get the info directly.
The Trigger is an object with an object_id, so the @@PROCID system function will return the object_id of the Trigger itself, within the context of the Trigger.
Using the value of @@PROCID, you can look in sys.objects at the row for that specific object_id, and the parent_object_id field will be the object_id of the Table that the Trigger is on.
Using the value of parent_object_id, you can query sys.tables / sys.objects to get table-level info, or query sys.columns to get column-level info.
The example below illustrates the above info:
SET ANSI_NULLS ON
SET QUOTED_IDENTIFIER ON
SET NOCOUNT ON
GO
-- DROP TABLE dbo.TriggerTest
IF (OBJECT_ID('dbo.TriggerTest') IS NULL)
BEGIN
PRINT 'Creating TriggerTest...'
CREATE TABLE dbo.TriggerTest (Col1 INT, Col2 VARCHAR(30), Col3 DATETIME)
END
--
IF (OBJECT_ID('dbo.TR_TriggerTest_IU') IS NOT NULL)
BEGIN
PRINT 'Dropping TR_TriggerTest_IU...'
DROP TRIGGER dbo.TR_TriggerTest_IU
END
GO
PRINT 'Creating TR_TriggerTest_IU...'
GO
CREATE TRIGGER dbo.TR_TriggerTest_IU
ON dbo.TriggerTest
AFTER INSERT, UPDATE
AS
BEGIN
SET NOCOUNT ON
-- get Table info
SELECT *
FROM sys.tables st
WHERE st.[object_id] = ( -- get parent_object_id of this Trigger
-- which will be the table that the
-- Trigger is on
SELECT so.parent_object_id
FROM sys.objects so
WHERE so.[object_id] = @@PROCID
)
-- get Column info
SELECT *
FROM sys.columns sc
-- Custom types will repeat values of system_type_id and produce a
-- cartesian product, so filter using "is_user_defined = 0"
INNER JOIN sys.types st
ON st.system_type_id = sc.system_type_id
AND st.is_user_defined = 0
WHERE sc.[object_id] = ( -- get parent_object_id of this Trigger
-- which will be the table that the
-- Trigger is on
SELECT so.parent_object_id
FROM sys.objects so
WHERE so.[object_id] = @@PROCID
)
END
GO
INSERT INTO dbo.TriggerTest (Col1, Col2, Col3) VALUES (1, 'a', GETDATE())

SQL-Server Trigger on update for Audit

I can't find an easy/generic way to record, in an audit table, the columns changed on some tables.
I tried to do it using an AFTER UPDATE trigger in this way:
First of all the Audit Table definition:
CREATE TABLE [Audit](
[Id] [int] IDENTITY(1,1) NOT NULL,
[Date] [datetime] NOT NULL default GETDATE(),
[IdTypeAudit] [int] NOT NULL, --2 for Modify
[UserName] [varchar](50) NULL,
[TableName] [varchar](50) NOT NULL,
[ColumnName] [varchar](50) NULL,
[OldData] [varchar](50) NULL,
[NewData] [varchar](50) NULL )
Next a trigger on AFTER UPDATE in any table:
DECLARE
@sql varchar(8000),
@col int,
@colcount int
select @colcount = count(*) from INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'MyTable'
set @col = 1
while(@col < @colcount )
begin
set @sql=
'INSERT INTO Audit
SELECT 2, UserNameLastModif, ''MyTable'', COL_NAME(Object_id(''MyTable''), '+ convert(varchar,@col) +'), Deleted.'
+ COL_NAME(Object_id('MyTable'), @col) + ', Inserted.' + COL_NAME(Object_id('MyTable'), @col) + '
FROM Inserted LEFT JOIN Deleted ON Inserted.[MyTableId] = Deleted.[MyTableId]
WHERE COALESCE(Deleted.' + COL_NAME(Object_id('MyTable'), @col) + ', '''') <> COALESCE(Inserted.' + COL_NAME(Object_id('MyTable'), @col) + ', '''')'
--UserNameLastModif is an optional column on MyTable
exec(@sql)
set @col = @col + 1
end
The problems:
Inserted and Deleted lose their context when I use the exec function.
It seems the column number isn't always sequential: if you create a table with 20 columns, then delete one and create another, the last one has a number greater than @colcount.
I was looking for a solution all over the net but couldn't figure it out.
Any idea?
Thanks!
This highlights a greater problem with structural choice. Try to write a set-based solution. Remove the loop and dynamic SQL and write a single statement that inserts the Audit rows. It is possible but to make it easier consider a different table layout, like keeping all columns on 1 row instead of splitting them.
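As a hedged illustration of the set-based approach (Col1 and Col2 are hypothetical stand-ins for MyTable's real columns), the old/new value pairs can be unpivoted with CROSS APPLY (VALUES ...) so a single statement inserts all the Audit rows, with Inserted and Deleted still in scope:

```sql
-- Sketch only: Col1/Col2 stand in for MyTable's actual columns.
INSERT INTO Audit (IdTypeAudit, UserName, TableName, ColumnName, OldData, NewData)
SELECT 2, i.UserNameLastModif, 'MyTable', v.ColumnName, v.OldData, v.NewData
FROM Inserted AS i
INNER JOIN Deleted AS d ON i.MyTableId = d.MyTableId
CROSS APPLY (VALUES
    ('Col1', CONVERT(varchar(50), d.Col1), CONVERT(varchar(50), i.Col1)),
    ('Col2', CONVERT(varchar(50), d.Col2), CONVERT(varchar(50), i.Col2))
) AS v(ColumnName, OldData, NewData)
WHERE COALESCE(v.OldData, '') <> COALESCE(v.NewData, '');
```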
In SQL 2000 use syscolumns. In SQL 2005+ use sys.columns. i.e.
SELECT column_id FROM sys.columns WHERE object_id = OBJECT_ID(DB_NAME()+'.dbo.Table');
@Santiago: If you still want to write it in dynamic SQL, you should prepare all of the statements first, then execute them.
8000 characters may not be enough for all the statements; a good solution is to use a table to store them.
IF NOT OBJECT_ID('tempdb..#stmt') IS NULL
DROP TABLE #stmt;
CREATE TABLE #stmt (ID int NOT NULL IDENTITY(1,1), SQL varchar(8000) NOT NULL);
Then replace the line exec(@sql) with INSERT INTO #stmt (SQL) VALUES (@sql);
Then exec each row.
DECLARE @stmt varchar(8000);
WHILE EXISTS (SELECT TOP 1 * FROM #stmt)
BEGIN
BEGIN TRANSACTION;
SELECT TOP 1 @stmt = SQL FROM #stmt ORDER BY ID;
EXEC (@stmt);
DELETE FROM #stmt WHERE ID = (SELECT MIN(ID) FROM #stmt);
COMMIT TRANSACTION;
END
Remember to use sys.columns for the column loop (I shall assume you use SQL 2005/2008).
SET @col = 0;
WHILE EXISTS (SELECT TOP 1 * FROM sys.columns WHERE object_id = OBJECT_ID('MyTable') AND column_id > @col)
BEGIN
SELECT TOP 1 @col = column_id FROM sys.columns
WHERE object_id = OBJECT_ID('MyTable') AND column_id > @col ORDER BY column_id ASC;
SET @sql ....
INSERT INTO #stmt ....
END
Remove line 4 (@colcount int) and the preceding comma, and remove the INFORMATION_SCHEMA select.
Do not ever use any kind of looping in a trigger. Do not use dynamic SQL, call a stored proc, or send an email. All of these things are extremely inappropriate in a trigger.
If you want to use dynamic SQL, use it to create the script that creates the trigger. And create an audit table for every table you want audited (we actually have two for every table) or you will have performance problems due to locking on the "one table to rule them all".