I have created an audit table that is populated by an audit trail (triggers after every update, delete, and insert) on different tables in my database. I am now asked to create a stored procedure (script) to roll back a data change using the audit id. How do I go about doing so? I wrote a script which seems good: the command is accepted by SQL Server (command completed successfully). Unfortunately, when I test it by passing the audit id, the command completes but the data is not rolled back. This is the procedure I developed. Any help will be greatly appreciated.
create PROCEDURE [dbo].[spAudit_Rollback_2]
@AUDIT_ID NVARCHAR(MAX)
AS
SET NOCOUNT ON
BEGIN
DECLARE
@TABLE_NAME VARCHAR(100),
@COLUMN VARCHAR(100),
@OLD_VALUE VARCHAR(200),
@ID varchar(50)
SELECT @TABLE_NAME = TABLE_NAME FROM AUDIT;
SELECT @COLUMN = [COLUMN] FROM AUDIT;
SELECT @AUDIT_ID = AUDIT_ID FROM AUDIT;
SELECT @OLD_VALUE = OLD_VALUE FROM AUDIT;
SELECT @ID = ROW_DESCRIPTION FROM AUDIT;
update [Production].[UnitMeasure]
set @COLUMN = @OLD_VALUE
WHERE [Production].[UnitMeasure].[UnitMeasureCode] = @ID
END
EXEC [dbo].[spAudit_Rollback_2] '130F0598-EB89-44E5-A64A-ABDFF56809B5'
This is the same script but using the AdventureWorks2017 database and data.
If possible, I would even prefer to retrieve the table name from AUDIT into a variable and use that in the procedure. That too is giving me another error.
Any help with this procedure will be awesome.
This needs to be dynamic SQL because you're updating a column that's defined in a variable. Do the following in place of your current UPDATE statement.
DECLARE @sql VARCHAR(1000) = ''
SET @sql = 'UPDATE [Production].[UnitMeasure] ' +
'SET ' + @COLUMN + ' = ''' + @OLD_VALUE + ''' ' +
'WHERE [Production].[UnitMeasure].[UnitMeasureCode] = ''' + @ID + ''''
EXEC(@sql)
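Note also that the lookups in your procedure select from AUDIT with no WHERE clause, so the variables are populated from arbitrary rows rather than from the audit entry you passed in. A sketch of the same assignments filtered by the parameter (column names taken from your procedure):
SELECT @TABLE_NAME = TABLE_NAME,
@COLUMN = [COLUMN],
@OLD_VALUE = OLD_VALUE,
@ID = ROW_DESCRIPTION
FROM AUDIT
WHERE AUDIT_ID = @AUDIT_ID;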
In one of our SQL Server databases we have many SQL views. One particular view keeps disappearing every few weeks, and I want to find out what is happening.
Is there a way to query SQL Server to find out when and who dropped the view?
Alternatively, is it possible to add a SQL Server trigger on the DROP VIEW command to capture and fail the DROP?
This information is written to the default trace. Below is an example query to glean the information.
SELECT
te.name
,tt.DatabaseName
,tt.StartTime
,tt.HostName
,tt.LoginName
,tt.ApplicationName
FROM sys.traces AS t
CROSS APPLY fn_trace_gettable(
--get trace folder and add base file name log.trc
REVERSE(SUBSTRING(REVERSE(t.path), CHARINDEX(N'\', REVERSE(t.path)), 128)) + 'log.trc', default) AS tt
JOIN sys.trace_events AS te ON
te.trace_event_id = tt.EventClass
JOIN sys.trace_subclass_values AS tesv ON
tesv.trace_event_id = tt.EventClass
AND tesv.subclass_value = tt.EventSubClass
WHERE
t.is_default = 1 --default trace
AND tt.ObjectName = N'YourView'
AND tt.DatabaseName = N'YourDatabase';
Note the default trace is a rollover trace that keeps a maximum of 100MB so it might not have the forensic info if the view was recreated a while ago.
Yes, this is a DDL trigger. Sample trigger text is included in the MSDN article about this kind of trigger. I'd say such a trigger is a must on a production database for auditing reasons.
https://technet.microsoft.com/en-us/library/ms187909.aspx
Another trick is to create another object (for example, a view) that depends on this view, using the SCHEMABINDING option. This makes it impossible to drop any object that a schema-bound object depends on.
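For example, here is a minimal sketch (the guard view name and column are placeholders, and note that a schema-bound view can only reference views that were themselves created WITH SCHEMABINDING):
CREATE VIEW dbo.vYourView_Guard
WITH SCHEMABINDING
AS
SELECT SomeColumn --any column exposed by the protected view
FROM dbo.YourView;
With the guard in place, DROP VIEW dbo.YourView should fail with error 3729 ("Cannot DROP VIEW ... because it is being referenced by object ...") until the guard is dropped first.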
To expand on another answer, here is some code to get started with a DDL trigger for DROP_VIEW. As an example, let's suppose someone dropped the view [HumanResources].[vEmployee] from the AdventureWorks database. The EVENTDATA() will look something like this:
<EVENT_INSTANCE>
<EventType>DROP_VIEW</EventType>
<PostTime>2016-02-26T09:02:58.190</PostTime>
<SPID>60</SPID>
<ServerName>YourSqlHost\SQLEXPRESS</ServerName>
<LoginName>YourDomain\SomeLogin</LoginName>
<UserName>dbo</UserName>
<DatabaseName>AdventureWorks2012</DatabaseName>
<SchemaName>HumanResources</SchemaName>
<ObjectName>vEmployee</ObjectName>
<ObjectType>VIEW</ObjectType>
<TSQLCommand>
<SetOptions ANSI_NULLS="ON" ANSI_NULL_DEFAULT="ON" ANSI_PADDING="ON" QUOTED_IDENTIFIER="ON" ENCRYPTED="FALSE" />
<CommandText>DROP VIEW [HumanResources].[vEmployee]
</CommandText>
</TSQLCommand>
</EVENT_INSTANCE>
And here is a possible DDL trigger statement:
CREATE TRIGGER trgDropView
ON DATABASE
FOR DROP_VIEW
AS
BEGIN
--Grab some pertinent items from EVENTDATA()
DECLARE @LoginName NVARCHAR(MAX) = EVENTDATA().value('(/EVENT_INSTANCE/LoginName)[1]', 'NVARCHAR(MAX)')
DECLARE @TsqlCmd NVARCHAR(MAX) = EVENTDATA().value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]','NVARCHAR(MAX)')
--Now do something. Lots of possibilities. Here are two:
--1) Send Email
DECLARE @Subj NVARCHAR(255) = @@SERVERNAME + ' - VIEW DROPPED'
DECLARE @MsgBody NVARCHAR(255) = 'Login Name: ' + @LoginName + CHAR(13) + CHAR(10) +
'Command: ' + @TsqlCmd
EXEC msdb..sp_send_dbmail
@recipients = 'You@YourDomain.com',
@subject = @Subj,
@body = @MsgBody
--2) Log an error
DECLARE @ErrMsg NVARCHAR(MAX) = @@SERVERNAME + ' - VIEW DROPPED' + CHAR(13) + CHAR(10) +
'Login Name: ' + @LoginName + CHAR(13) + CHAR(10) +
'Command: ' + @TsqlCmd
RAISERROR(@ErrMsg, 16, 1) WITH LOG;
END
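To try the trigger out, you could create and drop a throwaway view (the view name is a placeholder):
CREATE VIEW dbo.vDropMe AS SELECT 1 AS n;
GO
DROP VIEW dbo.vDropMe;
--trgDropView fires here: the mail is queued via sp_send_dbmail and the
--RAISERROR ... WITH LOG message should appear in the SQL Server error log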
You can also create a trigger at the server level in order to capture and log DDL changes across the server's databases:
CREATE TRIGGER [Trg_AuditStoredProcedures_Data]
ON ALL SERVER
FOR CREATE_PROCEDURE,ALTER_PROCEDURE,DROP_PROCEDURE,CREATE_TABLE,ALTER_TABLE,
DROP_TABLE,CREATE_FUNCTION,ALTER_FUNCTION,DROP_FUNCTION,CREATE_VIEW,ALTER_VIEW,
DROP_VIEW,CREATE_DATABASE,DROP_DATABASE,ALTER_DATABASE,
CREATE_TRIGGER,DROP_TRIGGER,ALTER_TRIGGER
AS
SET ANSI_PADDING ON
DECLARE @eventdata XML;
SET @eventdata = EVENTDATA();
SET NOCOUNT ON
/*Create table AuditDatabaseObject in order to have a history tracking for every DDL change on the database*/
INSERT INTO AuditDatabaseObject
(DatabaseName,ObjectName,LoginName,ChangeDate,EventType,EventDataXml,HostName)
VALUES (
@eventdata.value('(/EVENT_INSTANCE/DatabaseName)[1]','sysname')
, @eventdata.value('(/EVENT_INSTANCE/ObjectName)[1]', 'sysname')
, @eventdata.value('(/EVENT_INSTANCE/LoginName)[1]', 'sysname')
, GETDATE()
, @eventdata.value('(/EVENT_INSTANCE/EventType)[1]', 'sysname')
, @eventdata
, HOST_NAME()
);
DECLARE @Valor VARCHAR(30), @EventType VARCHAR(30)
SET @Valor = @eventdata.value('(/EVENT_INSTANCE/LoginName)[1]', 'sysname')
SET @EventType = @eventdata.value('(/EVENT_INSTANCE/EventType)[1]', 'sysname')
IF (IS_SRVROLEMEMBER('sysadmin',@Valor) != 1 AND @EventType = 'DROP_DATABASE')
BEGIN
ROLLBACK
END
You can find more information in the documentation for EVENTDATA().
If an object is dropped from the database, you will see a record created in the AuditDatabaseObject table.
Also keep in mind security, as @Chris Pickford mentioned.
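The trigger assumes an AuditDatabaseObject table already exists. A minimal sketch of a compatible definition follows (column names are taken from the INSERT above; the data types are assumptions). Because a server-scope trigger runs in the context of the database where the DDL statement executed, you may want to create the table once and fully qualify its name in the trigger:
CREATE TABLE dbo.AuditDatabaseObject (
DatabaseName sysname NOT NULL,
ObjectName sysname NULL, --NULL because not every event names an object
LoginName sysname NOT NULL,
ChangeDate datetime NOT NULL,
EventType sysname NOT NULL,
EventDataXml xml NULL,
HostName nvarchar(128) NULL
);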
Is there a way to disable certain index DDL operations (create, drop, and alter index) for a list of tables in MS SQL Server 2008 R2?
What I was trying to do is create a DDL trigger that catches these events and rolls them back, but it seems that all DDL triggers are after triggers, and if the table is very large this causes performance issues.
The trigger I am currently using is the following:
CREATE TRIGGER index_guard
ON DATABASE
FOR CREATE_INDEX, DROP_INDEX, ALTER_INDEX
AS
DECLARE @object_name NVARCHAR(50);
DECLARE @table_name NVARCHAR(50);
DECLARE @target_object_type NVARCHAR(20);
DECLARE @object_type NVARCHAR(20);
DECLARE @lookup_value NVARCHAR(100);
DECLARE @protected_indexes TABLE (Name NVARCHAR(50))
INSERT INTO @protected_indexes
SELECT Name FROM (VALUES ('TABLE1/IX_IdName'), ('TABLE2/IX_NameId')) AS tbl(Name)
SELECT @object_name = EVENTDATA().value('(/EVENT_INSTANCE/ObjectName)[1]','nvarchar(max)');
SELECT @table_name = EVENTDATA().value('(/EVENT_INSTANCE/TargetObjectName)[1]','nvarchar(max)');
SELECT @target_object_type = EVENTDATA().value('(/EVENT_INSTANCE/TargetObjectType)[1]','nvarchar(max)');
SELECT @object_type = EVENTDATA().value('(/EVENT_INSTANCE/ObjectType)[1]','nvarchar(max)');
IF @object_type = 'INDEX' AND @target_object_type = 'TABLE'
BEGIN
SET @lookup_value = @table_name + '/' + @object_name
IF EXISTS (SELECT 1 FROM @protected_indexes A WHERE A.Name = @lookup_value)
BEGIN
ROLLBACK
END
END
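With the trigger installed, a quick way to check the behavior is to attempt one of the protected operations (index and table names come from the lookup list in the trigger):
DROP INDEX IX_IdName ON TABLE1;
--should be rolled back by index_guard with
--Msg 3609: The transaction ended in the trigger. The batch has been aborted.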
I am using the below code to switch the database context to master, create a procedure, and set it up as a startup script.
BEGIN TRY
DECLARE @dbName NVARCHAR(100)
SET @dbName = DB_NAME()
USE MASTER
IF NOT EXISTS (
SELECT name
FROM sys.objects
WHERE object_id = OBJECT_ID('spSetTrustWorthyOn')
)
EXEC (
'CREATE PROCEDURE spSetTrustWorthyOn
AS
BEGIN
ALTER DATABASE [' + @dbName + '] SET TRUSTWORTHY ON
END'
)
EXECUTE sp_procoption 'spSetTrustWorthyOn'
,'startup'
,'ON'
END TRY
BEGIN CATCH
END CATCH
GO
Now the issue is that after switching to master I could not find any way to go back to my original database.
I also cannot hard-code the database name, as this is a dynamic script and we have multiple databases.
Any help will be much appreciated.
Thanks
Instead of a USE statement for the master database, qualify the catalog views and use EXEC sp_executesql statement with the master database qualified. This will avoid changing the database context in the outer script.
DECLARE
@dbName sysname = DB_NAME()
,@sql nvarchar(MAX);
BEGIN TRY
IF NOT EXISTS (
SELECT *
FROM master.sys.objects
WHERE object_id = OBJECT_ID(N'master.dbo.spSetTrustWorthyOn')
)
BEGIN
SET #sql = N'CREATE PROCEDURE spSetTrustWorthyOn
AS
BEGIN
ALTER DATABASE ' + QUOTENAME(@dbName) + ' SET TRUSTWORTHY ON;
END;';
EXECUTE master..sp_executesql @sql;
EXECUTE sp_procoption
'spSetTrustWorthyOn'
,'startup'
,'ON';
END;
END TRY
BEGIN CATCH
THROW;
END CATCH;
GO
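To verify the startup flag afterwards, you can check the ExecIsStartup object property; OBJECTPROPERTY resolves object IDs in the current database, hence the USE here:
USE master;
SELECT name
FROM sys.procedures
WHERE OBJECTPROPERTY(object_id, 'ExecIsStartup') = 1;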
My database query had been running very fast until recently, when it became very slow. No changes have occurred in the database apart from normal data growth.
I have noticed that the database statistics have "never" been updated.
Is there an easy way that I can update these statistics across my entire database so I can see if that is the problem?
I am using SQL Server 2000 Sp4.
You can use this
CREATE PROC usp_UPDATE_STATISTICS
(@dbName sysname, @sample int)
AS
SET NOCOUNT ON
DECLARE @SQL nvarchar(4000)
DECLARE @ID int
DECLARE @TableName sysname
DECLARE @RowCnt int
CREATE TABLE ##Tables
(
TableID INT IDENTITY(1, 1) NOT NULL,
TableName SYSNAME NOT NULL
)
SET @SQL = ''
SET @SQL = @SQL + 'INSERT INTO ##Tables (TableName) '
SET @SQL = @SQL + 'SELECT [name] '
SET @SQL = @SQL + 'FROM ' + @dbName + '.dbo.sysobjects '
SET @SQL = @SQL + 'WHERE xtype = ''U'' AND [name] <> ''dtproperties'''
EXEC sp_executesql @statement = @SQL
SELECT TOP 1 @ID = TableID, @TableName = TableName
FROM ##Tables
ORDER BY TableID
SET @RowCnt = @@ROWCOUNT
WHILE @RowCnt <> 0
BEGIN
SET @SQL = 'UPDATE STATISTICS ' + @dbName + '.dbo.[' + @TableName + '] WITH SAMPLE ' + CONVERT(varchar(3), @sample) + ' PERCENT'
EXEC sp_executesql @statement = @SQL
SELECT TOP 1 @ID = TableID, @TableName = TableName
FROM ##Tables
WHERE TableID > @ID
ORDER BY TableID
SET @RowCnt = @@ROWCOUNT
END
DROP TABLE ##Tables
GO
This will update stats on all the tables in the DB. You should also look at indexes and rebuild/defrag as necessary.
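A hypothetical invocation, sampling 50 percent of every user table in a database named YourDatabase (both argument values are placeholders):
EXEC usp_UPDATE_STATISTICS @dbName = 'YourDatabase', @sample = 50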
Raj
Try here
This should speed up your indices and key distribution. Re-analyzing table statistics optimises SQL Server's choice of index for queries, especially for large datasets.
Definitely make yourself a weekly task that runs automatically to update the database's statistics.
Normal data growth is reason enough to justify a slowdown of pretty much any unoptimized query.
Scalability issues related to database size won't manifest until the data volume grows.
Post your query plus rough data volume and we'll help you see what's what.
We've had a very similar problem with MSSQL 2005 and suddenly slow running queries.
Here's how we solved it: we added WITH (NOLOCK) to every SELECT statement in the query. For example:
select count(*) from SalesHistory with(nolock)
Note that nolock should also be added to nested select statements, as well as joins. Here's an article that gives more details about how performance is increased when using nolock. http://www.mollerus.net/tom/blog/2008/03/using_mssqls_nolock_for_faster_queries.html
Don't forget to keep a backup of your original query obviously. Please give it a try and let me know.