SQL Server system table with timestamp of the last inserted row of each table

Is there any system table or DMV in SQL Server 2008 R2 that contains information about the last DML statement (excluding SELECT) issued against each user table?
I see that sys.tables has a modify_date column, but that only tracks table alterations (DDL statements).
I would rather not create triggers on every table in the database, nor a database-level trigger, for this purpose.
The reason is that I would like to see when the last INSERT, UPDATE, or DELETE was issued against each table, so I can decide whether some tables are no longer used and can be dropped. This is for a DWH database, where every table is supposed to receive at least one of these three operations at least once a week/month/quarter/year.

Option 1:
Enable Change Data Capture (CDC) for your database.
See the link below for CDC:
http://technet.microsoft.com/en-us/library/cc627369%28v=sql.105%29.aspx
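A minimal sketch of enabling it, assuming placeholder database/table names (note that CDC on SQL Server 2008 R2 requires Enterprise, Datacenter, or Developer edition):

-- Enable CDC at the database level (requires sysadmin).
USE MyDwhDb;
GO
EXEC sys.sp_cdc_enable_db;
GO

-- Enable CDC for one table; repeat for each table you want to track.
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'MyTable',
    @role_name     = NULL;
GO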
Option 2:
Create a trigger on each table and log to a common table whenever an INSERT/UPDATE/DELETE happens in any of them (the old, traditional method).
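Not from the original answer, but a minimal sketch of the trigger-per-table approach, assuming a hypothetical dbo.MyTable and a shared log table (one such trigger would be created per tracked table):

CREATE TABLE dbo.DmlLastActivity
(
    TableName SYSNAME   NOT NULL PRIMARY KEY,
    LastDmlAt DATETIME2 NOT NULL
);
GO

CREATE TRIGGER trg_MyTable_DmlLog
ON dbo.MyTable
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- Record the time of the last DML statement against this table.
    UPDATE dbo.DmlLastActivity
    SET LastDmlAt = SYSDATETIME()
    WHERE TableName = N'dbo.MyTable';

    -- First DML ever against this table: seed the row instead.
    IF @@ROWCOUNT = 0
        INSERT INTO dbo.DmlLastActivity (TableName, LastDmlAt)
        VALUES (N'dbo.MyTable', SYSDATETIME());
END
GO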

Related

Using a SQL Server trigger to find what procedure makes an update

I'm working with a decently large software platform that uses Microsoft SQL Server 2008 R2. I'm investigating a very rare bug where something in the database is updating ContactInfo's primary key (CID) to 0. The table has a two-column composite primary key; CID relates to a primary key in another table that stores contact information.
Is there a way to make an existing update trigger capture which stored procedure is making an update to the table? Or, even better, is there a way to capture Profiler data in an audit table, such as the stored procedure execution statement with input parameters? We could continuously run a Profiler trace to try to catch the update in real time, but the infrequency of the bug would result in at least a few hundred gigs of trace data being stored, which is not an option.
Below is my code for the existing audit table. There are 28 columns, so I just replaced them with /* columns */ placeholders for simplicity and space.
ALTER TRIGGER [dbo].[wt_ContactInfo_U]
ON [dbo].[ContactInfo]
FOR UPDATE AS
INSERT INTO [dbo].[ContactInfo_AUDIT] ( /* columns */ )
SELECT /* columns */
FROM Deleted D
WHERE CHECKSUM( /* D.[columns] */ ) NOT IN
      (SELECT CHECKSUM( /* I.[columns] */ )
       FROM Inserted I
       WHERE I.[CID] = D.[CID])
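One hedged approach, not from the original thread: from inside the trigger, read the most recent batch the session received from the client, which at that point is the batch whose statement fired the trigger. This assumes VIEW SERVER STATE permission and a hypothetical SourceSql NVARCHAR(MAX) column added to the audit table:

DECLARE @SourceSql NVARCHAR(MAX);

-- Text of the most recent batch received on this session,
-- i.e. the statement/procedure that caused this trigger to fire.
SELECT @SourceSql = t.text
FROM sys.dm_exec_connections c
CROSS APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) t
WHERE c.session_id = @@SPID;

-- @SourceSql can then be written to the audit row alongside the /* columns */.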

Trigger to log inserted/updated/deleted values SQL Server 2012

I'm using SQL Server 2012 Express, and since I'm really used to PL/SQL, it's a little hard to find answers to some of my T-SQL questions.
What I have: about 7 tables with distinct columns, and an additional one for logging inserted/updated/deleted values from the other 7.
Question: how can I create one trigger per table so that it stores the modified data in the Log table, considering I can't use Change Data Capture because I'm on the SQL Server Express edition?
Additional info: there are only two columns in the Logs table that I need help filling, holding the altered data from all the columns merged; example below:
CREATE TABLE USER_DATA
(
ID INT IDENTITY(1,1) NOT NULL,
NAME NVARCHAR(25) NOT NULL,
PROFILE INT NOT NULL,
DATE_ADDED DATETIME2 NOT NULL
)
GO
CREATE TABLE AUDIT_LOG
(
ID INT IDENTITY(1,1) NOT NULL,
USER_ALTZ NVARCHAR(30) NOT NULL,
MACHINE SYSNAME NOT NULL,
DATE_ALTERED DATETIME2 NOT NULL,
DATA_INSERTED XML,
DATA_DELETED XML
)
GO
The columns I need help filling are the last two (DATA_INSERTED and DATA_DELETED). I'm not even sure the data type should be XML, but when someone either
INSERTs or UPDATEs (new values only), all data inserted/updated across all columns of USER_DATA should be merged somehow into DATA_INSERTED;
DELETEs or UPDATEs (old values only), all data deleted/updated across all columns of USER_DATA should be merged somehow into DATA_DELETED.
Is it possible?
Use the inserted and deleted Tables
DML trigger statements use two special tables: the deleted table and the inserted table. SQL Server automatically creates and manages these tables. You can use these temporary, memory-resident tables to test the effects of certain data modifications and to set conditions for DML trigger actions. You cannot directly modify the data in the tables or perform data definition language (DDL) operations on the tables, such as CREATE INDEX. In DML triggers, the inserted and deleted tables are primarily used to perform the following:
- Extend referential integrity between tables.
- Insert or update data in base tables underlying a view.
- Test for errors and take action based on the error.
- Find the difference between the state of a table before and after a data modification and take actions based on that difference.
And
OUTPUT Clause (Transact-SQL)
Returns information from, or expressions based on, each row affected by an INSERT, UPDATE, DELETE, or MERGE statement. These results can be returned to the processing application for use in such things as confirmation messages, archiving, and other such application requirements. The results can also be inserted into a table or table variable. Additionally, you can capture the results of an OUTPUT clause in a nested INSERT, UPDATE, DELETE, or MERGE statement, and insert those results into a target table or view.
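Putting the two together, a minimal sketch for the USER_DATA table above, using FOR XML as one possible way to merge all columns into the two XML columns (one such trigger would be created per table; the column names follow the question):

CREATE TRIGGER TRG_USER_DATA_AUDIT
ON USER_DATA
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- One audit row per statement; all affected columns are merged into XML.
    INSERT INTO AUDIT_LOG (USER_ALTZ, MACHINE, DATE_ALTERED, DATA_INSERTED, DATA_DELETED)
    SELECT
        SUSER_SNAME(),   -- login that made the change
        HOST_NAME(),     -- machine the change came from
        SYSDATETIME(),
        (SELECT * FROM inserted FOR XML AUTO, ROOT('rows'), TYPE),  -- NULL on DELETE
        (SELECT * FROM deleted  FOR XML AUTO, ROOT('rows'), TYPE);  -- NULL on INSERT
END
GO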
Just posting because this is what solved my problem. As user @SeanLange said in the comments to my post, he suggested using an "audit", which I didn't know existed.
Googling it led me to this Stack Overflow answer, where the first link is a procedure that creates triggers and "shadow" tables, doing sort of what I needed (it didn't merge all the values into one column, but it fits the job).

Can the INSERTED table contain values from two simultaneous transactions that fired the same trigger? SQL Server 2012

I have the same application on different hosts bulk inserting into the same table. Each bulk insert fires a trigger. The structure of the table is as follows:
Hostname Quantity
---------------------
BOX_1 10
BOX_1 15
BOX_1 20
BOX_1 11
If I have the following code as part of the trigger:
DECLARE @hostname VARCHAR(20)
SELECT @hostname = Hostname
FROM INSERTED
Each bulk insert contains only one hostname, since the application only captures data from the box it's running on, but if two machines bulk insert simultaneously into the same table, could the INSERTED table be a combination of bulk inserts from different machines?
Or will the triggers execute sequentially, meaning the INSERTED table will always contain data from only one application at a time?
I need to know whether my code setting the @hostname variable has any possibility of not being confined to just one choice.
The INSERTED (and DELETED) table will only ever contain rows from the statement that caused the trigger to fire.
See here: https://msdn.microsoft.com/en-us/library/ms191300(v=sql.110).aspx
The inserted table stores copies of the affected rows during INSERT and UPDATE statements. During an insert or update transaction, new rows are added to both the inserted table and the trigger table. The rows in the inserted table are copies of the new rows in the trigger table.
The rows in these tables are effectively scoped to the insert/update/delete statement that caused the trigger to fire initially.
See also here for some more info: SQL Server Trigger Isolation / Scope Documentation
But bear in mind in your trigger design that some other insert or update operation (a manual bulk insert, or data maintenance) might cause your trigger to fire, and the assumption about the hostname may no longer hold. Probably this will be years down the line after you've moved on or forgotten about this trigger!
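If that is a concern, a hedged defensive alternative (not from the original answer) is to avoid the single-variable assignment entirely and process the inserted rows per hostname:

-- Set-based handling: remains correct even if a single statement
-- ever carries rows for more than one hostname.
SELECT Hostname, SUM(Quantity) AS TotalQuantity
FROM inserted
GROUP BY Hostname;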

Trigger on table

I am interested to know how trigger execution works in SQL Server.
I created an INSERT trigger on a table, and then I insert 10 records from another table.
How many times is the trigger called: one time or 10 times?
And how many records will be available in the INSERTED table?
Triggers in SQL Server are called once per statement - so in your case:
the trigger is called ONCE for your INSERT statement
the pseudo table Inserted will contain all the 10 rows that you're inserting
See Data Points: Exploring SQL Server Triggers on MSDN Magazine for a more in-depth look at triggers
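As an illustrative sketch (the table names are hypothetical), a trigger like the one below prints a single message for a 10-row INSERT ... SELECT:

CREATE TRIGGER trg_TargetTable_Ins
ON dbo.TargetTable
AFTER INSERT
AS
BEGIN
    -- Counts all rows affected by the firing statement.
    DECLARE @rows INT = (SELECT COUNT(*) FROM inserted);
    PRINT 'Trigger fired once; rows in inserted: ' + CAST(@rows AS VARCHAR(10));
END
GO

-- Fires the trigger ONCE, with all 10 rows visible in inserted.
INSERT INTO dbo.TargetTable (SomeColumn)
SELECT TOP (10) SomeColumn FROM dbo.SourceTable;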

Changing column constraint null/not null = rowguid replication error

I have a database running under SQL Server 2005 with merge replication. I want to change some of the FK columns to be NOT NULL, as they should always have a value. SQL Server won't let me do that, though; this is what it says:
Unable to modify table. It is invalid to drop the default constraint on the rowguid column that is used by merge replication. The schema change failed during execution of an internal replication procedure. For corrective action, see the other error messages that accompany this error message. The transaction ended in the trigger. The batch has been aborted.
I am not trying to change the constraints on the rowguid column at all, only on another column that is acting as an FK. I want to set the other columns to NOT NULL because the record doesn't make sense without that information (e.g. on a customer, the customer name).
Questions:
Is there a way to update columns to be 'not null' without turning off replication then turning it back on again?
Is this even the best way to do this - should I be using a constraint instead?
Apparently SSMS makes changes to tables by dropping them and recreating them, so I just needed to make the change using a T-SQL statement:
ALTER TABLE dbo.MyTable ALTER COLUMN MyColumn nvarchar(50) NOT NULL
You need to script out your change as T-SQL statements, as SQL Server Management Studio will look to drop and re-create the table rather than simply apply the column change in place.
If you are adding a new column, you will also need to add it to your Publications.
Please note that changing a column in this manner can be detrimental to the performance of Replication. Depending on the size of the table you are altering, it can lead to a lot of data being replicated. Consider that although your table modification can be performed in a single statement, if 1 million rows are affected then 1 million updates will be generated at the Subscriber, NOT a single UPDATE statement as is commonly thought.
The hands-on, improved-performance approach:
To perform this exercise you need to:
1. Back up your Replication environment by scripting out your entire configuration.
2. Remove the table from Replication at both Publishers/Subscribers.
3. Apply the column change at each Publisher/Subscriber.
4. Apply the update locally at each Publisher/Subscriber.
5. Add the table back into Replication.
6. Validate that transactions are being replicated.
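For steps 2 and 5, a hedged sketch using the merge replication stored procedures (the publication and article names are placeholders; verify the parameters against your own topology before running anything):

-- Step 2: drop the article from the merge publication.
EXEC sp_dropmergearticle
    @publication = N'MyMergePublication',
    @article     = N'MyTable',
    @force_invalidate_snapshot = 1;

-- Steps 3 and 4: apply the ALTER TABLE ... ALTER COLUMN at each node.

-- Step 5: add the table back as an article.
EXEC sp_addmergearticle
    @publication   = N'MyMergePublication',
    @article       = N'MyTable',
    @source_object = N'MyTable';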
