I'm using SQL Server 2012 Express and since I'm really used to PL/SQL it's a little hard to find some answers to my T-SQL questions.
What I have: about 7 tables with distinct columns and an additional one for logging inserted/updated/deleted values from the other 7.
Question: how can I create one trigger per table so that it stores the modified data in the Logs table, considering I can't use Change Data Capture because I'm on the SQL Server Express edition?
Additional info: there are only two columns in the Logs table that I need help filling; they should hold the altered data from all the columns, merged. Example below:
CREATE TABLE USER_DATA
(
ID INT IDENTITY(1,1) NOT NULL,
NAME NVARCHAR(25) NOT NULL,
PROFILE INT NOT NULL,
DATE_ADDED DATETIME2 NOT NULL
)
GO
CREATE TABLE AUDIT_LOG
(
ID INT IDENTITY(1,1) NOT NULL,
USER_ALTZ NVARCHAR(30) NOT NULL,
MACHINE SYSNAME NOT NULL,
DATE_ALTERED DATETIME2 NOT NULL,
DATA_INSERTED XML,
DATA_DELETED XML
)
GO
The columns I need help filling are the last two (DATA_INSERTED and DATA_DELETED). I'm not even sure the data type should be XML, but when someone either
INSERTS or UPDATES (new values only), the inserted/updated data from all columns of USER_DATA should somehow be merged into DATA_INSERTED, or
DELETES or UPDATES (old values only), the deleted/overwritten data from all columns of USER_DATA should somehow be merged into DATA_DELETED.
Is it possible?
Use the inserted and deleted Tables
DML trigger statements use two special tables: the deleted table and the inserted table. SQL Server automatically creates and manages these tables. You can use these temporary, memory-resident tables to test the effects of certain data modifications and to set conditions for DML trigger actions. You cannot directly modify the data in the tables or perform data definition language (DDL) operations on the tables, such as CREATE INDEX. In DML triggers, the inserted and deleted tables are primarily used to perform the following:
Extend referential integrity between tables.
Insert or update data in base tables underlying a view.
Test for errors and take action based on the error.
Find the difference between the state of a table before and after a data modification and take actions based on that difference.
And
OUTPUT Clause (Transact-SQL)
Returns information from, or expressions based on, each row affected
by an INSERT, UPDATE, DELETE, or MERGE statement. These results can be
returned to the processing application for use in such things as
confirmation messages, archiving, and other such application
requirements. The results can also be inserted into a table or table
variable. Additionally, you can capture the results of an OUTPUT
clause in a nested INSERT, UPDATE, DELETE, or MERGE statement, and
insert those results into a target table or view.
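Putting those pieces together for the original tables, here is a minimal sketch of an AFTER trigger on USER_DATA that collapses the affected rows into the two XML columns. How USER_ALTZ and MACHINE get filled is an assumption (SUSER_SNAME() and HOST_NAME() here), and FOR XML PATH is just one way to do the merging:
CREATE TRIGGER TRG_USER_DATA_AUDIT
ON USER_DATA
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- Nothing to log if the statement touched no rows
    IF NOT EXISTS (SELECT 1 FROM inserted) AND NOT EXISTS (SELECT 1 FROM deleted)
        RETURN;

    INSERT INTO AUDIT_LOG (USER_ALTZ, MACHINE, DATE_ALTERED, DATA_INSERTED, DATA_DELETED)
    SELECT
        SUSER_SNAME(),                                                       -- assumption: login that made the change
        HOST_NAME(),                                                         -- assumption: client machine
        SYSDATETIME(),
        (SELECT * FROM inserted FOR XML PATH('row'), ROOT('rows'), TYPE),    -- new values (INSERT/UPDATE)
        (SELECT * FROM deleted  FOR XML PATH('row'), ROOT('rows'), TYPE);    -- old values (DELETE/UPDATE)
END
GO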
Just posting because this is what solved my problem. As user #SeanLange said in the comments to my post, I should use an "audit", which I didn't know existed.
Googling it led me to this Stack Overflow answer, where the first link is a procedure that creates triggers and "shadow" tables, doing roughly what I needed (it doesn't merge all the values into one column, but it does the job).
Related
I'm working with a decently large software platform that utilizes Microsoft SQL Server 2008 R2. I'm investigating a very rare bug where something in the database is updating ContactInfo's primary key (CID) to 0. The table has a composite (two-column) primary key; CID also relates to a primary key in another table that stores contact information.
Is there a way to make an existing update trigger capture which stored procedure is making an update to the table? Or even better, is there a way to capture Profiler data in an audit table, such as the stored procedure execution statement with input parameters? We could continuously run a Profiler trace to try to catch the update in real time, but the infrequency of the bug would mean storing at least a few hundred gigs of trace data, which is not an option.
Below is my code for the existing audit table. There are 28 columns, so I just replaced them with [columns] for simplicity and space.
ALTER TRIGGER [dbo].[wt_ContactInfo_U]
ON [dbo].[ContactInfo]
FOR UPDATE AS
INSERT INTO [dbo].[ContactInfo_AUDIT] ( /* columns */ )
SELECT /* columns */
FROM Deleted D
WHERE
CHECKSUM( /* D.[columns] */ ) NOT IN (SELECT CHECKSUM( /* I.[columns] */ )
FROM INSERTED I
WHERE I.[CID] = D.[CID])
I have a table, TableA, with 3000 records and 25 columns. I want to have a history table, called TableA history, holding all the changes, updates and deletes so I can look them up on any day. I usually use cursors. I then thought of using triggers, which I was asked not to use. Do you have any other suggestions? Many thanks!
If you're using T-SQL / SQL Server and you can't use triggers (which are the only sure way to get every change), maybe use a stored procedure that is scheduled in a job to run every x amount of time, with the stored procedure using a MERGE statement between the two tables to pick up new records or changes. I would not suggest this if you need every single change without question.
CREATE TABLE dbo.TableA (id INT, Column1 nvarchar(30))
CREATE TABLE dbo.TableA_History (id INT, Column1 nvarchar(30), TimeStamp DateTime)
(this code isn't production, just the general idea)
Put the following code inside a stored procedure and use a Sql Server Job with a schedule on it.
MERGE INTO dbo.TableA_History
USING dbo.TableA
ON TableA_History.id = TableA.id AND TableA_History.Column1 = TableA.Column1
WHEN NOT MATCHED BY TARGET THEN
INSERT (id, Column1, TimeStamp) VALUES (TableA.id, TableA.Column1, GETDATE());
So basically, if the record either doesn't exist or doesn't match (meaning a column changed), it gets inserted into the history table.
It is possible to create history without triggers in some cases, even if you are not using SQL Server 2016 and system-versioned tables are not available.
In some cases, when you can identify for sure which routines are modifying your table, you can create history using the OUTPUT INTO clause.
For example,
INSERT INTO [dbo].[MainTable]
OUTPUT inserted.[]
,...
,'I'
,GETUTCDATE()
,@CurrentUserID
INTO [dbo].[HistoryTable]
SELECT *
FROM ... ;
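For a more concrete picture of that shape, here is a hypothetical, filled-in version; MainTable, HistoryTable and all the column names below are invented for illustration:
DECLARE @CurrentUserID int = 42;  -- hypothetical: id of the user running the routine

INSERT INTO [dbo].[MainTable] ([Name], [Amount])
OUTPUT inserted.[ID]
      ,inserted.[Name]
      ,inserted.[Amount]
      ,'I'                        -- 'I' = insert
      ,GETUTCDATE()
      ,@CurrentUserID
INTO [dbo].[HistoryTable] ([MainTableID], [Name], [Amount], [Action], [ChangedAtUtc], [ChangedByUserID])
VALUES (N'Example', 10.00);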
In routines where you are using MERGE, I like that we can use $action:
Is available only for the MERGE statement. Specifies a column of type
nvarchar(10) in the OUTPUT clause in a MERGE statement that returns
one of three values for each row: 'INSERT', 'UPDATE', or 'DELETE',
according to the action that was performed on that row.
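A rough sketch of how that looks when feeding a history table (all table and column names below are invented; @CurrentUserID is assumed to come from the calling routine):
DECLARE @CurrentUserID int = 42;  -- hypothetical caller id

MERGE [dbo].[MainTable] AS Target
USING [dbo].[SourceTable] AS Source
    ON Target.[ID] = Source.[ID]
WHEN MATCHED THEN
    UPDATE SET Target.[Name] = Source.[Name]
WHEN NOT MATCHED BY TARGET THEN
    INSERT ([ID], [Name]) VALUES (Source.[ID], Source.[Name])
WHEN NOT MATCHED BY SOURCE THEN
    DELETE
OUTPUT $action                                   -- 'INSERT', 'UPDATE' or 'DELETE'
      ,COALESCE(inserted.[ID], deleted.[ID])
      ,deleted.[Name]                            -- old value (NULL on inserts)
      ,inserted.[Name]                           -- new value (NULL on deletes)
      ,GETUTCDATE()
      ,@CurrentUserID
INTO [dbo].[HistoryTable] ([Action], [MainTableID], [OldName], [NewName], [ChangedAtUtc], [ChangedByUserID]);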
It's very handy that we can add the user who is modifying the table. Using triggers, you need to use session context or a session variable to pass the user. With system-versioned tables, you need to add an additional column to the main table in order to log the user, as versioning only logs the current table columns (at least for now).
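For the trigger route, a small sketch of passing the user along (SESSION_CONTEXT needs SQL Server 2016+; on older versions CONTEXT_INFO plays the same role):
-- In the application / calling routine, before the DML:
EXEC sp_set_session_context @key = N'CurrentUserID', @value = 42;

-- Inside the trigger, read it back when writing the history row:
SELECT CAST(SESSION_CONTEXT(N'CurrentUserID') AS int) AS CurrentUserID;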
So, basically, it depends on your data and application. If you have many sources of CRUD over the table, a trigger is the most secure way. If your table is very big and heavily used, using MERGE is not good, as it may cause blocking and harm performance.
In our databases we are using all of the methods depending on the situation:
triggers for legacy
system-versioning for new development
direct OUTPUT into the history table, when we are sure the data is modified only by a given set of routines
I have a unique requirement - I have a data list in Excel format and I import this data into SQL Server 2008 R2, once every year, using SQL Server's import functionality. In the table "Patient_Info", I have a primary key set on the column "MemberID", and when I import the data without any duplicates, all is well.
But sometimes, when I get this data, some of the patients' info gets repeated with an updated address / telephone, etc., with the same MemberID. Since I set MemberID as the primary key, that record gets left out of the import, and thus I don't have an updated record for that patient.
EDIT
I am not sure how to achieve this - updating the rows that already have existing MemberIDs - and any pointer is greatly appreciated.
This is not a terribly unique requirement.
One acceptable pattern you can use to resolve this problem would be to import your data into a "staging" table. The staging table would have the same structure as the target table to which you're importing, but it would be a heap - it would not have a primary key.
Once the data is imported, you would then use queries to consolidate newer data records with older data records by MemberID.
Once you've consolidated all same MemberID records, there will be no duplicate MemberID values, and you can then insert all the staging table records into the target table.
EDIT
As #Panagiotis Kanavos suggests, you can use a SQL MERGE statement to both insert new records and update existing records from your staging table to the target table.
Assume that the Staging table is named Patient_Info_Stage, the target table is named Patient_Info, and that these tables have similar schemas. Also assume that field MemberId is the primary key of table Patient_Info.
The following MERGE statement will merge the staging table data into the target table:
BEGIN TRAN;
MERGE Patient_Info WITH (SERIALIZABLE) AS Target
USING Patient_Info_Stage AS Source
ON Target.MemberId = Source.MemberId
WHEN MATCHED THEN UPDATE
SET Target.FirstName = Source.FirstName
,Target.LastName = Source.LastName
,Target.Address = Source.Address
,Target.PhoneNumber = Source.PhoneNumber
WHEN NOT MATCHED THEN INSERT (
MemberID
,FirstName
,LastName
,Address
,PhoneNumber
) Values (
Source.MemberId
,Source.FirstName
,Source.LastName
,Source.Address
,Source.PhoneNumber
);
COMMIT TRAN;
NOTE: The T-SQL MERGE operation is not immune to race conditions under concurrency. To ensure it works properly, do these things:
Ensure that your SQL Server is up-to-date with service packs and patches (current rev of SQL Server 2008 R2 is SP3, version 10.50.6000.34).
Wrap your MERGE in a transaction (BEGIN TRAN;, COMMIT TRAN;)
Use SERIALIZABLE hint to help prevent a potential race condition with the T-SQL MERGE statement.
I have a Microsoft SQL Server 2012 database with multiple tables.
All tables contain the same two columns, DataRowModified (type datetime) and DataRowLastAuthor (type nvarchar(MAX)). And no, I can't put all those columns into a separate table; it's a requirement that each table directly contains those columns.
I wrote the trigger below for the table Events to automatically update the values of those two columns whenever a row gets updated:
CREATE TRIGGER [dbo].[Trigger_Events_UpdateMetadata]
ON [dbo].[Events]
FOR UPDATE
AS
BEGIN
UPDATE [dbo].[Events]
SET [DataRowModified] = GETDATE(),
[DataRowLastAuthor] = ORIGINAL_LOGIN()
WHERE [Id] IN (SELECT [Id] FROM INSERTED)
END
Now my question is whether I have to copy (and rename) this trigger for every table I have to use it with, or can I somehow write a global trigger that works on all (or a specified set of) tables? It has to know in which table/row the update happened though, because it has to modify it.
What would be the easiest way to implement an automatically maintained LastAuthor and LastModificationDate column into many tables as described?
A trigger in SQL Server is always bound to a single table - you cannot have "global" triggers or triggers attached to multiple tables at once.
If you need a trigger on your 50 tables, you need to write 50 triggers, one for each table. There is no way around this.
The only way to avoid this would be to update those columns in your database layer of your application, so that those values would already be present when you save your row of data. Things like Entity Framework allow such "bulk operations" on multiple entities to e.g. update a last modified date and last user to modify the entity.
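If writing the 50 triggers by hand is the pain point, one option (a sketch, not a drop-in solution) is to generate them from a template; this assumes every audited table has an [Id] key column plus the two metadata columns:
DECLARE @table sysname, @sql nvarchar(max);

DECLARE audited_tables CURSOR FOR
    SELECT name
    FROM sys.tables
    WHERE name IN (N'Events', N'Orders');   -- hypothetical list of tables to cover

OPEN audited_tables;
FETCH NEXT FROM audited_tables INTO @table;

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'
CREATE TRIGGER [dbo].[Trigger_' + @table + N'_UpdateMetadata]
ON [dbo].' + QUOTENAME(@table) + N'
FOR UPDATE
AS
BEGIN
    UPDATE T
    SET [DataRowModified]   = GETDATE(),
        [DataRowLastAuthor] = ORIGINAL_LOGIN()
    FROM [dbo].' + QUOTENAME(@table) + N' AS T
    JOIN inserted AS I ON I.[Id] = T.[Id];
END';

    EXEC sp_executesql @sql;   -- each CREATE TRIGGER runs as its own batch
    FETCH NEXT FROM audited_tables INTO @table;
END

CLOSE audited_tables;
DEALLOCATE audited_tables;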
No, but multiple triggers could invoke the same stored procedure.
I'm working on an application that is used to store payment information. We currently have a Transaction Audit table that works thusly:
Anytime a field changes in any table under audit, we write an audit row that contains the table name, the field name, the old value, the new value and the timestamp. One insert takes place per field changed, per row being updated.
I've always avoided Triggers in SQL Server since they're hard to document and can make troubleshooting more difficult as well, but is this a good use case for a trigger?
Currently the application determines all audit rows that need to be added on its own and sends hundreds of thousands of audit row INSERT statements to the server at times. This is really slow and not really maintainable for us.
Take a look at Change Data Capture if you are running Enterprise edition. It provides the DML audit trail you're looking for without the overhead associated with triggers or custom user/timestamp logging.
I have worked on financial systems where each table under audit had its own audit table (e.g. for USERS there was USERS_AUDIT), with the same schema (minus the primary key) plus:
A char(1) column to indicate the type of change ('I' = insert, 'U' = update, 'D' = delete)
A datetime column with a default value of GETDATE()
A varchar(255) column indicating the user who made the change (defaulting to USER_ID())
These tables were always inserted into (append-only) by triggers on the table under audit. This will result in fewer inserts for you and better performance, at the cost of having to administer many more audit tables.
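A sketch of what one of those trigger/audit-table pairs could look like, using an invented USERS table with just UserID and UserName (the user column here defaults to SUSER_SNAME(), which is an assumption):
CREATE TABLE dbo.USERS_AUDIT
(
    UserID     int           NOT NULL,
    UserName   nvarchar(100) NULL,
    ChangeType char(1)       NOT NULL,                        -- 'I' = insert, 'U' = update, 'D' = delete
    ChangedAt  datetime      NOT NULL DEFAULT GETDATE(),
    ChangedBy  varchar(255)  NOT NULL DEFAULT SUSER_SNAME()   -- assumption; other defaults work too
);
GO

CREATE TRIGGER dbo.TR_USERS_AUDIT
ON dbo.USERS
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- Updates show up in both inserted and deleted; log the new version as 'U'
    INSERT INTO dbo.USERS_AUDIT (UserID, UserName, ChangeType)
    SELECT i.UserID, i.UserName,
           CASE WHEN EXISTS (SELECT 1 FROM deleted) THEN 'U' ELSE 'I' END
    FROM inserted AS i;

    -- Pure deletes only show up in deleted
    INSERT INTO dbo.USERS_AUDIT (UserID, UserName, ChangeType)
    SELECT d.UserID, d.UserName, 'D'
    FROM deleted AS d
    WHERE NOT EXISTS (SELECT 1 FROM inserted);
END
GO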
I've implemented audit logic in SPROCS before, but same idea applies to doing it in Triggers.
Working Table: (id, field1, field2, field3, ... field-n)
History Table: (userID, Date/time, action (CUD), id, field1, field2, field3, ... field-n)
This also allows for easy querying to see how data historically changed.
Each time a row in a table is changed, a record is created in the History table.
Some of our tables are very large - 100+ fields - so 100+ per-field inserts would be too intense a load, and there would also be no meaningful way to quickly see what happened to the data.