I need to create a SQL Server database that will receive updates from another database via some replication mechanism. I need to write insert, update and delete triggers that will execute when this replication occurs.
I have experience with triggers but not with replication.
Should I use Transactional or Merge replication, or does it matter?
Will a trigger designed to run when a simple SQL insert statement is executed also run when replication occurs?
The CREATE TRIGGER syntax on MSDN:
CREATE TRIGGER
...
[ NOT FOR REPLICATION ]
This indicates that executing on replication is the default behaviour for triggers, and can be disabled by specifying NOT FOR REPLICATION.
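For illustration, here is a minimal sketch (the table, trigger and audit-table names are made up) showing where the clause sits in the syntax:

CREATE TRIGGER trg_Orders_Insert
ON dbo.Orders
AFTER INSERT
NOT FOR REPLICATION   -- omit this line if the trigger should also fire for rows applied by the replication agent
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.OrdersAudit (OrderID, AuditedAt)
    SELECT OrderID, GETDATE()
    FROM inserted;
END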
Well it depends.
If the updates that you intend to apply are to isolated tables, i.e. all the data for a given table comes from the publisher only, then you can use transactional replication.
If, on the other hand, you are looking to combine table content, e.g. an orders table with orders being placed at both sites, then you would want to look into using merge replication.
With regard to triggers, there is a "not for replication" configuration that you can apply to control their behaviour. See the following article for reference.
http://msdn.microsoft.com/en-us/library/ms152529.aspx
Cheers, John
It's hard to answer your question with the information you've provided. I added a few comments to your question asking for clarifying information.
Here is an article on MSDN that should help: http://msdn.microsoft.com/en-us/library/ms152529.aspx
By default, triggers will fire during replication unless "NOT FOR REPLICATION" is specified. They work the same way as they do for simple insert statements.
Transactional and Merge replication are very different, but triggers behave similarly for both options.
There are a few alternative options open to you instead of triggers.
You could modify the replication procedures on the subscriber (destination) database.
If using SQL Server 2008, you can use Change Tracking on the subscriber for the tables you want to "do something with", and then create a batch process to deal with "set based" data instead of individual rows, e.g. an SSIS package that runs every X.
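As a rough sketch of the Change Tracking route (SubscriberDb, dbo.Orders and the OrderID key are assumed names):

ALTER DATABASE SubscriberDb
    SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

ALTER TABLE dbo.Orders ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON);

-- The batch job (e.g. the SSIS package) then reads everything changed since its last run:
DECLARE @last_sync bigint = 0;   -- persist this value between runs
SELECT ct.SYS_CHANGE_OPERATION, ct.OrderID
FROM CHANGETABLE(CHANGES dbo.Orders, @last_sync) AS ct;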
Related
I am looking for a way to capture the name of a table on CREATE TABLE, DROP TABLE and other operations in my Postgres database.
I looked into event triggers and they seem to only be able to capture these events on ddl_command_end (https://www.postgresql.org/docs/current/functions-event-triggers.html#PG-EVENT-TRIGGER-SQL-DROP-FUNCTIONS), which should work for the CREATE case but not all of the others.
So I wanted to ask if there is a possibility to either get the data from a dropped table (as I would need it) or to get the information before the event happens.
Thank you for your help!
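For reference, a minimal sketch of the sql_drop route mentioned in the question (ddl_log is an assumed table name; this records what was dropped but cannot recover the table's rows):

CREATE TABLE IF NOT EXISTS ddl_log (
    object_identity text,
    object_type     text,
    dropped_at      timestamptz DEFAULT now()
);

CREATE OR REPLACE FUNCTION log_dropped_tables() RETURNS event_trigger AS $$
BEGIN
    INSERT INTO ddl_log (object_identity, object_type)
    SELECT object_identity, object_type
    FROM pg_event_trigger_dropped_objects()
    WHERE object_type = 'table';
END;
$$ LANGUAGE plpgsql;

CREATE EVENT TRIGGER trg_log_drops
    ON sql_drop
    EXECUTE FUNCTION log_dropped_tables();   -- use EXECUTE PROCEDURE on PostgreSQL 10 and older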
In Oracle we have an option to recover a dropped table, but PostgreSQL does not have that option. The only thing you can do for this kind of situation is enable archiving and follow the PITR (point-in-time recovery) steps. That could be done on another server or on the server your database is running on. It depends on the significance of the dropped table and the database.
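A rough sketch of the archiving prerequisites (recent PostgreSQL; the archive path is an assumption, and a restart plus a base backup are still needed before PITR is possible):

ALTER SYSTEM SET wal_level = 'replica';
ALTER SYSTEM SET archive_mode = 'on';
ALTER SYSTEM SET archive_command = 'cp %p /var/lib/postgresql/wal_archive/%f';
-- Restart the server, take a base backup (e.g. with pg_basebackup),
-- then follow the documented PITR procedure to restore to a point before the DROP.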
Is there any way to log the changes made in Schema of a Table whenever I do the schema changes?
I was reading an article here about DDL Triggers. But it does not tell about the specific changes made in schema of a table.
This would be very difficult, as quite often in SSMS the table is actually dropped and rebuilt in the background (depending on the complexity of the schema change and whether or not you enabled the "Prevent saving changes that require the table to be re-created" option in SSMS). Logging all the different types of changes would be a nightmare: constraints being dropped only to be re-created, bulk re-inserts, renames and so on, when all you might have done is rearrange columns in a joined table.
If you're serious about tracking schema changes, I'd strongly recommend you script the schema (using the Generate Scripts option in SSMS), check the resulting file into SVN / SourceSafe / TFS, and use the many comparison tools available for those systems.
Or you can use third-party products that do all this for you, such as Red Gate's SQL Source Control:
http://www.red-gate.com/products/sql-development/sql-source-control/
Edit: You may find this useful - it makes use of the Service Broker (SQL 2005+) and SSB queues:
http://www.mssqltips.com/sqlservertip/2121/event-notifications-in-sql-server-for-tracking-changes/
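As a sketch of what that looks like (the queue, service and notification names are made up; Service Broker must be enabled in the database):

CREATE QUEUE SchemaChangeQueue;

CREATE SERVICE SchemaChangeService
    ON QUEUE SchemaChangeQueue
    ([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]);

CREATE EVENT NOTIFICATION enTableChanges
    ON DATABASE
    FOR DDL_TABLE_EVENTS
    TO SERVICE 'SchemaChangeService', 'current database';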
For this issue I would probably use Event Notifications, although DDL triggers, in my opinion, do tell you about the specific changes made to a table. Here is a sample trigger definition:
CREATE TRIGGER tr_DDLNotikums
ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
    INSERT INTO LogTable (XmlColumn)   -- assumes a logging table with an XML column
    SELECT EVENTDATA();
Use a DDL trigger in the format below:
CREATE TRIGGER tr_DDL_Database ON DATABASE
FOR DDL_TABLE_EVENTS   -- fires on CREATE, ALTER and DROP TABLE
AS
BEGIN
    INSERT INTO LogTable (XmlColumn)   -- LogTable needs an XML column to hold the event data
    SELECT EVENTDATA();
END
Is there a way to see the history or any other information of insertions into a specific table of an SQL Server database?
Unless you are recording this information somewhere using a trigger, you would need some way of looking at the information in the transaction log. There are commercial tools like Lumigent for this.
You could use a trigger
Create a trigger on the table (watching for inserts, updates, and deletes). The trigger would insert into another table (a history table).
This adds extra overhead, though, so I wouldn't do this on a really heavily updated table.
Look at this page for an example of how this is done.
This page has some code that generates the audit trail code for you.
Here is another SOF question about doing this using triggers.
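As a rough illustration of the trigger idea above, a minimal sketch (dbo.Orders, its columns and dbo.OrdersHistory are made-up names):

CREATE TRIGGER trg_Orders_History
ON dbo.Orders
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.OrdersHistory (OrderID, CustomerID, Action, ChangedAt)
    SELECT OrderID, CustomerID, 'INSERT', GETDATE()
    FROM inserted;
END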
If you are using SQL Server 2008, you can use the new Change Data Capture feature. This saves you from having to write triggers on all your tables.
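A minimal sketch of enabling it (database, schema and table names are assumptions; CDC needs SQL Server Agent running and an Enterprise/Developer edition):

USE MyDatabase;
EXEC sys.sp_cdc_enable_db;

EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'Orders',
    @role_name     = NULL;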
For 2005 use triggers, for 2008 you can use the change data capture.
Aside from using a trigger, you could do something like add a column named "InsertedDate" and record the current date there. This would require you to do your insertions through a stored procedure, though.
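A minimal sketch of that suggestion (table, column type and procedure name are assumptions):

ALTER TABLE dbo.Orders ADD InsertedDate datetime NULL;
GO
CREATE PROCEDURE dbo.usp_InsertOrder
    @CustomerID int
AS
BEGIN
    INSERT INTO dbo.Orders (CustomerID, InsertedDate)
    VALUES (@CustomerID, GETDATE());
END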
Is there a way to replicate a sql server database but not push out deletes to the subscribers?
You don't mention which version of SQL Server you're running, but Andy Warren wrote an article on configuring INSERT, UPDATE, and DELETE behaviour in SQL Server 2005. You can configure this through the GUI, using his instructions:
http://www.sqlservercentral.com/articles/Replication/3202/
It's tempting to 'intervene' in a normal replication and 'disarm' the delete stored procedures on the subscriber side, but this leaves no option to recover from replication failure. If the replication tries to recover, a reinitialize may be needed, and this will drop any 'stale' data that the replication agent considers deleted.
An alternative is to use a normal replication, plus a script that generates insert and update triggers on all tables in the subscriber database, which insert/update that data into yet a third database. This way the third DB collects all the data that ever existed, the second DB can re-initialize its subscription if it needs to (when you do, just remember that bulk inserts don't fire the insert trigger, so check for new data and add it to the third DB), and the first DB doesn't have to perform the extra work that the triggers involve.
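A sketch of one such generated trigger on a subscriber table (ArchiveDb, dbo.Orders, OrderID and Amount are assumed names):

CREATE TRIGGER trg_Orders_CopyToArchive
ON dbo.Orders
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- Update rows that already exist in the third database...
    UPDATE a
    SET    a.Amount = i.Amount
    FROM   ArchiveDb.dbo.Orders AS a
    JOIN   inserted AS i ON i.OrderID = a.OrderID;

    -- ...and insert the ones that do not.
    INSERT INTO ArchiveDb.dbo.Orders (OrderID, Amount)
    SELECT i.OrderID, i.Amount
    FROM   inserted AS i
    WHERE  NOT EXISTS (SELECT 1 FROM ArchiveDb.dbo.Orders AS a WHERE a.OrderID = i.OrderID);
END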
Do this: drop the article. Create a new stored procedure in the corresponding database that mimics the system stored procedure (sp_del...) and takes the same parameters but does nothing. Add the article again, and set the delete stored procedure under the article's properties to the new delete stored procedure that you created.
Or you can select "Do not replicate DELETE statements". I think that works, but I haven't tried it.
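A rough sketch of the replacement procedure from the first suggestion (the name and parameter are entirely made up; the real parameter list must match the article's primary key columns):

CREATE PROCEDURE dbo.usp_IgnoreReplicatedDelete
    @pkc1 int
AS
BEGIN
    -- Intentionally does nothing, so replicated DELETEs are swallowed at the subscriber.
    RETURN 0;
END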
I have a problem with a database at my work. There is currently auditing in place, but it's clunky, requires a lot of maintenance, and it falls short in a few regards. So I am replacing it.
I want to do this in as generic a way as possible, and I have designed the tables and how everything will link and be updated.
Now, that's all fine and good, but I want a generic way to insert records into these audit tables (without having to enter a command for each column in each table being changed).
Is there any way within a stored procedure to iterate over all the columns in a table? I would like to write this in such a way that it will work with several tables and automatically pick up and audit added columns and such.
Any ideas?
EDIT: I guess I should clarify. I will be auditing the data that is in the tables, but I will be using the same table(s) to store the audited data for every table in the database.
And I cannot use triggers because an update usually occurs across multiple tables, and I would like all of these updates to be part of a single change set.
That is not a problem, because I can do all the updates from within a single stored procedure. I would just prefer something like a loop where I can get all the updated fields, figure out which ones changed, and then insert the changed ones into the audit table.
And I would like to do this without having a long list of IF statements and INSERT statements for each column. (By doing this in a generic loop, it will handle added columns automatically and not be bothered by deleted columns.)
By "added columns" I guess you are looking to audit DDL. If you use SQL 2005, then you want this link.
If you don't use SQL 2005, then you probably want to use one of the many SQL schema comparison tools; Red Gate's SQL tool set probably has something for this.
If you don't have $ for tools, then you might just want to run periodic queries against information_schema.tables and information_schema.columns. By periodically capturing these in permanent tables, you can identify when they have gained or lost rows (and hence a schema change occurred).
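A minimal sketch of that snapshot approach (dbo.SchemaSnapshot is an assumed table; diff consecutive snapshots to find the changes):

INSERT INTO dbo.SchemaSnapshot (CapturedAt, TableSchema, TableName, ColumnName, DataType)
SELECT GETDATE(), TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS;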
If you are doing data audit instead, then you'll want to code-generate some triggers, again using information_schema.tables and information_schema.columns.
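As raw material for that code generation, something like this (a sketch, SQL Server syntax assumed) builds a column list per table:

SELECT t.TABLE_SCHEMA,
       t.TABLE_NAME,
       STUFF((SELECT ', ' + c.COLUMN_NAME
              FROM INFORMATION_SCHEMA.COLUMNS AS c
              WHERE c.TABLE_SCHEMA = t.TABLE_SCHEMA
                AND c.TABLE_NAME   = t.TABLE_NAME
              ORDER BY c.ORDINAL_POSITION
              FOR XML PATH('')), 1, 2, '') AS ColumnList
FROM INFORMATION_SCHEMA.TABLES AS t
WHERE t.TABLE_TYPE = 'BASE TABLE';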
There would be performance considerations, but you could add insert and update triggers to all of your tables, and have the triggers insert into your audit tables.
Use DDL Triggers (assuming you have SQL Server 2005+)!
http://www.sqlteam.com/article/using-ddl-triggers-in-sql-server-2005-to-capture-schema-changes
http://technet.microsoft.com/en-us/library/ms189871.aspx
That could be done if you were using a data access layer that could trap which tables and columns are being updated and generate the insert statements for the audit table. In a stored procedure? Which stored procedure? Do you have a single one that does updates? Or are you creating one per table?
If it's an option for you, just upgrade to SQL Server 2008 and turn on Change Data Capture.