I have two SQL Server databases.
One is the back end for a Ruby on Rails system that we are transitioning away from, but it is still in use because some of the Ruby apps are still being rewritten in ASP.NET MVC.
The two databases have similar, but not identical, Users, Roles and Roles-Users tables.
I want to create some type of trigger that updates the Users and Roles-Users tables in one database whenever the corresponding table in the other database is modified.
I can't simply share the Users table from the original database because Ruby hashes passwords differently, but I want to ensure that changes on one system are reflected on the other immediately.
I also want to avoid the obvious problem: an update on one database triggers an update on the other, which triggers an update on the first, and the process repeats itself until the server crashes, a deadlock occurs, or something similarly undesirable happens.
I do not want to use database replication.
Is there a somewhat simple way to do this on a transaction per transaction basis?
EDIT
The trigger would be conceptually something like this:
USE Original;
GO
CREATE TRIGGER dbo.user_update
ON dbo.user WITH EXECUTE AS [cross table user identity]
AFTER UPDATE
AS
BEGIN
UPDATE u
SET u.column1 = i.column1 -- etc.
FROM Another.dbo.users AS u
JOIN inserted AS i ON u.ID = i.ID;
END
The problem I am trying to avoid is a recursive call.
Another.dbo.users will have a similar trigger in place, because the two databases serve different applications (Ruby on Rails on one, ASP.NET MVC on the other) that may be working on data that should stay the same in both databases.
I would add a field to both tables if possible. When an application inserts or updates a row, it sets the 'check' field to 0. The trigger looks at this field: if it is 0, meaning the change came from an application event, the trigger performs the insert/update into the second table, but writes a 1 in the check field instead of a 0.
So when the trigger fires on the second table, it sees the 1 and skips the insert back into the first table.
This solves the recursion problem.
If for some reason you cannot add the check field, you can use a separate table holding the primary key of the row and the check field. This needs more coding but would also work.
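A sketch of the check-field approach described above; the table and column names (Email, SyncFlag) are assumptions, not from the original schemas:

```sql
-- Both databases add a SyncFlag column:
--   0 = the row was written by the application
--   1 = the row was written by the other database's trigger
USE Original;
GO
CREATE TRIGGER dbo.users_sync ON dbo.users
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Only propagate rows that the application itself changed.
    UPDATE u
    SET u.Email    = i.Email,   -- copy whichever columns the two schemas share
        u.SyncFlag = 1          -- mark as trigger-generated so the other side skips it
    FROM Another.dbo.users AS u
    JOIN inserted AS i ON u.ID = i.ID
    WHERE i.SyncFlag = 0;
END
```

The mirror trigger in Another filters on the same condition, so a trigger-generated write (SyncFlag = 1) never propagates back. SQL Server's SET CONTEXT_INFO or the TRIGGER_NESTLEVEL() function are alternative ways to detect trigger-generated writes without a schema change.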
Related
I am building a database for a CMS system and I am at a point where I am no longer sure which way to go. All of the business logic lives in the database layer; we use PostgreSQL 13 and the application is planned to be a SaaS.
1. The application has folders and documents associated with them. If we move a folder (or a group of folders in bulk) from its parent folder to another, the permissions of the folder and of the underlying documents must follow the permissions of the new location (an update is sent to a permissions table). Is this better enforced via an AFTER statement trigger, or should we force all code paths to call a single method that moves the folder and documents and updates their permissions?
2. Wouldn't it make more sense to use an AFTER statement trigger rather than an AFTER row trigger in all cases, since they do the same thing, but a statement trigger can process all affected rows in bulk and is therefore more efficient? If I enforce inserting a record into another table whenever an update or insert takes place, the performance is similar for a single row, but much better for 1,000 rows with a statement-level trigger (since I can simply do INSERT INTO .. SELECT * FROM new_table).
You need a row level trigger or a statement level trigger with transition tables, so that you know which rows were affected by the statement. To avoid repetition, the latter might be a better choice.
Rather than modifying permissions whenever you move an object, you could figure out the permissions when you query the table by recursively following the chain of containment. The question here is if you prefer to do the extra work when you modify the data or when you query the data.
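A statement-level trigger with a transition table might look like this (a sketch for PostgreSQL 10+; the folders and folder_audit tables and their columns are assumptions):

```sql
-- The transition table new_table holds every row affected by the statement,
-- so the bookkeeping insert is one set-based operation per statement.
CREATE OR REPLACE FUNCTION log_folder_moves() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    INSERT INTO folder_audit (folder_id, parent_id, changed_at)
    SELECT id, parent_id, now() FROM new_table;
    RETURN NULL;  -- the return value is ignored for AFTER triggers
END;
$$;

CREATE TRIGGER folders_moved
AFTER UPDATE ON folders
REFERENCING NEW TABLE AS new_table
FOR EACH STATEMENT
EXECUTE FUNCTION log_folder_moves();
```

Whether the update touches one row or a thousand, the trigger function runs once per statement.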
Basically what I'm trying to do is create a dynamic trigger: if a table in database 1 gets a new record, and the record falls into the category of data I need for database 2, it automatically populates the table in database 2 without me needing to update it manually.
Right now I go into the table in database 1, sort for the category I need, and copy the data into the table in database 2.
I tried to make this process easier with a select query for the columns I need from database 1 into database 2, which works fine, except it overwrites what I already have and I basically have to recreate everything each time.
So after all that rambling, here is exactly what I need to know: is there a way to create a trigger so that when a new line item is entered in database 1 with a tag matching the type of material I need, it is transferred to database 2? On top of that, I only need to transfer 2 columns from database 1 to database 2.
I would try to post a sample code, however I have no idea where to start on this.
I suggest you look into Service Broker messaging. We use it quite a bit and it works quite well. You can send messages to the other database with the data that needs to be inserted and allow the second database to do all the work. This will alleviate the worries about the second database being offline or causing an error which rolls back into your trigger. If the second database is unavailable the messages will queue up in your database until it can send them. This isn't the easiest thing to set up but is a way to keep the two databases from being so closely tied together.
Service Broker
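A compressed sketch of the Service Broker setup (all object names here are illustrative; a matching queue, service and activation procedure are needed on the target side):

```sql
-- One-time setup in the source database.
CREATE MESSAGE TYPE [//Sync/UserChange] VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT [//Sync/UserContract]
    ([//Sync/UserChange] SENT BY INITIATOR);
CREATE QUEUE dbo.UserSyncQueue;
CREATE SERVICE [//Sync/UserService]
    ON QUEUE dbo.UserSyncQueue ([//Sync/UserContract]);
GO
-- Inside the trigger: send the changed rows as XML instead of
-- writing to the other database directly.
DECLARE @h UNIQUEIDENTIFIER, @body XML =
    (SELECT ID, UserName FROM inserted FOR XML PATH('user'));
BEGIN DIALOG CONVERSATION @h
    FROM SERVICE [//Sync/UserService]
    TO SERVICE '//Sync/TargetService'
    ON CONTRACT [//Sync/UserContract];
SEND ON CONVERSATION @h
    MESSAGE TYPE [//Sync/UserChange] (@body);
```

The trigger commits as soon as the message is queued; delivery and the actual insert on the other side happen asynchronously.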
I am unclear about the logic in your selection but if you want to save a copy of what was just inserted into table1 into a table (table2) on another database, using a trigger, you can try this:
create trigger trig1 on dbo.table1
after insert
as
insert into database2.dbo.table2 (col1, col2, col3)
select col1, col2, col3 from inserted;
You could use an AFTER INSERT Trigger like this:
USE [FirstDB];
GO
CREATE TRIGGER [dbo].[YourTrigger]
ON [dbo].[Table]
AFTER INSERT
AS
BEGIN
INSERT INTO [OtherDB].[dbo].[Table] (col1, col2)
SELECT col1, col2 FROM inserted;
END
I recommend you consider non-trigger alternatives as well though. Cross-DB triggers could be risky (what if the other db is offline, etc.)
I am working on an MVC 4 project which requires me to move data from an active table to a table for archived content.
I understand that with Entity Framework, tables are tightly bound to models. I have created two models - one for the active records and one for the archived records.
What is the best way to add all the data in active table to archive and remove all the contents in active table for fresh use?
P.S.: I am a bit paranoid about error tolerance here, as I may be dealing with around 30,000 records at a time. I need to move all records to the archive successfully and delete them only after a successful copy.
Even though you are using Entity Framework, you can still use stored procedures. This is a good case for one, as you can do a set-based operation in the sproc (fast) rather than iterating through all of the records in code (slow).
Here are some steps for how to add the sproc to the EF (you could just Google this too): Adding stored procedures complex types in Entity Framework
Your sproc would probably look something like:
SET IDENTITY_INSERT dbo.ArchiveTable ON; -- assuming you have an identity column
INSERT INTO dbo.ArchiveTable (
Id -- the identity column must be listed explicitly while IDENTITY_INSERT is ON
,Col1
,Col2
)
SELECT
Id
,Col1
,Col2
FROM dbo.MainTable;
SET IDENTITY_INSERT dbo.ArchiveTable OFF; -- assuming you have an identity column
DELETE FROM dbo.MainTable;
Wrap that in a transaction (to satisfy your error-tolerance requirement) and it should execute quickly for 30,000+ records. I would also recommend returning something like the number of records affected, which you can do from the stored procedure.
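One way to wrap the archive-and-clear in a transaction; the procedure, table and column names here are placeholders:

```sql
CREATE PROCEDURE dbo.ArchiveMainTable
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @deleted INT;
    BEGIN TRY
        BEGIN TRANSACTION;

        SET IDENTITY_INSERT dbo.ArchiveTable ON;
        INSERT INTO dbo.ArchiveTable (Id, Col1, Col2)
        SELECT Id, Col1, Col2 FROM dbo.MainTable;
        SET IDENTITY_INSERT dbo.ArchiveTable OFF;

        DELETE FROM dbo.MainTable;
        SET @deleted = @@ROWCOUNT;   -- capture before COMMIT resets it

        COMMIT TRANSACTION;
        SELECT @deleted AS RowsDeleted;  -- feedback for the caller
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        THROW;  -- surface the error so no rows are silently lost
    END CATCH
END
```

If the copy or the delete fails, the rollback leaves the main table untouched, which is the error behavior the question asks for.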
Unless you have to, don't do it in code - wilsjd's answer is right on - create a transacted stored procedure that you call straight from EF.
But if you have a reason to do it in code - say because you don't have a good way to access both tables from within a stored procedure, just be sure to understand and do the right thing with the transaction from within your code. This is a good answer discussing this:
Entity Framework - Using Transactions or SaveChanges(false) and AcceptAllChanges()
I have a table named "LogDelete" that saves information about users who deleted rows in any table. The table fields are like this:
create table LogDelete
(
pk int identity(1,1) primary key,
TableName varchar(15),
DeleteUser nvarchar(20),
DeleteDate datetime
)
Actually I want to create a trigger that fires on every table's update action and writes the proper information to the LogDelete table.
Right now I use a stored procedure and call it on every update action on my tables.
Is there a way to do this?
No. There are 'event' triggers, but they are mainly related to logging in. These are actually DDL triggers, so they relate not to updating data but to updating your database schema.
Afaik, there is no trigger that fires on every update. That means that the way you are handling it now, through a stored procedure, is probably the best way. You can create triggers on each table to call the procedure and do the logging.
You might even write a script that creates all those triggers for you in one run. That will make the initial creating and later updating of the triggers a bit easier.
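A sketch of such a generator script, built on sys.tables; the trigger body below just writes a row to the LogDelete table from the question, and you would adapt it to call your own procedure:

```sql
-- Build one CREATE TRIGGER statement per user table.
DECLARE @sql NVARCHAR(MAX) = N'';
SELECT @sql = @sql +
    N'CREATE TRIGGER trg_' + t.name + N'_log ON ' +
    QUOTENAME(s.name) + N'.' + QUOTENAME(t.name) + N'
AFTER UPDATE
AS
    INSERT INTO dbo.LogDelete (TableName, DeleteUser, DeleteDate)
    VALUES (''' + t.name + N''', SUSER_SNAME(), GETDATE());
GO
'
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE t.name <> 'LogDelete';

-- CREATE TRIGGER must be the first statement in its batch,
-- so print the script and run it batch by batch (or EXEC each one).
PRINT @sql;
```

This keeps the per-table triggers trivially thin and regenerable whenever tables are added.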
Here is some MSDN documentation, which says (in remarks about DML triggers):
CREATE TRIGGER must be the first statement in the batch and can apply to only one table.
There is no magic solution for your request - no such thing as one trigger for all DML (INSERT, UPDATE, DELETE) across every table - but there are some alternatives you can consider:
If you are using SQL Server 2008 or later, the best option is CDC (Change Data Capture); you can start with this article by Dave Pinal. I think this is the best approach since it is not affected by any change in table structures.
Read the log file. You'll need to analyze it to find each DML activity, and from that you could build a unified operation to log the changes you need. Obviously this is not online and not trivial.
Same as option two, but using traces on all DML activity. The advantage of this approach is that it can be almost online and does not require analyzing the log file; you just need to analyze a profiler table.
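Enabling CDC is mostly a matter of two documented system procedures (the dbo.Person table here is just an example, and your edition of SQL Server must support CDC):

```sql
-- Enable CDC for the current database.
EXEC sys.sp_cdc_enable_db;

-- Enable capture for one table.
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'Person',
    @role_name     = NULL;  -- NULL = no gating role required to read changes
```

Changes to the table then appear automatically in the generated change table (cdc.dbo_Person_CT for this capture instance), one row per DML operation, with no triggers to maintain.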
What is the best way to track changes in a database table?
Imagine you have an application in which users (users of the application, not DB users) can change data stored in some database table. What's the best way to track a history of all changes, so that you can show which user changed which data, when, and how?
In general, if your application is structured into layers, have the data access tier call a stored procedure on your database server to write a log of the database changes.
In languages that support such a thing aspect-oriented programming can be a good technique to use for this kind of application. Auditing database table changes is the kind of operation that you'll typically want to log for all operations, so AOP can work very nicely.
Bear in mind that logging database changes will create lots of data and will slow the system down. It may be sensible to use a message-queue solution and a separate database to perform the audit log, depending on the size of the application.
It's also perfectly feasible to use stored procedures to handle this, although there may be a bit of work involved passing user credentials through to the database itself.
You've got a few issues here that don't relate well to each other.
At the basic database level you can track changes by having a separate table that gets an entry added to it via triggers on INSERT/UPDATE/DELETE statements. That's the general way of tracking changes to a database table.
The other thing you want is to know which user made the change. Generally your triggers wouldn't know this. I'm assuming that if you want to know which user changed a piece of data, it's possible for multiple users to change the same data.
There is no single right way to do this; you'll probably want a separate table into which your application code inserts a record whenever a user updates data in the other table, including the user, a timestamp, and the id of the changed record.
Make sure to use a transaction so you don't end up with an update that happens without the insert - or, if you do them in the opposite order, an insert without the update.
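The separate-audit-table idea can be sketched like this; all names are hypothetical, and the application writes the user into a column on the main table so the trigger can copy it:

```sql
CREATE TABLE dbo.OrderAudit (
    AuditId   INT IDENTITY PRIMARY KEY,
    OrderId   INT,
    ChangedBy NVARCHAR(50),
    ChangedAt DATETIME2 DEFAULT SYSDATETIME()
);
GO
CREATE TRIGGER dbo.trg_Order_Audit ON dbo.Orders
AFTER UPDATE
AS
    -- One audit row per changed row; LastModifiedBy is stamped by the app.
    INSERT INTO dbo.OrderAudit (OrderId, ChangedBy)
    SELECT i.OrderId, i.LastModifiedBy
    FROM inserted AS i;
```

Because the trigger runs inside the same transaction as the update, the audit row and the data change commit or roll back together.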
One method I've seen quite often is to have audit tables. Then you can show just what changed, or what it changed from, or whatever your heart desires :) Then you could write up a trigger to do the actual logging. Not too painful if done properly...
No matter how you do it, though, it kind of depends on how your users connect to the database. Are they using a single application user via a security context within the app, are they connecting using their own accounts on the domain, or does the app just have everyone connecting with a generic sql-account?
If you aren't able to get the user info from the database connection, it's a little more of a pain. And then you might look at doing the logging within the app, so if you have a process called "CreateOrder" or whatever, you can log to the Order_Audit table or whatever.
Doing it all within the app opens yourself up a little more to changes made from outside of the app, but if you have multiple apps all using the same data and you just wanted to see what changes were made by yours, maybe that's what you wanted... <shrug>
Good luck to you, though!
--Kevin
In researching this same question, I found a discussion here very useful. It suggests having a parallel table set for tracking changes, where each change-tracking table has the same columns as what it's tracking, plus columns for who changed it, when, and if it's been deleted. (It should be possible to generate the schema for this more-or-less automatically by using a regexed-up version of your pre-existing scripts.)
Suppose I have a Person Table with 10 columns which include PersonSid and UpdateDate. Now, I want to keep track of any updates in Person Table.
Here is the simple technique I used:
Create a person_log table
create table person_log(date datetime2, sid int);
Create a trigger on Person table that will insert a row into person_log table whenever Person table gets updated:
create trigger tr on dbo.Person
for update
as
insert into person_log(date, sid) select UpdateDate, PersonSid from inserted;
After any update, query the person_log table and you will see the PersonSid that was updated.
You can do the same for insert and delete.
The example above is for SQL Server; let me know if you have any questions, or use this link:
https://web.archive.org/web/20211020134839/https://www.4guysfromrolla.com/webtech/042507-1.shtml
A trace log in a separate table (with an ID column, possibly with timestamps)?
Are you going to want to undo the changes as well - perhaps pre-create the undo statement (a DELETE for every INSERT, an (un-) UPDATE for every normal UPDATE) and save that in the trace?
You could try this open-source component:
https://tabledependency.codeplex.com/
TableDependency is a generic C# component used to receive notifications when the content of a specified database table changes.
If all changes go through PHP, you can use a class to log every INSERT/UPDATE/DELETE before running the query. It can save the action, table, column, new value, old value, date, system (if needed), IP, user agent, column reference, operator reference and value reference. The tables, columns and actions that need logging are all configurable.