What is the best way to replicate tables from an Oracle database on a primary server to a similar database on a secondary server, and vice versa?
I have tried using Oracle Streams, but the issue is that I have triggers on the tables, and my requirement is to replicate data from these tables to the database on the secondary server and vice versa. As soon as data is inserted into tab1 of sourceDB, the same is updated in tab2 of destiDB, and the trigger on tab1 of destiDB also gets fired. These triggers should not fire.
The basic idea is data availability.
Please suggest whether this is the correct way or whether I need to use some other approach.
You can use GoldenGate with the SUPPRESSTRIGGERS option.
But it depends on your Oracle version (it does not work on 11.1).
Adding DBOPTIONS SUPPRESSTRIGGERS to your Replicat process configuration will prevent the triggers from being executed on the target DB.
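A minimal sketch of a Replicat parameter file with that option (the process, user, schema, and table names are placeholders):

REPLICAT rdest
USERID ggadmin@destiDB, PASSWORD mypassword
-- do not fire DML triggers on the target while this Replicat applies changes
DBOPTIONS SUPPRESSTRIGGERS
ASSUMETARGETDEFS
MAP srcschema.tab1, TARGET destschema.tab1;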
You could code the triggers in a way that lets you disable them on a per-user or per-session basis; see Stack Overflow: Disable Trigger per Session.
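A rough sketch of that pattern (the package, trigger, and table names are illustrative): package variables are session-scoped in Oracle, so the session applying replicated rows can switch the trigger logic off for itself without affecting other sessions.

CREATE OR REPLACE PACKAGE trg_ctl AS
  -- session-scoped flag; starts as FALSE in every new session
  g_skip_triggers BOOLEAN := FALSE;
END trg_ctl;
/

CREATE OR REPLACE TRIGGER tab1_ai
AFTER INSERT ON tab1
FOR EACH ROW
BEGIN
  IF trg_ctl.g_skip_triggers THEN
    RETURN;  -- skip the logic when the replication session has set the flag
  END IF;
  -- original trigger logic goes here
  NULL;
END;
/

-- the session that applies replicated rows disables the logic for itself:
BEGIN
  trg_ctl.g_skip_triggers := TRUE;
END;
/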
But the better solution for a variety of concurrency and data consistency problems would be to use a professional backup/mirroring solution.
I am looking for a way to capture the name of a table on CREATE TABLE, DROP TABLE, and other operations in my Postgres database.
I looked into event triggers, and they seem to only be able to capture these events on ddl_command_end (https://www.postgresql.org/docs/current/functions-event-triggers.html#PG-EVENT-TRIGGER-SQL-DROP-FUNCTIONS), which should work for the CREATE case but not for all of the others.
So I wanted to ask whether there is a way to either get the data from a dropped table (as I would need it) or get the information before the event happens.
Thank you for your help!
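For what it's worth, the sql_drop event covered by the documentation linked above can at least capture the names of dropped tables (not their contents); a minimal sketch, with placeholder function and trigger names:

CREATE OR REPLACE FUNCTION log_dropped_tables()
RETURNS event_trigger
LANGUAGE plpgsql AS
$$
DECLARE
    obj record;
BEGIN
    FOR obj IN SELECT * FROM pg_event_trigger_dropped_objects()
    LOOP
        IF obj.object_type = 'table' THEN
            RAISE NOTICE 'table dropped: %', obj.object_identity;
        END IF;
    END LOOP;
END;
$$;

-- sql_drop fires for any command that drops objects, just before ddl_command_end
-- (on PostgreSQL versions before 11, use EXECUTE PROCEDURE instead of EXECUTE FUNCTION)
CREATE EVENT TRIGGER capture_drops ON sql_drop
    EXECUTE FUNCTION log_dropped_tables();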
In Oracle we have an option to recover a table, but in PostgreSQL we do not have that option. The only thing you can do in this kind of situation is enable archiving and follow the PITR (point-in-time recovery) steps. The recovery could be done on another server or on the server your database is running on; it depends on the significance of the dropped table and the database.
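A minimal sketch of the archiving side of that setup (the archive path is a placeholder and the recovery parameters vary by PostgreSQL version):

ALTER SYSTEM SET wal_level = 'replica';
ALTER SYSTEM SET archive_mode = 'on';
ALTER SYSTEM SET archive_command = 'cp %p /var/lib/postgresql/wal_archive/%f';
-- restart the server, then take a base backup (e.g. with pg_basebackup).
-- To recover a dropped table, restore the base backup elsewhere, point
-- restore_command at the archive, set recovery_target_time to just before
-- the DROP, and copy the table back once the restored instance is up.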
I have two databases on the same Azure SQL server. I want both databases to interact with each other using triggers, i.e. if a record is inserted into the Customer table of the first database, the trigger fires and the record is inserted into the other database.
We had / have the same problem with triggers that we use for insert/update/delete, where we write a record to Database-1, which has the primary table, but also update Database-2, where we hold "archive" versions of the tables.
The only solution we have identified and are testing is to bring all of the tables into a single database and separate the different tables under separate database schemas in the one database.
Analysis so far of this approach looks promising.
I think what you're trying to do is not allowed in SQL Azure. From my experience, what you are trying to do is a bad practice on-premises as well (think backup/restore and availability scenarios).
You should move the dependency in the application and have the application update both databases, as appropriate.
Anyway, if you want to continue with this approach, please take a look at the Elastic Query feature: https://learn.microsoft.com/en-in/azure/sql-database/sql-database-elastic-query-overview
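A rough sketch of what the Elastic Query setup could look like in the first database (the server, database, credential, and column names are placeholders; note that RDBMS external tables are read-only, so a remote write would have to go through sp_execute_remote):

CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';

CREATE DATABASE SCOPED CREDENTIAL CrossDbCred
    WITH IDENTITY = '<sql login>', SECRET = '<password>';

CREATE EXTERNAL DATA SOURCE SecondDb WITH (
    TYPE = RDBMS,
    LOCATION = 'yourserver.database.windows.net',
    DATABASE_NAME = 'SecondDatabase',
    CREDENTIAL = CrossDbCred
);

-- the remote Customer table exposed locally (read-only)
CREATE EXTERNAL TABLE dbo.Customer_Remote (
    CustomerId INT,
    Name       NVARCHAR(100)
) WITH (DATA_SOURCE = SecondDb);

-- writing a row into the second database goes through sp_execute_remote instead
EXEC sp_execute_remote N'SecondDb',
    N'INSERT INTO dbo.Customer (CustomerId, Name) VALUES (@id, @name)',
    N'@id INT, @name NVARCHAR(100)',
    @id = 1, @name = N'Contoso';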
Please let me know if I can help with something
I'm upgrading my SQL Server 2000 database to SQL Server 2008 R2 and want to make use of the Change Data Capture (CDC) feature. In my existing application I have similar functionality, but I'm using triggers and history tables with an Hst_ prefix whose schema is almost identical to that of the original tables.
My question is: is there any way to migrate my data from Hst_ tables to the tables used by CDC feature?
I was thinking of doing that like this:
I have the table Cases.
I'm using my custom historization mechanism, so I also have three triggers (on insert, update and delete) and a twin table Hst_Cases.
Now I'm enabling CDC on table Cases
CDC creates a function that returns historical data (fn_cdc_get_all_changes_dbo_Cases) and also a system table that actually holds the data (cdc.dbo_Cases_CT).
I could insert data from Hst_Cases to cdc.dbo_Cases_CT, but I have the following problems:
I don't know how to get __$start_lsn and __$seqval.
It is difficult to figure out __$update_mask (I would have to compare each pair of consecutive rows).
Is this the only way to do it? I want to avoid a situation where I have to join the "new" historical data with the "old" historical data from the Hst_ tables.
Thanks!
You typically don't want to use the capture tables to store long-term change data; it would be better to have an SSIS package move the captured data to permanent tables. If you do use them, then if you ever have to restore your database they'll be empty after the restore unless you use the KEEP_CDC option when restoring. You'll also need to disable the job that automatically purges the capture tables.
If you create your own tables for storage, you can omit the lsn and mask fields.
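A sketch of what that could look like in T-SQL once CDC is enabled on Cases (the history table and its columns below are made up; the cdc.* function names are generated from the capture instance dbo_Cases):

-- enable CDC (run once per database and once per table)
EXEC sys.sp_cdc_enable_db;
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'Cases',
    @role_name     = NULL;

-- periodically copy the captured changes into your own permanent table
DECLARE @from_lsn BINARY(10) = sys.fn_cdc_get_min_lsn('dbo_Cases');
DECLARE @to_lsn   BINARY(10) = sys.fn_cdc_get_max_lsn();

INSERT INTO dbo.Cases_History (CaseId, Title, Operation, LoadedAt)
SELECT CaseId, Title, __$operation, GETDATE()
FROM cdc.fn_cdc_get_all_changes_dbo_Cases(@from_lsn, @to_lsn, N'all');
-- in a real job you would persist @to_lsn and use it as the next run's lower bound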
Is there any handy tool that can make updating tables easier? Usually I get an Excel file with the original value in one column and the new value in another column, and then I write a formula in Excel to generate the UPDATE statements. Is there any way to simplify this task?
I believe the approach in SQL Server 2000 and 2005 would be different, so could we discuss both? Thanks.
In addition, these updates are usually requested by non-programmers (who don't understand SQL, so it may not be feasible to let them run queries). Is there any tool that lets them update the table directly without having DBAs do this task? That tool would also need to limit their privileges to modifying only certain tables, and ideally it would have a way to roll back changes.
Create a DTS package that imports a CSV file, applies the updates, and then archives the file. The user can drop the file in a folder designated for the task, or an ops person can do it. Schedule the DTS package to run every hour, day, etc.
In case your users insist on keeping Excel, you've got several different possibilities for getting the data transferred to SQL Server. My preferred one would be to use DTS/SSIS, as mentioned by buckbova.
However, another method is by using OPENROWSET(), which makes it possible to query your Excel file as if it was a table. I wrote a small article about it here: http://blog.hoegaerden.be/2010/03/29/retrieving-data-from-excel/
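For instance, an update could look something like this (the Jet provider string works with .xls files on SQL Server 2000/2005; the file path, sheet name, and column names are placeholders, and on 2005 the 'Ad Hoc Distributed Queries' option has to be enabled first):

UPDATE t
SET    t.SomeColumn = x.NewValue
FROM   dbo.TargetTable AS t
JOIN   OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                  'Excel 8.0;HDR=YES;Database=C:\Updates\values.xls',
                  'SELECT OriginalValue, NewValue FROM [Sheet1$]') AS x
       ON t.SomeColumn = x.OriginalValue;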
Another approach that hasn't been mentioned yet (I'm not a big fan of letting regular users edit data directly in the DB): any possibility of creating a small custom application for them?
There you go, a couple more possible solutions :-)
Valentino.
I think the best approach is to expose a view on your data to the users who are allowed to do updates, and set up triggers on the view to perform the actual updates on the underlying data. Restrict changes to only the columns they should be changing.
This technique can work on SQL Server 2000 and 2005.
I would add audit triggers on the underlying tables so you can always track changes.
You'll have complete control, and they can connect to it with Access or whatever and perform their maintenance.
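A minimal sketch of that setup (the table, view, and column names are made up); on SQL Server, triggers on a view are INSTEAD OF triggers, which then apply the change to the base table:

CREATE VIEW dbo.CustomerMaintenance
AS
SELECT CustomerId, Phone, Email   -- only the columns users may change
FROM dbo.Customer;
GO

CREATE TRIGGER dbo.trg_CustomerMaintenance_Update
ON dbo.CustomerMaintenance
INSTEAD OF UPDATE
AS
BEGIN
    UPDATE c
    SET    c.Phone = i.Phone,
           c.Email = i.Email
    FROM   dbo.Customer AS c
    JOIN   inserted AS i ON i.CustomerId = c.CustomerId;
END;
GO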
You could create some accounts in SQL Server for these users and limit their access to only certain tables and columns, with only SELECT / UPDATE / INSERT privileges. Then you could create an Access database with linked tables to these.
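The permission side can be as simple as something like this (the user and column names are placeholders; column-level grants work on both 2000 and 2005):

-- allow reading the table but updating only two columns
GRANT SELECT ON dbo.Customer TO excel_user;
GRANT UPDATE ON dbo.Customer (Phone, Email) TO excel_user;
GRANT INSERT ON dbo.Customer TO excel_user;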
I need to create a SQL Server database that will receive updates from another database via some replication mechanism. I need to write insert, update and delete triggers that will execute when this replication occurs.
I have experience with triggers but not with replication.
Should I use Transactional or Merge replication, or does it matter?
Will a trigger designed to run when a simple SQL insert statement is executed also run when replication occurs?
The CREATE TRIGGER syntax on MSDN:
CREATE TRIGGER
...
[ NOT FOR REPLICATION ]
This indicates that executing on replication is the default behaviour for triggers, and can be disabled by specifying NOT FOR REPLICATION.
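For example (the table, trigger, and audit table names are illustrative):

CREATE TRIGGER dbo.trg_Orders_Insert
ON dbo.Orders
AFTER INSERT
NOT FOR REPLICATION
AS
BEGIN
    -- fires for ordinary INSERT statements,
    -- but is skipped when the replication agent applies rows
    INSERT INTO dbo.OrdersAudit (OrderId, AuditedAt)
    SELECT OrderId, GETDATE() FROM inserted;
END;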
Well it depends.
If the updates that you intend to apply are to isolated tables, i.e. all the data for a given table comes from the publisher only, then you can use transactional replication.
If, on the other hand, you are looking to combine table content, e.g. an orders table with orders being placed at both sites, then you would want to look into using merge replication.
With regard to triggers, there is a "not for replication" configuration that you can apply to control their behaviour. See the following article for reference.
http://msdn.microsoft.com/en-us/library/ms152529.aspx
Cheers, John
It's hard to answer your question with the information you've provided. I added a few comments to your question asking for clarifying information.
Here is an article on MSDN that should help: http://msdn.microsoft.com/en-us/library/ms152529.aspx
By default, triggers will fire during replication unless "NOT FOR REPLICATION" is specified. They work the same way as they do for simple insert statements.
Transactional and Merge replication are very different, but triggers behave similarly for both options.
There are a few alternative options open to you instead of triggers.
You could modify the replication procedures on the subscriber (destination) database.
If you're using 2008, you can use Change Tracking on the subscriber for the tables you want to "do something with" and then create a batch process to deal with set-based data instead of individual rows, e.g. an SSIS package that runs every X.
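A sketch of that Change Tracking route on the subscriber (the database, table, and column names are placeholders, and OrderId is assumed to be the primary key):

ALTER DATABASE SubscriberDb
    SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

ALTER TABLE dbo.Orders
    ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON);

-- the periodic batch (e.g. the SSIS package) then asks for everything that
-- changed since the version it stored on its previous run
DECLARE @last_sync BIGINT = 0;   -- in practice, read this from a control table
SELECT ct.OrderId, ct.SYS_CHANGE_OPERATION
FROM   CHANGETABLE(CHANGES dbo.Orders, @last_sync) AS ct;

SELECT CHANGE_TRACKING_CURRENT_VERSION();   -- store this for the next run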