There are two Oracle databases with the same tables on two different servers: Database A on Server A and Database B on Server B.
A: Table1,Table2
B: Table1,Table2
The tables are identical, so it's possible to copy the data from Server A to Server B. We only have remote access to the database on Server A, but full admin access to Server B.
Database A is remote and consists of a primary and a secondary DB. The secondary DB gets updated with the latest data, and then we switch to point at the secondary. The same happens in the other direction: while everything points to the secondary, the primary gets updated, and then the system switches back to the primary. This way the database is never down; we just switch what we point to after the other copy has been updated.
There is also a web interface that writes a config file recording whether we're pointing to the primary or the secondary DB.
Now, I'm trying to come up with solutions for how to get/synchronize/copy/point to the correct database to get the updated data, either directly from A or by copying the data from A to B.
Solution 1) We read db config file and point directly to the primary (or secondary) DB
Solution 2) Build a Windows service that detects when the primary or secondary DB gets updated, and copies the tables from Database A to Database B.
Solution 3) Have a "copy tables" button on the web interface that will start a background task of copying data from A to B using http://hangfire.io
Solution 4) ??
So far, I like the idea of solution 1 the best, where we just point directly to either the primary or secondary DB based on configuration. This idea doesn't involve any copying to the identical tables on Server B.
However, I was wondering if there is any other possible solution? Would triggers, replication, or something else work here?
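For what it's worth, if you do go with a copy-based approach (Solutions 2 or 3), an Oracle database link from Server B to Server A makes the copy itself a single statement. A minimal sketch, assuming a link named `a_link` and a key column `id` (the link name and column names are placeholders, not from your schema):

```sql
-- Run on Server B: pull the latest rows for Table1 from A over the link.
MERGE INTO table1 b
USING table1@a_link a
ON (b.id = a.id)
WHEN MATCHED THEN
    UPDATE SET b.col1 = a.col1, b.col2 = a.col2
WHEN NOT MATCHED THEN
    INSERT (id, col1, col2)
    VALUES (a.id, a.col1, a.col2);
```

A statement like this could be the body of either the Windows service's job or the Hangfire background task.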
Thanks.
Related
I am in search of a better solution. I'd like to build a database where some local tables/sprocs/views are used as the default, with a remote database as the fallback.
The setup I have right now copies the schema and/or data of the tables I don't want to change in the remote database. Then I create a local view for each view and table I want read-only access to. The sprocs are likewise just copies from the remote database. This also cuts down on the time spent copying all the data.
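The read-only views described above are typically just thin wrappers over four-part linked-server names. A sketch, assuming a linked server named `RemoteServer` and a remote database `RemoteDb` (both names are placeholders):

```sql
-- Local, read-only view that passes through to the remote table.
CREATE VIEW dbo.Customers
AS
SELECT *
FROM RemoteServer.RemoteDb.dbo.Customers;
```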
What I am wondering is if there is a way at a lower level to tell SQL Server to fall back on the remote database if an object is not found, then fail if the object is still not found?
We had an intern who was given written instructions for deleting old data from a database based on dates (from within our ERP system). They were fascinated by the results and just kept deleting instead of stopping at the required date. There are now 4 years of missing records in the production database. I have these records in my development database, which is in a different instance on a different server. Is there a way to transfer just those 4 years worth of data from my development database to my production database, checking, of course, to make sure there are no duplicates (unique index on transaction number).
I haven't tried anything yet because I'm not sure where to start. I do have a test database on the same instance as the production database that I could use to test the transfer with.
There are several ways to do this. Assuming this is on a different machine, you will want to create a Linked Server on your dev machine pointing to the target server (or, technically, a link from the production server to your dev machine could be used as well). Then, perform an insert of the selected records from the source into the target.
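Creating the linked server is a one-off step. A sketch, assuming the dev instance is named `DEVHOST\SQL2019` (a placeholder):

```sql
-- On the production server: register the dev instance as a linked server.
EXEC sp_addlinkedserver
    @server = N'DevServer',          -- local alias for the link
    @srvproduct = N'',
    @provider = N'SQLNCLI',
    @datasrc = N'DEVHOST\SQL2019';   -- actual machine\instance name

-- Map the current login through to the remote server.
EXEC sp_addlinkedsrvlogin
    @rmtsrvname = N'DevServer',
    @useself = N'True';
```

After this, the dev tables are addressable with four-part names like `DevServer.DevDb.dbo.YourTable`.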
More efficiently, you can use the Export Data functionality. Right click on the database (Not the server / instance, but the database) and select Tasks / Export Data from the popup menu. This will pop up the SQL Server Import and Export Wizard. Use your query above to select the data for export.
If security considerations interfere with this, create a duplicate of the table(s) with alternate names (e.g. MyInvRecords) in a new database, and export the data into those tables. Back up that DB, transfer it to someplace accessible from the target server, restore that DB, then transfer the rows back into the original DB.
I haven't had to use anything but these methods before, so one of them should work for you.
A basic INSERT ... SELECT will work just fine, and a NOT EXISTS guard handles the duplicate check against the unique index on transaction number:
INSERT INTO ProdDB.schema.YourTable ([Columns])
SELECT src.[Columns]
FROM TestDB.schema.YourTable AS src
WHERE src.YourDateColumn BETWEEN @StartDate AND @EndDate  -- the four-year range
  AND NOT EXISTS (SELECT 1
                  FROM ProdDB.schema.YourTable AS tgt
                  WHERE tgt.TransactionNumber = src.TransactionNumber);  -- skip rows already present
I'm wondering something regarding this article:
https://learn.microsoft.com/en-us/azure/sql-database/sql-database-manage-application-rolling-upgrade
We would like to perform a database upgrade involving ~3 million records in a table. The upgrade will add an extra column to the mentioned table, which can take up to 5 minutes to complete.
In short, Microsoft suggests creating a transactionally consistent copy of the target database, performing the upgrade/migration on the copy, and switching users to that copy using a load balancer.
This is all well and good, but records created in the original database after the copy is taken will not be present in the upgraded/migrated copy.
The article suggests: "Turn the primary database to read-write mode and run the upgrade script in the stage slot (5)."
If the primary database is in read-write mode, won't I be missing data in the upgraded/migrated copy of the primary database once I point everyone to the new database?
For example: would it be possible to sync database records from the primary to the secondary once the secondary is upgraded and front-end users are pointed to the secondary database?
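To make that concrete, one shape such a catch-up could take is a watermark-based copy run once writes to the old primary have stopped. This is only a sketch: it assumes the table has a monotonically increasing key (`Id` below is an assumed name) and that the old database is reachable from the new one, e.g. as an external table via elastic query:

```sql
-- Copy only the rows created after the consistent copy was taken.
DECLARE @watermark BIGINT =
    (SELECT ISNULL(MAX(Id), 0) FROM dbo.BigTable);  -- highest row already present

INSERT INTO dbo.BigTable (Id, Payload)
SELECT o.Id, o.Payload
FROM old_db.dbo.BigTable AS o   -- external table pointing at the old primary
WHERE o.Id > @watermark;
```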
Background information
Let's say I have two database servers, both SQL Server 2008.
One is in my LAN (ServerLocal), the other one is on a remote hosting environment (ServerRemote).
I have created a database on ServerLocal and have an exact copy of that database on ServerRemote. The database on ServerRemote is part of a web application, and I would like to keep its data up-to-date with the data in the database on ServerLocal.
ServerLocal is able to communicate with ServerRemote, this is one-way traffic. Communication from ServerRemote to ServerLocal isn't available.
Current solution
I thought replication would be a nice solution, so I've made ServerLocal a publisher and push subscriptions to ServerRemote. This works fine: when a snapshot is transferred to ServerRemote, the existing data is purged and the ServerRemote database is once again an exact replica of the database on ServerLocal.
The problem
Records that exist on ServerRemote but don't exist on ServerLocal are removed. This doesn't matter for most of my tables, but in some of my tables I'd like to keep the existing data (aspnet_users, for instance) and update the records if necessary.
What kind of replication fits my problem?
Option C: Transactional replication.
I've done this before where you have data in a subscription database and don't want it overwritten by the snapshot. You can set your initial snapshot to not delete the existing records and either don't create the records that are in the publisher (assume they are there) or create the records that are in the publisher (assume they are not there).
Take a look at what would be correct for your situation or leave a comment with more details on how you get your data in the subscriber originally. I'm not too familiar with aspnet_users and what that is. Transactional replication only helps if you don't want the data in the subscriber back at the publisher. If you want that you'll have to do merge replication.
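The "don't delete existing records" behavior is set per article via the `@pre_creation_cmd` option (valid values are `none`, `delete`, `drop`, and `truncate`). A sketch, assuming a publication named `MyPub` (the publication name is a placeholder):

```sql
-- Add the article so the snapshot leaves existing subscriber rows in place.
EXEC sp_addarticle
    @publication = N'MyPub',
    @article = N'aspnet_users',
    @source_object = N'aspnet_users',
    @source_owner = N'dbo',
    @pre_creation_cmd = N'none';  -- do not drop/delete/truncate the subscriber table
```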
I have two databases in two different places; both have the same table names and fields. Now I want to synchronize the two databases. Is there Java code for this, or can it be achieved directly from MySQL or SQL? How?
It sounds like you need to consider replication as an option.
You can do this if you have a main database (1st DB) and a secondary one (2nd DB), and synchronize the secondary DB whenever the main DB changes.
When the 1st DB changes, record a version number (which table changed, at which row and column, and what the new value is). The 2nd DB then compares its own version and pulls the changes it is missing.
(For more complex situations, look at how source control systems are implemented.)
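A minimal MySQL sketch of the change-log idea above, assuming a table `t(id, val)` (all table and column names here are made up for illustration):

```sql
-- Change log: which table/row/column changed, and the new value.
CREATE TABLE changelog (
    version     BIGINT AUTO_INCREMENT PRIMARY KEY,
    table_name  VARCHAR(64),
    row_id      INT,
    column_name VARCHAR(64),
    new_value   TEXT,
    changed_at  TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

DELIMITER //
CREATE TRIGGER t_after_update
AFTER UPDATE ON t
FOR EACH ROW
BEGIN
    -- <=> is MySQL's NULL-safe comparison, so NULL changes are logged too.
    IF NOT (NEW.val <=> OLD.val) THEN
        INSERT INTO changelog (table_name, row_id, column_name, new_value)
        VALUES ('t', NEW.id, 'val', NEW.val);
    END IF;
END//
DELIMITER ;
```

The secondary then periodically asks for all `changelog` rows with `version` greater than the last version it applied, and replays them.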