How to create an asynchronous replication of an Oracle database

I have access to an Oracle database and plan on manipulating the data; however, I don't want to affect the database itself. So I am wondering if it is possible to create a new database from the main one that updates itself with new data whenever the main database changes, while never affecting the main database (asynchronous replication?).
I'm not looking for exact code, but keywords or topics that I should read about to find out more. Sorry, I just don't know where to start.
I'm not looking to make a simple copy of the Oracle database, because that data will become stale as soon as new data is uploaded to the original database.
E.g. database 1 has employee records. I want to clone this database and call it database 2, while also making sure that any time database 1 changes, so does database 2, but not vice versa.
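A few keywords to start with are materialized view replication (for one-way copies of specific tables), Oracle GoldenGate, and Data Guard (for replicating a whole database). Purely as an illustrative sketch of the first keyword, with placeholder names throughout (employees, hr_main, readonly_user, MAIN_DB), a read-only, self-refreshing copy of one table could look like this:

-- On the main database: a materialized view log lets the copy refresh
-- incrementally, pulling only changed rows.
CREATE MATERIALIZED VIEW LOG ON employees;

-- On the copy database: a database link back to the main database.
CREATE DATABASE LINK hr_main
  CONNECT TO readonly_user IDENTIFIED BY "secret"
  USING 'MAIN_DB';

-- A read-only copy that pulls changes from the main database every
-- 5 minutes; nothing ever flows back to the main database.
CREATE MATERIALIZED VIEW employees_copy
  REFRESH FAST
  START WITH SYSDATE NEXT SYSDATE + 5/1440
  AS SELECT * FROM employees@hr_main;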

Related

What is the best way to update (or replace) an entire database collection on a live mongodb machine?

I'm given a data source monthly that I'm parsing and putting into a MongoDB database. Each month, some of the data will be updated and some new entries will be added to the existing collections. The source file is a few gigabytes in size. Apart from these monthly updates, the data will not change at all.
Eventually, this database will be live and I want to prevent having any downtime during these monthly updates if possible. What is the best way to update my database without any downtime?
This question is basically exactly what I'm asking, but not for a MongoDB database. The accepted answer there is to upload a new version of the database and then rename the new database to use the old one's name.
However, according to this question, it is impossible to easily rename a MongoDB database. This renders that approach unusable.
Intuitively, I would try to iteratively 'upsert' the entire database using each document's unique 'gid' identifier (this is a property of the data, as opposed to the "_id" generated by MongoDB) as a filter, but this might be an inefficient way of doing things.
I'm running MongoDB version 4.2.1
Why do you think updating the data would mean downtime?
It sounds like you don't want your users to be able to access the new data mid-load.
If this is the case, a strategy could be to have two databases, a live one and a staging one; rather than renaming the staging database to live, you could just change the connection string in the client application(s) so that they point to the freshly loaded database.
Also consider mongodump and mongorestore for copying databases, although these can be slow with larger databases.

Update an attached SQLite database based on any change in the main database, in C

I have a sample application in C connected to a SQLite database. I have to replicate this DB (transparent replication) on the same server (on different disks). To explain my question better, what I need is:
Create one or two slave/backup databases which are kept in sync with the main db, e.g., when I write to the main db, the backup db(s) are updated as well. Also, for reads, if the main db crashes, the application should read from a backup db instead.
I know SQLite does not support replication, and I searched a lot and found some solutions, but they are mostly for replication across multiple servers or for MySQL.
What came to my mind was creating a new DB and copying all data from the main db to the slave:
ATTACH '/root/litereplica/litereplica-master/sqlite3.6/fuel_transaction_1_backup.db' AS backup; -- this attaches a new, empty db to the main db
CREATE TABLE backup.fuel_transaction_1 AS SELECT * FROM main.fuel_transaction_1;
Now, I need to sync this backup db with the main db. So, I have to update the slave based on changes in the main db. I know the update part should be something like this:
UPDATE backup.tableX SET columnX = (SELECT main.tableX.columnX FROM main.tableX WHERE main.tableX.id = backup.tableX.id);
But I don't know how to express this in my C code, i.e., how to write: if the main db has changed, then go and update the slave db. For example, in PHP you can check whether a table was updated with if (mysql_affected_rows($link) > 0) { }. I want its equivalent in C.
Thanks,
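For what it's worth: on the C side, the usual way to be told that the main db has changed is sqlite3_update_hook(), which registers a callback fired on every INSERT, UPDATE and DELETE; SQLite's online backup API (sqlite3_backup_init() / sqlite3_backup_step()) is another option for copying the whole file. At the SQL level, since the backup file is ATTACHed on the same connection, one crude but simple sketch (table name taken from the question) is to re-copy the table inside a transaction whenever the application knows it has just written to main:

-- Run on the same connection that has the backup db ATTACHed,
-- after a batch of writes to the main db.
BEGIN;
DELETE FROM backup.fuel_transaction_1;
INSERT INTO backup.fuel_transaction_1 SELECT * FROM main.fuel_transaction_1;
COMMIT;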

Migrate one MS SQL database to another

Could you please suggest the easiest way to programmatically (not via UI) generate a script to migrate specific tables (schema, constraints, indexes) to another database on a separate server?
I am aware of replication, SSIS, the generate-scripts feature, the backup/restore approach and the SQL import/export wizard. However, all these approaches either require at least some kind of UI interaction, or don't allow copying constraints, or don't allow migrating only part of the data.
The database where I will be putting the data will be kept in sync with the main DB, so it is possible to just wipe the existing data in it and overwrite it with the schema and data from the main DB.
From comment: I need to migrate only part of the DB: specific tables with their foreign key/primary key constraints, indexes AND the data from these tables.
As per my understanding, I hope this will help you (these are the steps of the script-generation wizard):
Click Next.
Choose your location for the generated script file.
Setting USE DATABASE to FALSE will let you execute the script in the new DB you created on your new server; basically it will not generate a CREATE DATABASE script.
Read the Table/View Options carefully and set to true whatever you need.
Click Next, pick up the script file from your location, and run it on your new server.

Database save, restore and copy

An application runs training sessions. Environment for each session (like "mission" or "level" in games) is stored in a database.
Before starting a session, user can choose which of many available databases to use.
During the session database may be modified.
After the session, the changed database is usually discarded, but sometimes it may be saved under a new or the same name.
Databases are often copied between non-connected computers (on a flash card).
If the environment were stored in plain files, it would be easy: copy, load, save.
We currently use a similar approach: store databases as MS SQL backups, copy and save them as files, and load them into the actual DBMS when a session starts. The main problem is schema modification: when the database schema changes, all the backups must be updated, which is error-prone.
Storing everything in a single database with an additional "environment id" relationship and providing utilities to load, save and copy environments seems too complex for the task.
What are other possible ways to design for that functionality? This problem is probably not unique and must have some thought-out solution.
Firstly, I think you need to dispense with the idea of SQL Backups for this and shift to tables that record data changes.
Then you have a source database containing all your regular tables, plus another table that records a list of saved versions of it.
So table X might contain columns TestID, TestDesc, TestDesc2, etc.
Then you might have a table that contains SavedDBID, SavedDBTitle, etc.
Next, for each table X you have a table X_Changes. This has the same columns as table X, but also includes a SavedDBID column. This would be used to record any changed rows between the source database and the Saved one for a given SavedDBID.
When the user logs on, you create a clone of the source database. Then you use the Changes tables to make the clone's tables reflect the saved version. As the user updates the main tables in the clone, the changed rows should also be updated in the clone's Changes tables.
If the user decides to save their copy, use the Clone's changes tables to record the differences between the Source and the Clone in the original database, then discard the Clone.
I hope this is understandable. It will certainly make any schema changes easier to immediately reflect in the 'backups' as you'd only have one database schema to change. I think this is much more straightforward than using SQL Backups.
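To make the shape of this concrete, here is a minimal sketch of the tables involved, using the column names from the answer; the data types and the sample rebuild statement are assumptions:

-- Source table and the list of saved versions.
CREATE TABLE X (
    TestID    int PRIMARY KEY,
    TestDesc  nvarchar(200),
    TestDesc2 nvarchar(200)
);
CREATE TABLE SavedDB (
    SavedDBID    int IDENTITY PRIMARY KEY,
    SavedDBTitle nvarchar(200)
);

-- Same columns as X, plus SavedDBID: one row for each row of X that
-- differs from the source in a given saved version.
CREATE TABLE X_Changes (
    SavedDBID int NOT NULL REFERENCES SavedDB (SavedDBID),
    TestID    int NOT NULL,
    TestDesc  nvarchar(200),
    TestDesc2 nvarchar(200),
    PRIMARY KEY (SavedDBID, TestID)
);

-- Making a clone's table X reflect a chosen saved version: start from a
-- copy of the source rows, then overlay the recorded changes.
DECLARE @SavedDBID int = 1;   -- the saved version being restored
UPDATE x
SET    x.TestDesc  = c.TestDesc,
       x.TestDesc2 = c.TestDesc2
FROM   X AS x
JOIN   X_Changes AS c ON c.TestID = x.TestID
WHERE  c.SavedDBID = @SavedDBID;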
As for copying databases around on flash cards, you can give them a copy of the source database, but including only the info on the sessions they want.
As one possible solution, virtualise your SQL server. You can have multiple SQL servers if you want, and you can clone and roll them back independently.

How to partially migrate a database to a new system over time?

We are in the process of a multi-year project where we're building a new system and a new database to eventually replace the old system and database. The users are using the new and old systems as we're changing them.
The problem we keep running into is when an object in one system is dependent on an object in the other system. We've been using views, but have run into a limitation with one of the technologies (Entity Framework) and are considering other options.
The other option we're looking at right now is replication. My boss isn't excited about the extra maintenance that would cause. So, what other options are there for getting dependent data into the database that needs it?
Update:
The technologies we're using are SQL Server 2008 and Entity Framework. Both databases are within the same sql server instance so linked servers shouldn't be necessary.
The limitation we're facing with Entity Framework is we can't seem to create the relationships between the table-based-entities and the view-based-entities. No relationship can exist in the database between a view and a table, as far as I know, so the edmx diagram can't infer it. And I cannot seem to create the relationship manually without getting errors. It thinks all columns in the view are keys.
If I leave it that way I get an error like this for each column in the view:
Association End key property [...] is not mapped.
If I try to change the "Entity Key" property to false on the columns that are not the key I get this error:
All the key properties of the EntitySet [...] must be mapped to all the key properties [...] of table viewName.
According to this forum post it sounds like a limitation of the Entity Framework.
Update #2
I should also mention the main limitation of the Entity Framework is that it only supports one database at a time. So we need the old data to appear to be in the new database for the Entity Framework to see it. We only need read access of the old system data in the new system.
You can use linked server queries to leave the data where it is, but connect to it from the other db.
Depending on how up to date the data in each db needs to be, and whether one data source can remain read-only, you can:
Use the Database Copy Wizard to create an SSIS package that you can run periodically as a SQL Agent task
Use snapshot replication
Create a custom BCP in/out process to get the data to the other db
Use transactional replication, which can be near-realtime.
If data needs to be read-write in both databases then you can use:
transactional replication with updatable subscriptions
merge replication
As you go down the list, the amount of work involved in maintaining the solution increases. Using linked server queries will work best if it's the right fit for what you're trying to achieve.
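As a rough illustration of the linked server option (the server, database and table names below are made up):

-- One-off setup on the server that needs the remote data: register the
-- other server as a linked server.
EXEC sp_addlinkedserver
     @server = N'OLDSRV',
     @srvproduct = N'',
     @provider = N'SQLNCLI',
     @datasrc = N'old-server-host';

-- The remote table can then be queried in place using a four-part name.
SELECT c.CustomerID, c.CustomerName
FROM   OLDSRV.OldDatabase.dbo.Customers AS c;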
EDIT: If they're on the same server then, as suggested by another user, you should be able to access the table with servername.databasename.schema.tablename. Looks like it's an Entity Framework issue and not a db issue.
I don't know about EntityToSql but I know in LinqToSql you can connect to multiple databases/servers in one .dbml if you prefix the tables with:
ServerName.DatabaseName.SchemaName.TableName
MyServer.MyOldDatabase.dbo.Customers
I have been able to click on a table in the .dbml, copy and paste it into the .dbml of the alternate project, prefix the name, and set up the relationships, and it works... like I said, this was in LinqToSql, though I have not tried it with EntityToSql. I would give it a shot before you go through all the work of replication and such.
If Linq-to-Entities cannot cross DBs, then replication, or something that emulates it, is the only thing that will work.
For performance purposes you probably want either Merge replication or Transactional with queued (not immediate) updating.
Thanks for the responses. We're going to try adding triggers to the old database tables to insert/update/delete records in the new tables of the new database. This way we can continue to use Entity Framework and also do any data transformations we need.
Once the UI functions move over to the new system for a particular feature, we'll remove the table from the old database and add a view to the old database with the same name that points to the new database table for backwards compatibility.
One thing that I realized needs to happen before we can do this is that we have to search all our code and SQL for @@IDENTITY and replace it with SCOPE_IDENTITY(), so the triggers don't mess up the IDs in the old system.
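As a rough sketch of what one of those triggers might look like (T-SQL; all object and column names here are made up, and any data transformation would go in the SELECT of the INSERT):

-- In the old database: mirror every write on dbo.Employee into the new database.
CREATE TRIGGER dbo.trg_Employee_Sync
ON dbo.Employee
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- Drop the affected rows from the new table, then re-insert the current
    -- versions; rows that were deleted in the old table simply stay removed.
    DELETE n
    FROM NewDb.dbo.Employee AS n
    JOIN deleted AS d ON d.EmployeeId = n.EmployeeId;

    INSERT INTO NewDb.dbo.Employee (EmployeeId, Name)
    SELECT i.EmployeeId, i.Name
    FROM inserted AS i;

    -- If any table written to inside this trigger has an identity column,
    -- @@IDENTITY in the calling code would pick up that value, which is
    -- why the old code is switched to SCOPE_IDENTITY().
END;
GO

-- Later, once the feature has moved, the old table can be replaced by a view
-- with the same name for backwards compatibility:
-- DROP TABLE dbo.Employee;
-- CREATE VIEW dbo.Employee AS SELECT EmployeeId, Name FROM NewDb.dbo.Employee;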

Resources