In the system, there are a couple of Oracle DB servers.
Let's say Oracle DB1 is the primary server holding one master table, and the rest of the Oracle DB servers connect to this primary server using a DB link.
Is there a way to cache the values fetched from the primary DB in the target DBs, so that the DB link call is avoided each time and the value can be fetched from the local Oracle DB's cache?
What are the various caching mechanisms available, if any, along with their advantages and disadvantages?
Does this caching work seamlessly in an Active-Passive node setup, or are additional config settings/code needed?
When a value changes on the primary DB, the consumer DBs need to be notified of the change so they can flush the stale data from their caches. Is any event-driven mechanism possible?
Environment details: Oracle Database 11g Release 1, Unix.
I would appreciate inputs with a sample code snippet on how to do this. Thank you.
One way to allow an application to continue working when a database link is temporarily offline is to use a Materialized View (MV).
This does not work like a cache, however, as the MV would need to be refreshed manually on a schedule (e.g. once every 5 minutes). If the data on the remote database changes, the local database will not see the new results until the MV is refreshed.
For example:
create materialized view tablename_mv
refresh complete on demand
as select * from remotetablename@dblinkname;
Then refresh it periodically with:
begin
dbms_mview.refresh('TABLENAME_MV');
end;
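If you want that refresh to run automatically (say, every 5 minutes as mentioned above), a scheduler job can drive it. A minimal sketch, assuming the MV above; the job name is just a placeholder:
begin
  dbms_scheduler.create_job(
    job_name        => 'REFRESH_TABLENAME_MV',    -- placeholder job name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin dbms_mview.refresh(''TABLENAME_MV''); end;',
    start_date      => systimestamp,
    repeat_interval => 'FREQ=MINUTELY;INTERVAL=5', -- refresh every 5 minutes
    enabled         => true);
end;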
Related
I am developing a Flutter app that should also be able to run in offline mode. Because I am using Flutter, I also want to offer a web version of my application. The application I want to build is data reliant, so to make it work offline I need some kind of local database; but for the web version to work, I also need to store the data in a remote database so it can be accessed from the web. The problem this poses is how to make sure that the local and remote databases are always on the same page. If something is changed from the web it also needs to affect the local database on the device, and if something is changed locally it also has to affect the remote database. I simply need a tip on how to generally achieve what I am looking for.
This can be easily achieved using Firebase Firestore, which supports offline mode.
Or
If you plan to do this from scratch, one way is to keep a separate local database and execute every transaction on it. If online, do the same transactions on the remote database. If offline, keep a record of pending transactions in the local database (preferably in a separate DB from the main DB) and execute them when the remote database is reachable again (a sketch of such a pending-transactions table is shown below, after the alternatives).
You can use Hive or sqflite for the local DB.
Or
Keep track of the records the transactions were performed on. Before synchronizing, merge these records from both local (done offline) and remote (done on the web while the phone was not connected). If multiple transactions were performed on the same record, update both the remote and local DB records to the latest record state from the DB where the latest transaction was performed (if the latest transaction was performed on the remote DB, update the local DB, and vice versa).
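A minimal sketch of the pending-transactions log from the do-it-yourself option, assuming a SQLite schema as used by sqflite (the table and column names are placeholders):
-- hypothetical pending-operations log, kept separate from the main data tables
CREATE TABLE pending_ops (
  id          INTEGER PRIMARY KEY AUTOINCREMENT,
  table_name  TEXT NOT NULL,            -- which table the change touched
  op          TEXT NOT NULL,            -- 'INSERT', 'UPDATE' or 'DELETE'
  row_uuid    TEXT NOT NULL,            -- key of the affected row
  payload     TEXT,                     -- JSON snapshot of the new row state
  created_at  TEXT DEFAULT (datetime('now'))
);

-- when the device comes back online, replay the queue oldest-first
SELECT * FROM pending_ops ORDER BY id;
-- ...apply each row against the remote database, then clear what was applied
DELETE FROM pending_ops WHERE id <= :last_applied_id;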
THE SETUP
Two Databases at different locations
Local Server (Oracle): used for in-house data entry and processing.
Live Server (Postgres): used as the DB for a public website.
THE SCENARIO
Daily insertions/updates/deletions are performed on the Local DB throughout the day.
At the end of the day, the entire data for the current day is pushed to the Live DB server using CSV files and a SQL merge.
This updates the Live DB server with the latest updates and newly inserted data.
THE PROBLEM
As the Live server is updated by a batch run at the end of the day, the deletion operations do not get applied on the Live server.
Because of this, unwanted data remains on the Live server, causing discrepancies between the data on the two servers.
How can the delete operations on the local DB server be applied to the Live server along with the updates and insertions?
P.S. The entire Live DB is to be restructured, so any solution that requires breaking down and restructuring the DB server can also be looked into.
Oracle GoldenGate supports replication from Oracle to PostgreSQL. It would certainly be faster and less error prone than your manual approach since it is all handled at a much lower level by the database.
If for some reason you don't want to do that, then you are back to triggers tracking deletes in a table holding the PKs of the deleted records.
Or you could just switch out the PostgreSQL with Oracle :-)
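For the trigger route, a minimal sketch on the Oracle side, assuming a master table named MASTER_TABLE with a numeric primary key ID (all names are placeholders):
-- hypothetical tracking table for deleted primary keys
create table deleted_rows (
  table_name  varchar2(30),
  pk_value    number,
  deleted_at  date default sysdate
);

create or replace trigger trg_master_table_del
after delete on master_table
for each row
begin
  insert into deleted_rows (table_name, pk_value)
  values ('MASTER_TABLE', :old.id);
end;
/
The nightly batch can then export DELETED_ROWS along with the CSVs and issue the corresponding DELETEs on the Postgres side.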
I have a local SQL Server database that I copy large amounts of data from and into a remote SQL Server database. Local version is 2008 and remote version is 2012.
The remote DB has transactional replication set-up to one local DB and another remote DB. This all works perfectly.
I have created an SSIS package that empties the destination tables (the remote DB) and then uses a Data Flow object to add the data from the source. For flexibility, I have each table in its own Sequence Container (this allows me to run one or many tables at a time). The data flow settings are set to Keep Identity.
Currently, prior to running the SSIS package, I drop the replication settings and then run the package. Once the package completes, I then re-create the replication settings and reinitialise the subscribers.
I do it this way (deleting the replication and then re-creating) for fear of overloading the server with replication commands. Although most tables are between 10s and 1000s of rows, a couple of them are in excess of 35 million.
Is there a recommended way of emptying and re-loading the data of a large replicated database?
I don't want to replicate my local DB to the remote DB as that would not always be appropriate and doing a back and restore of the local DB would also not work due to the nature of the more complex permissions, etc. on the remote DB.
It's not the end of the world to drop and re-create the replication settings each time as I have it all scripted. I'm just sure that there must be a recommended way of managing this...
Don't do it that way; empty/reload is bad. Try to update the tables via MERGE instead. This way you avoid the delete and re-insert, which would otherwise result in two replicated operations per row. Load the new data into temp tables on the other server (not replicated), then merge them into the replicated tables. If a lot of the data is unchanged, this will seriously reduce the replication load.
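A minimal sketch of that staging-plus-MERGE pattern in T-SQL, assuming a non-replicated staging table dbo.Customer_Staging loaded by the SSIS data flow and a replicated target dbo.Customer keyed on CustomerID (all table and column names are placeholders):
-- update changed rows, insert new ones, delete rows that no longer exist in the source
MERGE dbo.Customer AS target
USING dbo.Customer_Staging AS source
   ON target.CustomerID = source.CustomerID
WHEN MATCHED AND (target.Name <> source.Name OR target.Email <> source.Email) THEN
    UPDATE SET target.Name  = source.Name,
               target.Email = source.Email
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CustomerID, Name, Email)
    VALUES (source.CustomerID, source.Name, source.Email)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;
Only the rows that actually change are written to the replicated table, so only those changes are forwarded to the subscribers.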
I'd like to be able to do the following in a HTML5 (iPad) web app:
upload data to an online database (which would probably be <50 MB in size if I were to build the online database in something like SQLite)
extract either a subset or a full copy of data to an offline webdatabase
(travel out of 3G network coverage range)
perform a bunch of analytic-type calculations on the downloaded data
save parameters for my calculations to the offline webdatabase
repeat, saving different parameter sets for several different offline analytic-type calculation sessions over an extended period
(head back into areas with 3G network coverage)
sync the saved parameters from my offline webdatabase to the central, online database
I'm comfortable with every step up till the last one...
I'm trying to find information on whether it's possible to sync an offline webdatabase with a central database, but can't find anything covering the topic. Is it possible to do this? If so, could you please supply link/s to information on it, OR describe how it would work in enough detail to implement it for my specific app?
Thanks in advance
I haven't worked specifically with HTML5 local databases, but I have worked with mobile devices that require offline updates and resyncing to a central data store.
Whether the dataset is created on the server or on the offline client, I make sure its primary key is a UUID. I also make sure to timestamp the record each time its updated.
I also make note of when the offline client was last synced.
So, when resyncing to the central database, I first query the offline client for records that have changed since the last sync. I then query the central database to determine if any of those records have changed since the last sync.
If they haven't changed on the central database, I update them with the data from the offline client. If the records on the server have changed since last sync, I update them to the client.
If the UUID does not exist on the central server but does on the offline client, I insert it, and vice versa.
To purge records, I create a "purge" column, and when the sync query is run, I delete the record from each database (or mark it as inactive, depending on application requirements).
If both records have changed since last update, I have to either rely on user input to reconcile or a rule that specifies which record "wins".
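A minimal sketch of the schema and sync queries this approach implies, assuming a table named items with a UUID key, an updated_at timestamp, and a purge flag (all names are placeholders):
-- every row carries a UUID primary key and a last-modified timestamp
CREATE TABLE items (
  id         TEXT PRIMARY KEY,     -- UUID generated on whichever side created the row
  payload    TEXT,
  updated_at TEXT NOT NULL,        -- timestamp of the last change
  purge      INTEGER DEFAULT 0     -- soft-delete flag, applied on both sides at sync time
);

-- on the offline client: rows changed since the last successful sync
SELECT * FROM items WHERE updated_at > :last_sync;

-- on the central database, for each of those ids: if the server copy has not
-- changed since :last_sync, apply the client's version; if it has, resolve the
-- conflict by user input or a "which record wins" rule as described above.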
I usually don't trust built-in database import functions, unless I'm importing into a completely empty database.
Steps:
Keep a list of changes on the local database.
When connected to remote database, check for any changes since last sync on remote.
If changes on the remote side conflict with local changes, ask the user what to do.
For all other changes, proceed with sync:
download all online changes which did not change locally.
upload all local changes which did not change remotely.
This method can actually work with any combination of databases, provided there is a data converter on one side.
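A minimal sketch of steps 2-4 in SQL, assuming each side keeps a log of changed row keys and that the remote side's log is pulled down at sync time (all names are placeholders):
-- one log per side: key of the changed row plus when it changed
CREATE TABLE local_change_log  (row_uuid TEXT NOT NULL, changed_at TEXT NOT NULL);
CREATE TABLE remote_change_log (row_uuid TEXT NOT NULL, changed_at TEXT NOT NULL);

-- conflicts: rows changed on BOTH sides since the last sync; ask the user about these
SELECT l.row_uuid
FROM   local_change_log  l
JOIN   remote_change_log r ON r.row_uuid = l.row_uuid
WHERE  l.changed_at > :last_sync
  AND  r.changed_at > :last_sync;
-- every other changed row can be copied across in the appropriate direction.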
It looks to me, from a few sites I visited, that (as long as you are using SQLite for your server DB) it should be possible.
HTML5 web databases also use SQLite (although not all browsers support it, and the W3C seems to have dropped support for it)
so...
If you export the data using the .dump command and then import it into the web database using the sqlite3 mydb.db < mydump.sql syntax, you should be able to do this with some fiddling in a PHP or Java backend?
Then, when you want to sync the "offline" data back to your server, just do the opposite: dump the web database to a dump.sql file and import it into the server database.
This site explains exporting to and importing from SQLite dumps
SOURCE: dumping and restoring an SQLite DB
HTML5 supports a browser DB (SQLite); I have tried it in Mozilla and Chrome, and it works fine.
I also had a requirement with an offline form: the user enters the data and clicks Save, and it is saved in the local browser DB. Later, when the user comes online or syncs with the server, they can click a sync button which syncs the data from the browser DB to another data source.
I have 2 databases, one on local server and one on a remote server.
I created a transactional replication publication on the local DB, which feeds the remote DB every minute with whatever updates it gets. So far, this is working perfectly.
However, the local DB needs to be cleaned (all its information deleted) daily. THIS is the part I'm having trouble with: I was expecting a replication mode that would only feed the server DB with the inserts and ignore the step where the local DB gets cleaned. At the moment, the remote DB is also getting cleaned.
Would a different kind of replication help me achieve what I want, or is replication no longer the way to do it?
Have a look at this SO question here