Only part of the data in the database is processed by the application; the rest is needed for reporting, but it degrades application performance. I would like to archive the historical data without modifying the database schema.
Is it possible to replicate the database, delete the old data from the primary instance, and regularly synchronise new changes into the replicated database? That way the primary "transactional" database stays lightweight, while the replicated database holds the full set of both current and historical data for reporting purposes.
Could you recommend some tools or give some tips to achieve that on Oracle?
edit:
I'm wondering if I could use Streams and somehow make a DML handler ignore DELETE operations on rows (docs.oracle.com/cd/B28359_01/server.111/b28321/…) so that during replication the historical rows are preserved even though they are deleted from the transactional DB.
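Something along these lines might work; this is only a sketch, assuming a Streams apply process is already configured, and STRMADMIN.IGNORE_DELETES and APP.SOME_TABLE are placeholder names:

    -- Hypothetical DML handler registered only for DELETE LCRs: it returns
    -- without calling lcr.EXECUTE, so the delete is never applied at the
    -- reporting database and the historical row survives there.
    CREATE OR REPLACE PROCEDURE strmadmin.ignore_deletes(in_any IN ANYDATA) IS
    BEGIN
      NULL;  -- deliberately discard the DELETE
    END;
    /

    BEGIN
      DBMS_APPLY_ADM.SET_DML_HANDLER(
        object_name    => 'APP.SOME_TABLE',   -- placeholder table
        object_type    => 'TABLE',
        operation_name => 'DELETE',
        error_handler  => FALSE,
        user_procedure => 'STRMADMIN.IGNORE_DELETES');
    END;
    /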
You don't need to create two separate databases. Just create one transactional database where you save all your transactions, and then create views based on these tables to show the required data. That way you only have to maintain one database.
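For instance, something like this, where the orders table and its archived flag are made-up names:

    -- Hypothetical: the application reads only current rows through a view,
    -- while the underlying table keeps full history for reporting.
    CREATE VIEW app_orders AS
    SELECT *
    FROM   orders
    WHERE  archived = 'N';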
I have a specific requirement in transactional replication, but I am not sure whether it is achievable. Could you please tell me whether there is any way to achieve it?
Requirement:
As per the requirement, there will be two databases: one is the publication database and the other is the subscription database.
I want to replicate some of the tables (articles) of the publication database to the subscription database, but I want to replicate the data only. The replicated tables must always be present in the subscription database; they may be empty initially, and once replication starts they get their data from the publication database.
But I don't want replication to create these tables for me in the subscription database. I want it to use the already-created tables, which have the same schema as the publication database tables.
When you configure a publication, you can set properties for each article. One of the article properties is called Action if name is in use. You can set it to the option Keep existing object unchanged.
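If you script the publication instead of using the GUI, the equivalent (as far as I know) is the @pre_creation_cmd argument of sp_addarticle; the publication and table names below are placeholders:

    -- Hypothetical article definition: @pre_creation_cmd = N'none' leaves an
    -- existing table at the subscriber untouched when the snapshot is applied.
    EXEC sp_addarticle
        @publication       = N'MyPublication',
        @article           = N'Cases',
        @source_owner      = N'dbo',
        @source_object     = N'Cases',
        @destination_table = N'Cases',
        @pre_creation_cmd  = N'none';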
I'm currently developing one project of many to come, which will use its own database and also data from a central database.
Example:
the database "accountancy" with all accountancy package specific tables.
the database "personelladministration" with its specific tables
But we also use data which is general and will be used in all projects like "countries", "cities", ...
So we have put these tables in a separate database called "general"
We come from a DB2 environment where we could create foreign keys between databases.
However, we are switching to MS SQL Server, where foreign keys between databases are not possible.
I have seen that a workaround would be to use triggers, but I'm not convinced that is a clean solution.
Are we doing something wrong in our setup? It seems right to me to put tables with general data in a separate database instead of having a "countries" table in every database; that seems difficult to maintain and inefficient.
What could be a good approach to overcome this?
I would say that countries is not a terrible table to reproduce in multiple databases. I would rather duplicate static data like that than use more elaborate techniques. In SQL Server a database's schema is physical to that database and cannot be shared, which is why people use replication or triggers for shared data.
I came across this problem a while back. We have one database for authentication, but those users have to be shared across multiple applications, some of which have their own database.
We resorted to replication and a custom Authentication/Registration service agent to keep the data up to date.
Using views, as Sourav_Agasti suggested in his answer, would be the most straightforward approach for static data. You can create views and indexed views, and join data from databases on linked servers.
Create a loopback linked server and then create a view (if required, on each database) that accesses the table in this "central database" through the linked server. There will be a minor performance impact, but it is more than compensated for by the simplicity.
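A rough sketch of the idea; LOOPBACK, General and Countries are placeholder names:

    -- Hypothetical loopback linked server pointing back at the same instance.
    DECLARE @srv sysname;
    SET @srv = @@SERVERNAME;
    EXEC sp_addlinkedserver
        @server     = N'LOOPBACK',
        @srvproduct = N'',
        @provider   = N'SQLNCLI',
        @datasrc    = @srv;
    GO

    -- Local view over the shared table in the central "General" database.
    CREATE VIEW dbo.Countries
    AS
    SELECT CountryCode, CountryName
    FROM   LOOPBACK.General.dbo.Countries;
    GO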
I'm upgrading my SQL Server 2000 database to SQL Server 2008 R2 and want to make use of the Change Data Capture feature. In my existing application I have similar functionality, implemented with triggers and historical tables with an Hst_ prefix and almost the same schema as the original tables.
My question is: is there any way to migrate my data from the Hst_ tables to the tables used by the CDC feature?
I was thinking of doing that like this:
I have the table Cases.
I'm using my custom historization mechanism, so I also have three triggers (on insert, update and delete) and a twin table Hst_Cases.
Now I enable CDC on the table Cases.
CDC creates a function that returns the historical data (fn_cdc_get_all_changes_dbo_Cases) and a system table that actually holds the data (cdc.dbo_Cases_CT).
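For reference, enabling it looks roughly like this, assuming default options:

    -- Enable CDC at the database level, then for the Cases table.
    EXEC sys.sp_cdc_enable_db;

    EXEC sys.sp_cdc_enable_table
        @source_schema = N'dbo',
        @source_name   = N'Cases',
        @role_name     = NULL;  -- no gating role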
I could insert the data from Hst_Cases into cdc.dbo_Cases_CT, but I have the following problems:
I don't know how to get __$start_lsn and __$seqval.
It is difficult to figure out __$update_mask (I would have to compare every pair of consecutive rows).
Is that the only way to do it? I want to avoid a situation where I have to join the "new" historical data with the "old" historical data from the Hst_ tables.
Thanks!
You typically don't want to use the capture tables to store long-term change data; it is better to have an SSIS package move the captured data to permanent tables. If you do use them, note that if you ever have to restore your database they will be empty after the restore unless you use the KEEP_CDC option. You'll also need to disable the job that automatically purges the capture tables.
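For reference, a restore that preserves the capture tables might look like this (database name and path are placeholders):

    -- Hypothetical restore keeping CDC metadata and capture-table contents.
    RESTORE DATABASE MyDb
    FROM DISK = N'C:\Backups\MyDb.bak'
    WITH KEEP_CDC, RECOVERY;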
If you create your own tables for storage, you can omit the lsn and mask fields.
I've been tasked with revisiting a database schema we designed and use internally for various ticketing and reporting systems. Currently there are about 40 tables in one Oracle database schema supporting perhaps six webapps.
However, there's one unifying relationship amongst them all: a rooms table describing the rooms. Room name, purpose and other data are kept in a single table shared by every app. My initial idea was to pull each of these applications into a separate database and perform joins between a given database and the rooms database. But I've discovered this solution prevents foreign key constraints in SQL Server 2005. It seems silly to duplicate one table for each app and keep those multiple copies synchronized.
Should I just leave everything in one large DB, or is there something else I can do to separate the tables without losing FK constraints?
The only way to achieve built-in referential integrity is to have the table inside the database in which it is referenced. You might be able to achieve the equivalent of referential integrity using triggers but it would likely be deathly slow.
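As an illustration of the trigger approach (all names here are made up, and it shows why it gets slow: every insert or update pays for a cross-database lookup):

    -- Hypothetical substitute for a cross-database FK: reject rows in
    -- AppDb.dbo.Bookings whose RoomId is missing from RoomsDb.dbo.Rooms.
    CREATE TRIGGER trg_Bookings_RoomFK
    ON dbo.Bookings
    AFTER INSERT, UPDATE
    AS
    BEGIN
        IF EXISTS (
            SELECT 1
            FROM   inserted i
            WHERE  NOT EXISTS (SELECT 1
                               FROM   RoomsDb.dbo.Rooms r
                               WHERE  r.RoomId = i.RoomId))
        BEGIN
            RAISERROR('RoomId not found in RoomsDb.dbo.Rooms', 16, 1);
            ROLLBACK TRANSACTION;
        END
    END;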
You might be able to use SQL Server replication, in its "transactional replication" mode/form: http://msdn.microsoft.com/en-us/library/ms151176.aspx
If all the apps truly use and depend on the rooms, then keep them all in one DB.
You can still set privileges on the tables properly, and manage the data sets in the non-overlapping areas normally.
Is there any task you imagine you will not be able to perform when things are together?
I was thinking of putting staging tables, and the stored procedures that update those tables, into their own schema, such that when importing data from SomeTable into the data warehouse I would run an Initial.StageSomeTable procedure, which would insert the data into the Initial.SomeTable table. This way all the procs and tables dealing with the Initial staging are grouped together. Then I'd have a Validation schema for that stage of the ETL, etc.
This seems cleaner than trying to uniquely name all of these very similar tables, since each table will have multiple instances of itself throughout the staging process.
Question: Is using a user schema to group tables/procs/views together an appropriate use of user schemas in MS SQL Server? Or are user schemas supposed to be used for security, such as grouping permissions together for objects?
This is actually a recommended practice. Take a look at the Microsoft Business Intelligence ETL Design Practices from Project REAL. You will find (download the doc from the first link) that they use quite a few schemata to group and identify objects in the warehouse.
In addition to dbo and etl, they also use admin, audit, part, olap and a few more.
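As a sketch of that kind of grouping (the names are made up):

    -- Hypothetical: one schema per ETL stage, same table name in each.
    CREATE SCHEMA Initial;
    GO
    CREATE SCHEMA Validation;
    GO
    CREATE TABLE Initial.SomeTable    (Id int, Payload nvarchar(100));
    CREATE TABLE Validation.SomeTable (Id int, Payload nvarchar(100));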
I think it's appropriate enough; it doesn't really matter. You could use another database if you liked, which is actually what we do.
I'm not sure why you would want a validation schema, though. What are you going to do there?
Both the reasons you list (purpose/intent, security) are valid reasons to use schemas. Once you start using them, you should always specify schema when referencing an object (although I'm lazy and never specify dbo).
One trick we use is to have the same-named table in each of several schemas, combined with table partitioning (available in SQL 2005 and up). Load the data into the first schema; then, when it's validated, "swap" the partition into dbo, after first swapping the dbo partition into a "dumpster" schema copy of the table. Net production downtime is measured in seconds, and it's all carefully wrapped in a declared transaction.
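A minimal sketch of that swap, assuming dbo.Fact, staging.Fact and dumpster.Fact are identically partitioned copies of the same table and partition 3 is the one being reloaded (all names and numbers are made up):

    BEGIN TRANSACTION;

    -- Move the current production partition out of the way...
    ALTER TABLE dbo.Fact     SWITCH PARTITION 3 TO dumpster.Fact PARTITION 3;
    -- ...then swap the validated, freshly loaded partition into production.
    ALTER TABLE staging.Fact SWITCH PARTITION 3 TO dbo.Fact PARTITION 3;

    COMMIT TRANSACTION;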