Tools for creating a "test"-like environment on a development database

We have an Oracle Enterprise database in production and another instance that we use as both a QA and development database. QA has requested separate schemas within the database so they can test applications in isolation from changes made by developers. For example, say the schema used in development, and the one that will be used in production, is called APP_OWNER, and APP_OWNER contains tables with FK references to tables in other schemas, say BASE_OWNER. The idea would be to create a QA_APP_OWNER schema, pull the production data into it, and also pull in any BASE_OWNER tables that are referenced. A simplified illustration would be:
Prod Setup:
----------------
BASE_OWNER.users
APP_OWNER.users (synonym to BASE_OWNER.users)
APP_OWNER.audit_users with FK to BASE_OWNER.users
QA Setup:
----------------
QA_APP_OWNER.users (copied data from prod)
QA_APP_OWNER.audit_users (FK to QA_APP_OWNER.users)
This should be possible because we do not write code/SQL that includes schema names (i.e., we create schema-based synonyms for tables outside the schema the application runs in).
My question is, are there good tools for easily creating such a QA_APP_OWNER schema? I'm aware of the FROMUSER/TOUSER options of the import utility, but if I remember correctly that only moves an entire schema to another schema; it won't get me all the way there because I also need to change the references on the FKs. I'm unaware of a way short of exporting the DDL, manually changing it, and then importing the data manually. This is not an attractive option, as many references are to tables that themselves reference other tables, and the APP_OWNER schema has a plethora of tables itself. My fear is that the more manual this is, the greater the likelihood of a mistake that will let something that passes testing break when it is moved to the production environment. A nice solution would be to have licenses for both a dev and a QA instance of Oracle, but I have been told "it isn't in the budget" to do so.

Don't do it. Set up separate QA and development databases. What you want just isn't worth the hassle.

Bit of a long shot, but will the impdp REMAP_SCHEMA option handle foreign keys in other schemas? I know there are some things it doesn't attempt to deal with, but I don't recall this scenario being mentioned - possibly just because it's unusual, though.
Potentially you could do a single expdp of all the schemas, and an impdp remapping them all to QA_APP_OWNER in one go. Clearly this isn't something I've ever tried...
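For what it's worth, a minimal sketch of that approach, assuming the standard DATA_PUMP_DIR directory object and an illustrative dump file name (this is untested against the FK question above):

expdp system SCHEMAS=APP_OWNER,BASE_OWNER DIRECTORY=DATA_PUMP_DIR DUMPFILE=app_owner.dmp LOGFILE=app_owner_exp.log

impdp system DIRECTORY=DATA_PUMP_DIR DUMPFILE=app_owner.dmp LOGFILE=app_owner_imp.log REMAP_SCHEMA=APP_OWNER:QA_APP_OWNER REMAP_SCHEMA=BASE_OWNER:QA_APP_OWNER

One thing to watch: any objects that share a name across the two source schemas will collide when both are remapped into QA_APP_OWNER, so check the import log for skipped objects.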

Related

Compare SQL Server DB schema & data (at the same time) and generate scripts

I've got a reasonably large / complicated DB which I need to upgrade in the field from version 1 to version 2. There's a lot of changes in schema and importantly data between the two.
Yes, I know this should have been version controlled, à la:
http://www.codinghorror.com/blog/2008/02/get-your-database-under-version-control.html
but it wasn't - it will be when I am done.
So, the current problem: I'm faced with the choice of either going through all the commits or trying to diff between two versions of the DB. So far I've tried:
http://opendbiff.codeplex.com/
http://www.sqldelta.com/
http://www.red-gate.com/
However, none of them seem to be able to successfully generate schema upgrade scripts, because they don't also do the data at the same time. This results in foreign key violations when adding new keys to tables, as the referenced table is new: while its schema has been created, the data it contains has not. Well, it could be, but that requires me to use a different part of the tool and then mix the two scripts together.
I know this may look like a duplicate of:
What is best tool to compare two SQL Server databases (schema and data)?
which is where I found most of the existing tools I've tried, but so far I've not managed to get any of these to produce a working schema migration script (I'm really not too fussed about the data, but I do need the data that the foreign keys require - which, to be honest, is all the difference, as I've deployed both the old and new versions).
Am I expecting too much?
Should I give up and start manually stitching together what I do have?
Or do I go through all the commits and manually create upgrade scripts?
I can't think of more powerful tools available than the ones you seem to have tried. If those fail, my homegrown versioning system probably won't help you much either.
However, you should be able to generate an update script and then manually edit it to add the data transformations to it.
And/or you could disable the foreign key constraints for the time that the update script runs.
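If you go the route of disabling constraints, a minimal sketch in T-SQL (the table and constraint names are hypothetical):

-- disable the constraint before the upgrade script runs
ALTER TABLE dbo.Orders NOCHECK CONSTRAINT FK_Orders_Customers;
-- ... run the schema/data upgrade here ...
-- re-enable afterwards; WITH CHECK re-validates the existing rows
ALTER TABLE dbo.Orders WITH CHECK CHECK CONSTRAINT FK_Orders_Customers;

You can also use NOCHECK CONSTRAINT ALL / CHECK CONSTRAINT ALL per table to avoid listing each constraint by name.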
There is no such thing as doing schema and data "at the same time". Even if you have them in one big script, you would still be applying the schema first and then the data. If the schema script creates a new table and adds a constraint to it, there is no reason you should get a referential integrity violation, as there are no rows in those tables yet.
In any case, you should give our xSQL Schema Compare and Data Compare tools a try; you will be impressed with the performance and the level of control you get.

Linking tables between databases

I'm after a bit of advice on the best way to go about this in SQL Server 2008 R2 Express. I have a number of applications that are in separate databases on the same server. They are all "plugins" that use a central staff/structure list that will be in a separate database. The application is in the process of being migrated from JET.
What I'm looking for is the best way for all the "plugin" databases to be able to see the central database and use its tables in standard queries, views, etc.
As I'm using Express, that rules out any replication solution, and so far the only option I can think of is to use triggers or a stored procedure to "push" all the changes out to the plugins. The information needs to be populated on a near enough real-time basis; however, the number of changes will be very small, maybe up to 100 a day, and the biggest table only has about 1000 rows at the moment (the staff names table).
Hopefully that covers everything, but if anyone needs any more details then just ask.
Thanks
Apologies if I've misunderstood, but from your description it sounds like all these databases are hosted on the same instance of SQL Server - it's your mention of replication that makes me uncertain.
Assuming that's the case, you should be able to replace any copies of tables from the central database that are held in the "plugin" databases with views or synonyms that reference the central tables directly, since SQL Server allows you to reference objects across databases on the same server using three-part naming (database_name.schema_name.object_name).
For example, if each plugin db has a table StaffNames, you could replace this with a view by dropping the table, then creating a view:
drop table StaffNames
go
create view StaffNames
as
select * from <centraldbname>.<schema - probably dbo>.StaffNames
go
and your code should continue to work seamlessly, as long as permissions are set up.
Alternatively, you could replace all the references to the shared tables in the plugin databases with three-part name references to the central database, but the view method requires less work.
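If you'd rather not maintain views, a synonym does much the same job - a minimal sketch, with CentralDB standing in for whatever your central database is actually called:

drop table StaffNames
go
create synonym dbo.StaffNames for CentralDB.dbo.StaffNames
go

Queries against StaffNames in the plugin database then resolve to the central table, again subject to permissions.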

Is there a good way to verify if a database schema is correct after an upgrade or migration?

We have customers who are upgrading from one database version to another (Oracle 9i to Oracle 10g or 11g to be specific). In one case, a customer exported the old database and imported it into the new one, but for some reason the indexes and constraints didn't get created. They may have done this on purpose to speed up the import process, but we're still looking into the reason why.
The real question is, is there a simple way that we can verify that the structure of the database is complete after the import? Is there some sort of checksum that we can do on the structure? We realize that we could do a bunch of queries to see if all the tables, indexes, aliases, views, sequences, etc. exist, but this would probably be difficult to write and maintain.
Update
Thanks for the answers suggesting commercial and/or GUI tools to use, but we really need something free that we could package with our product. It also has to be command line or script driven so our customers can run it in any environment (unix, linux, windows).
Presuming a single schema, something like this - dump USER_OBJECTS into a table before migration.
CREATE TABLE SAVED_USER_OBJECTS AS SELECT * FROM USER_OBJECTS
Then to validate after your migration
SELECT object_type, object_name FROM SAVED_USER_OBJECTS
MINUS
SELECT object_type, object_name FROM USER_OBJECTS
One issue is that if you have intentionally dropped objects between versions, you will also need to delete them from SAVED_USER_OBJECTS. Also, this will not pick up cases where the wrong version of an object exists.
If you have multiple schemas, then the same thing is required for each schema OR use ALL_OBJECTS and extract/compare for the relevant user schemas.
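A rough sketch of the multi-schema variant, with SCHEMA_ONE and SCHEMA_TWO as placeholders for your real schema names:

CREATE TABLE SAVED_ALL_OBJECTS AS
  SELECT owner, object_type, object_name
  FROM all_objects
  WHERE owner IN ('SCHEMA_ONE', 'SCHEMA_TWO');

-- after the migration
SELECT owner, object_type, object_name FROM SAVED_ALL_OBJECTS
MINUS
SELECT owner, object_type, object_name FROM all_objects;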
You could also do a hash/checksum on object_type||object_name for the whole schema (save before, compare after), but the cost of the calculation wouldn't be much different from simply comparing the two tables.
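For the checksum idea, one possibility (ORA_HASH is available from 10g onward; on 9i you would need DBMS_UTILITY.GET_HASH_VALUE instead):

SELECT SUM(ORA_HASH(object_type || '.' || object_name)) AS schema_checksum
FROM user_objects;

Run it before and after and compare the two numbers; it tells you something changed, but not what, which is why the table comparison above is usually more useful.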
If you are willing to spend some money, DBDiff is an efficient utility that does exactly what you need.
http://www.dkgas.com/oradbdiff.htm
In SQL Developer (the free Oracle utility) there is a Database Schema Differences feature.
It's worth trying.
Hope it helps.
SQL Developer - download
Roni.
I wouldn't write the check script; I'd write a program to generate the check script from a particular version of the database. Just go through the metadata, record what's there, and write it to a file, then compare the values in that file against the values in the customer's database. This won't work so well if you use system-generated names for your constraints, but it is probably enough to just verify that things are there. Dropping indexes and constraints is pretty common when migrating a database, so you might not even need to check too much; if two or three things are missing, then it's not unreasonable to assume they all are. You might also want to write a script that drops all the constraints and indexes and re-creates them, and just have your customers run that as a post-migration step. Just be sure you drop everything by name, so you don't delete any custom indexes your customer might have created.
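A minimal sketch of the "record what's there" step as a SQL*Plus script (the file name and the columns chosen are illustrative; it deliberately avoids constraint names because of the system-generated-name problem mentioned above):

SET PAGESIZE 0 FEEDBACK OFF HEADING OFF
SPOOL object_inventory.txt
SELECT object_type || ' ' || object_name FROM user_objects ORDER BY 1;
SELECT 'CONSTRAINT ' || constraint_type || ' ON ' || table_name FROM user_constraints ORDER BY 1;
SPOOL OFF

Running the same script against the reference database and the customer's database and diffing the two spool files gives a crude but scriptable check that works anywhere SQL*Plus does.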

Grouping ETL Staging Tables With User Schemas?

I was thinking of putting staging tables, and the stored procedures that update those tables, into their own schema, such that when importing data from SomeTable into the data warehouse, I would run an Initial.StageSomeTable procedure which would insert the data into the Initial.SomeTable table. This way all the procs and tables dealing with the initial staging are grouped together. Then I'd have a Validation schema for that stage of the ETL, and so on.
This seems cleaner than trying to uniquely name all these very similar tables, since each table will have multiple instances of itself throughout the staging process.
Question: Is using a user schema to group tables/procs/views together an appropriate use of user schemas in MS SQL Server? Or are user schemas supposed to be used for security, such as grouping permissions together for objects?
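For concreteness, the layout being described might look something like this (names are illustrative, and the dbo.SomeTable source is hypothetical):

CREATE SCHEMA Initial AUTHORIZATION dbo;
GO
CREATE SCHEMA Validation AUTHORIZATION dbo;
GO
CREATE TABLE Initial.SomeTable (Id INT NOT NULL, LoadedAt DATETIME NOT NULL DEFAULT GETDATE());
GO
CREATE PROCEDURE Initial.StageSomeTable AS
BEGIN
    -- illustration only: pull from a hypothetical source into the staging copy
    INSERT INTO Initial.SomeTable (Id) SELECT Id FROM dbo.SomeTable;
END;
GO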
This is actually a recommended practice. Take a look at the Microsoft Business Intelligence ETL Design Practices from Project REAL. You will find (download the doc from the first link) that they use quite a few schemata to group and identify objects in the warehouse.
In addition to dbo and etl, they also use admin, audit, part, olap and a few more.
I think it's appropriate enough; it doesn't really matter. You could use another database if you liked, which is actually what we do.
I'm not sure why you would want a Validation schema though - what are you going to do there?
Both the reasons you list (purpose/intent, security) are valid reasons to use schemas. Once you start using them, you should always specify schema when referencing an object (although I'm lazy and never specify dbo).
One trick we use is to have the same-named table in each of several schemas, combined with table partitioning (available in SQL 2005 and up). Load the data in the first schema, then when it's validated "swap" the partition into dbo - after first swapping the dbo partition into a "dumpster" schema copy of the table. Net production downtime is measured in seconds, and it's all carefully wrapped in a declared transaction.
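A rough sketch of that swap using whole-table SWITCH, assuming three identically structured tables on the same filegroup (names are illustrative; the partitioned version adds PARTITION clauses):

BEGIN TRANSACTION;
TRUNCATE TABLE Dumpster.Sales;                   -- SWITCH requires an empty target
ALTER TABLE dbo.Sales SWITCH TO Dumpster.Sales;  -- move current production rows out of the way
ALTER TABLE Staging.Sales SWITCH TO dbo.Sales;   -- slide the validated rows into production
COMMIT TRANSACTION;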

Schema in SQL Server 2008

What is the difference between creating ordinary tables under "dbo" and creating tables under other schemas? How do schemas work, and how do they relate to the tables they contain?
A schema is just a container for DB objects - tables, views etc. It allows you to structure a very large database solution you might have. As a sample, have a look at the newer AdventureWorks sample databases - they have a number of schemata included, like "HumanResources" and so forth.
A schema can be a security boundary, e.g. you can give or deny certain users access to a schema as a whole. A schema can also be used to keep tables with the same name apart, e.g. you could create a "user schema" for each user of your application, and have a "Settings" table in each of them, holding that user's settings, e.g. "Bob.Settings", "Mary.Settings" etc.
In my experience, schemata are not used very often in SQL Server. It's a way to organize your database objects into containers, but unless you have a huge amount of database objects, it's probably something you won't really use much.
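To illustrate the security-boundary point, permissions can be granted or denied at the schema level rather than object by object - a small sketch with hypothetical principal names:

GRANT SELECT ON SCHEMA::HumanResources TO ReportingUser;
DENY SELECT ON SCHEMA::HumanResources TO ContractorUser;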
dbo is a schema.
See if this helps.
Schema seems to be a way of categorizing objects (tables/stored procs/views etc).
Think of it as a bucket to organize related objects based on functionality.
I am not sure how a logged-in SQL user is tied to a specific schema, though.
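On that last point, a database user is associated with a default schema, which is what unqualified object names resolve against first - for example (names hypothetical):

CREATE USER Bob FOR LOGIN Bob WITH DEFAULT_SCHEMA = Sales;
ALTER USER Bob WITH DEFAULT_SCHEMA = dbo;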
