We have some tables with automatic clustering enabled in database D and schema PUBLIC. We are going to rename the database and the schema with the following commands:
ALTER DATABASE D RENAME TO D_RENAMED;
ALTER SCHEMA PUBLIC RENAME TO PRODUCTION;
We would like to know whether there is anything else we have to do for this operation, and whether there are any side effects afterwards, especially in terms of cost for the automatic clustering. Thanks
A rename operation is simply a metadata change. There is no impact in terms of costs in renaming a schema.
The only impact is that SQL queries that explicitly reference the old schema name will break. You may also want to double-check that permissions on the schema are what you expect.
Rather than renaming the PUBLIC schema, why not just clone it to a schema named PRODUCTION instead? That way you keep the PUBLIC schema, which is always created with a new database. Even this has no impact on costs despite there being two schemas with the same data (unless you make a lot of modifications in both, due to Time Travel/Fail-safe storage); see the cloning documentation.
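If you go the clone route, a minimal sketch (run in database D; Snowflake clones are zero-copy, so only metadata is created until the two schemas' data diverges):
-- keep PUBLIC in place; PRODUCTION shares its storage until data diverges
CREATE SCHEMA PRODUCTION CLONE PUBLIC;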
I have a SQL Server database that uses schemas to logically group objects; there are ten schemas in the database.
If I baseline my database and create the schema history table in the “foo” schema, will Flyway apply a migration from the migration folder that operates on an object in the “bar” schema?
Do I need one folder of migration scripts for each schema in the database? The documentation explains how to specify schemas on the command line but doesn't make clear why I must.
The list of schemas on the command line has two effects:
the first named schema is where the history table goes
the named schemas are the ones that are cleaned by flyway clean
(Note: in 6.1 the defaultSchema command-line parameter was introduced to separate these two usages.)
Migrations can refer to any schema in the database that you have access to - indeed, some objects may exist in one schema but depend on objects in another. If you're happy with the history table to go in dbo, and want to control the whole database with Flyway, just don't set these parameters. A folder of scripts per schema may help you with maintaining them but it is not necessary.
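As a sketch of what that looks like on the command line (the connection details, credentials, and schema names here are placeholders), this puts the history table in foo and limits flyway clean to foo and bar:
flyway -url="jdbc:sqlserver://localhost;databaseName=MyDb" -user=deploy -password=secret -defaultSchema=foo -schemas=foo,bar migrate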
We have a large (>40 GB) FILESTREAM-enabled database in production. I would like to automatically back this database up and restore it to staging for testing deployments. The nature of our environment is such that the FILESTREAM data is >90% of the data, and I don't need it in staging.
Is there a way to back up the database without the FILESTREAM data? This would drastically reduce my staging disk and network requirements while still letting me test against a (somewhat) representative sample of prod.
I am assuming you have a fairly recent version of SQL Server. Since this is production, I am also assuming you are in the full recovery model.
You can't just exclude individual tables from a backup; backup and restore do not work like that. The only possibility I can think of is to back up just the filegroups that do not contain the FILESTREAM data. I am not 100% sure you will be able to restore it, though, since I have never tried it. Spend some time researching partial backups and restoring a filegroup, and give it a try.
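An untested sketch of that idea, assuming the FILESTREAM data sits in its own filegroup and everything else is in PRIMARY; the database, path, and logical file names are all hypothetical:
-- back up only the filegroup(s) that do not contain FILESTREAM data
BACKUP DATABASE ProdDb
    FILEGROUP = 'PRIMARY'
TO DISK = N'Z:\backups\ProdDb_primary.bak'
WITH INIT;
-- on staging: piecemeal restore of just that filegroup
RESTORE DATABASE ProdDb
    FILEGROUP = 'PRIMARY'
FROM DISK = N'Z:\backups\ProdDb_primary.bak'
WITH PARTIAL, RECOVERY,
    MOVE 'ProdDb' TO N'D:\data\ProdDb.mdf',
    MOVE 'ProdDb_log' TO N'D:\data\ProdDb.ldf';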
You can use the Generate Scripts interface and do one of the following:
copy all SQL objects and the data (without the FILESTREAM tables) and recreate the database;
copy all SQL objects without the data, create the objects in a new database on the current SQL instance, and copy the data that you need directly from the first database.
The first is lazy and probably will not work well with a big database. The second will work for sure, but you need to sync the data on your own.
In both cases, open the Generate Scripts interface (right-click the database, then Tasks > Generate Scripts), choose all objects and all tables except the big ones, and use the scripting options to control the data extraction (skip or include data).
I guess it will be best to script all the objects without the data. Then create a model database; you can even add some sample data to it. When you change the production database (create a new object, delete an object, etc.), apply these changes to your model database, too. Having such a model database means you have a copy of your production database with all supported functionality, and you can restore this model database on every test SQL instance you want.
We have an Oracle Enterprise database in production and another instance that we use as both a QA and development database. QA has requested separate schemas within the database so they can test applications in isolation from changes made by developers. For example, say the schema used in development, and the one that will be used in production, is called APP_OWNER, and APP_OWNER can contain tables that have FK references to tables in other schemas, say in BASE_OWNER. The idea would be to create a QA_APP_OWNER schema, pull the production data over into that schema, and pull any referenced BASE_OWNER tables into the QA_APP_OWNER schema as well. A simplified illustration would be:
Prod Setup:
----------------
BASE_OWNER.users
APP_OWNER.users (synonym to BASE_OWNER.users)
APP_OWNER.audit_users with FK to BASE_OWNER.users
QA Setup:
----------------
QA_APP_OWNER.users (copied data from prod)
QA_APP_OWNER.audit_users (FK to APP_OWNER.users)
This should be possible, as we do not write code/SQL that includes schema names (i.e., we create schema-based synonyms for tables outside the schema the application runs in).
My question is: are there good tools for easily creating such a QA_APP_OWNER schema? I'm aware of the FROMUSER/TOUSER options of export, but if I remember correctly this moves an entire schema to another schema; it won't get me all the way there, because I need to change the references on the FKs. I'm unaware of a way short of exporting the DDL, manually changing it, and then importing the data manually. This is not an attractive option, as many references are to tables that also reference other tables, and the APP_OWNER schema has a plethora of tables itself. My fear is that the more manual this is, the greater the likelihood of a mistake that will allow something being tested to break when moved to the production environment. A nice solution would be to have licenses for both a dev and a QA instance of Oracle, but I have been told "it isn't in the budget" to do so.
Don't do it. Set up separate QA and development databases. What you want just isn't worth the hassle.
Bit of a long shot, but will the impdp REMAP_SCHEMA option handle foreign keys in other schemas? I know there are some things it doesn't attempt to deal with but don't recall this scenario being mentioned - possibly just because it's unusual though.
Potentially you could do a single expdp of all the schemas, and an impdp remapping them all to QA_APP_OWNER in one go. Clearly this isn't something I've ever tried...
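A hedged sketch of that approach (untried, as noted above; the connect strings, directory object, and dump file name are placeholders, and REMAP_SCHEMA can be repeated once per source schema):
expdp system@prod schemas=APP_OWNER,BASE_OWNER directory=DATA_PUMP_DIR dumpfile=app.dmp
impdp system@qa remap_schema=APP_OWNER:QA_APP_OWNER remap_schema=BASE_OWNER:QA_APP_OWNER directory=DATA_PUMP_DIR dumpfile=app.dmp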
I was thinking of putting staging tables and the stored procedures that update them into their own schema, so that when importing data from SomeTable into the data warehouse, I would run an Initial.StageSomeTable procedure which would insert the data into the Initial.SomeTable table. This way all the procs and tables dealing with the Initial staging are grouped together. Then I'd have a Validation schema for that stage of the ETL, and so on.
This seems cleaner than trying to uniquely name all these very similar tables, since each table will have multiple instances of itself throughout the staging process.
Question: Is using a user schema to group tables/procs/views together an appropriate use of user schemas in MS SQL Server? Or are user schemas supposed to be used for security, such as grouping permissions together for objects?
This is actually a recommended practice. Take a look at the Microsoft Business Intelligence ETL Design Practices from Project REAL. You will find (download the doc from the first link) that they use quite a few schemata to group and identify objects in the warehouse.
In addition to dbo and etl, they also use admin, audit, part, olap and a few more.
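A minimal sketch of the schema-per-stage layout from the question (Initial, Validation, SomeTable, and StageSomeTable are the question's hypothetical names; the extract source table is an assumption for illustration):
CREATE SCHEMA Initial;
GO
CREATE SCHEMA Validation;
GO
CREATE TABLE Initial.SomeTable (Id int NOT NULL, Payload nvarchar(200));
GO
CREATE PROCEDURE Initial.StageSomeTable
AS
BEGIN
    SET NOCOUNT ON;
    -- load the raw extract into the Initial copy of the table;
    -- dbo.SomeTableExtract is an assumed source for illustration
    INSERT INTO Initial.SomeTable (Id, Payload)
    SELECT Id, Payload FROM dbo.SomeTableExtract;
END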
I think it's appropriate enough; it doesn't really matter. You could use another database if you liked, which is actually what we do.
I'm not sure why you would want a validation schema, though. What are you going to do there?
Both the reasons you list (purpose/intent, security) are valid reasons to use schemas. Once you start using them, you should always specify the schema when referencing an object (although I'm lazy and never specify dbo).
One trick we use is to have the same-named table in each of several schemas, combined with table partitioning (available in SQL 2005 and up). Load the data into the first schema, then once it's validated "swap" the partition into dbo, after swapping the dbo partition into a "dumpster" schema copy of the table. Net production downtime is measured in seconds, and it's all carefully wrapped in an explicit transaction.
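A hedged sketch of that swap, with hypothetical table and schema names; it assumes all three tables are partitioned identically and that the dumpster copy's target partition is empty before the switch:
BEGIN TRANSACTION;
-- park the current production partition in the dumpster copy
ALTER TABLE dbo.Sales SWITCH PARTITION 1 TO dumpster.Sales PARTITION 1;
-- promote the freshly loaded, validated partition into production
ALTER TABLE staging.Sales SWITCH PARTITION 1 TO dbo.Sales PARTITION 1;
COMMIT TRANSACTION;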
I have a database in which the main table has a really badly named primary key field which I want to rename. This primary key field is referenced by about 20 other tables and about 15 stored procedures.
What is the easiest way to rename this field everywhere it is referenced throughout the database?
Database refactoring tools exist to perform exactly the kind of operation you require. Just google for 'database refactoring tools' and pick something that will work with your particular brand of database. DB Deploy is an example of such a tool: http://dbdeploy.com/.
If for some reason you wanted to do this manually and you weren't dealing with a huge database, I would probably make a text export of the database (DDL and data) and then get busy with find & replace.
Edit: Additionally, Redgate's (http://www.red-gate.com/) software is very highly rated, but costs money. Personally I think their products are excellent and worth every cent, considering the time they can save.
If it were me, I'd do it manually using Management Studio: select the database, right-click to get Tasks > Generate Scripts, and select all objects in the database to export the DDL into a new query window (or the editor of your choice). Then use your 'find' command to locate each instance where the key is referenced, and make the corresponding changes directly against the database using Management Studio.
Make sure you have a backup of the database first, just in case.
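For the column rename itself, a minimal sketch with hypothetical table and column names (sp_rename changes only the column; the FKs and stored procedures that reference it still need the manual edits described above):
-- rename the badly named primary key column
EXEC sp_rename 'dbo.MainTable.BadPKField', 'MainTableId', 'COLUMN';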