I'm working on a SaaS application which has a SQL Server database. I have tables, functions, and stored procedures in a particular schema (e.g. customer1.table1, customer1.spGetCustomers, etc.). I would like a way of copying the entire schema (tables, functions, stored procedures, indexes, keys, etc.) to a new, empty schema for each new customer. I was hoping there was a fast, easy way to do this, so that I can add a new schema for every new customer and keep everything completely separate. I don't want a new database for each customer because of the cost and extra maintenance.
Please help.
Use SQLCMD variables in create scripts.
You write schema-independent create scripts for your objects, something like:
CREATE TABLE [$(schema)].Table1 ...
CREATE PROCEDURE [$(schema)].Proc1...
Then you will execute it as:
sqlcmd -v schema="Customer1" -i c:\script.sql
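For example, a minimal sketch of what script.sql might contain (the column definitions here are just placeholders; note that CREATE SCHEMA must be the only statement in its batch):
CREATE SCHEMA [$(schema)];
GO
CREATE TABLE [$(schema)].Table1 (Id int NOT NULL PRIMARY KEY, Name nvarchar(100) NULL);
GO
CREATE PROCEDURE [$(schema)].spGetCustomers
AS
    SELECT Id, Name FROM [$(schema)].Table1;
GO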
I have a SQL Server database that uses schemas to logically group objects; there are ten schemas in the database.
If I baseline my database and create the schema history table in the “foo” schema, will Flyway apply a migration from the migration folder that operates on an object in the “bar” schema?
Do I need one folder of migration scripts for each schema in the database? The documentation explains how to specify schemas on the command line, but it doesn't make clear why I must.
The list of schemas on the command line has two effects:
the first named schema is where the history table goes
the named schemas are the ones that are cleaned by flyway clean
(Note: in Flyway 6.1 the defaultSchema command-line parameter was introduced to separate these two uses.)
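For example, with hypothetical schema names (and assuming Flyway 6.1 or later for defaultSchema), a command along these lines puts the history table in foo while flyway clean still manages both schemas:
flyway -schemas=foo,bar -defaultSchema=foo migrate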
Migrations can refer to any schema in the database that you have access to - indeed, some objects may exist in one schema but depend on objects in another. If you're happy with the history table to go in dbo, and want to control the whole database with Flyway, just don't set these parameters. A folder of scripts per schema may help you with maintaining them but it is not necessary.
We have a legacy database that has dozens of schemas in it, and we're looking to split that database up into several smaller distinct databases instead.
Is there any way I can create a new database on the same physical server, and then transfer an entire schema over to the new database?
Our tables look like:
Foo.Table1
Foo.Table2
Foo.Table3
...
Bar.Table1
Bar.Table2
...
Xxx.Table1
Xxx.Table2
...
...and I want to move Foo.* to a new database.
The usual recommendation is some kind of per-table export/import, but that's quite cumbersome with the 150+ tables in the schema.
As far as my brief research goes, the options appear to be:
Export/import each table individually.
Back up the entire database, restore it to a different destination, and delete everything else (painful, since the entire database is ~900 GB).
Deploy the dacpac of the single schema to the new database, and do a cross-database initial seeding, e.g.:
INSERT INTO newDb.Foo.Table1 SELECT * FROM oldDb.Foo.Table1;
INSERT INTO newDb.Foo.Table2 SELECT * FROM oldDb.Foo.Table2;
INSERT INTO newDb.Foo.Table3 SELECT * FROM oldDb.Foo.Table3;
...
All of these options are a lot of effort... is there any other approach that will simply move an entire schema into a new database?
I am not aware of any fully automated way, but this can be done relatively simply with the help of Excel.
In SSMS you can use "Object Explorer Details" to script the schema of multiple tables easily (with a few mouse clicks).
With the help of the system views (sys.tables, sys.columns, etc.) and Excel you should be able to generate 'INSERT INTO ... SELECT ...' scripts for all of your tables in minutes.
In Excel (or a similar application) you paste the list of your tables (obtained from sys.tables) and then write a formula that generates a script for each table.
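As a rough T-SQL sketch of that same idea (run in the source database; newDb, oldDb, and Foo are the names from the question), you can also let the server generate the statements for you:
SELECT 'INSERT INTO newDb.Foo.' + QUOTENAME(t.name)
     + ' SELECT * FROM oldDb.Foo.' + QUOTENAME(t.name) + ';'
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE s.name = 'Foo'
ORDER BY t.name;
-- tables with identity columns will also need SET IDENTITY_INSERT ON/OFF around their INSERT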
You can create a filegroup for each schema and move each schema's tables into the related filegroup. After that, you back up each filegroup and restore it.
I've run into an issue where I need to query two separate databases (same instance) in one query.
I am used to doing this with MySQL, but I'm not sure how to do it with DB2.
In MySQL it would be something like:
SELECT user_info.*, game.*
FROM user_info, second_db.game_stats as game
WHERE user_info.uid = game.uid
So the question is: how do I translate a query like that, or its equivalent, into DB2 syntax?
Is there a reason why you have the tables in a separate database? MySQL doesn't support the concept of schemas, because in MySQL a "schema" is the same thing as a "database". In DB2, a schema is simply a collection of named objects that lets you group them together.
In DB2, a single database is much closer to an entire MySQL server, as each DB2 database can have multiple schemas. With multiple schemas inside the same database, your query can run more or less unchanged from how it is written.
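For example, if the stats table lived in a different schema of the same database (the schema names here are hypothetical), the MySQL query above translates almost one-for-one:
SELECT u.*, g.*
FROM USER_INFO AS u
JOIN GAME_SCHEMA.GAME_STATS AS g
  ON u.UID = g.UID;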
However, if you really have 2 separate DB2 databases (and, for some reason, don't want to migrate to a single database with multiple schemas): You can do this by defining a nickname in your first database.
This requires a somewhat convoluted process of defining a wrapper (CREATE WRAPPER), a server (CREATE SERVER), user mapping(s) (CREATE USER MAPPING) and finally the nickname (CREATE NICKNAME). It is generally easiest to do these tasks using the Control Center GUI because it will walk you through the process of defining each of these.
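A rough sketch of those statements (all object names, the version, and the credentials here are hypothetical, and federation must be enabled on the instance):
CREATE WRAPPER DRDA;
CREATE SERVER SECOND_DB TYPE DB2/UDB VERSION 11.5 WRAPPER DRDA
    AUTHORIZATION "appuser" PASSWORD "secret"
    OPTIONS (DBNAME 'SECONDDB');
CREATE USER MAPPING FOR USER SERVER SECOND_DB
    OPTIONS (REMOTE_AUTHID 'appuser', REMOTE_PASSWORD 'secret');
CREATE NICKNAME GAME_STATS FOR SECOND_DB.GAME_SCHEMA.GAME_STATS;
-- after this, GAME_STATS can be joined to local tables like any other table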
Due to an employee quitting, I've been given a project that is outside my area of expertise.
I have a product where each customer will have their own copy of a database. The UI for creating the database (licensing, basic info collection, etc.) is being outsourced, so I was hoping to just have a single stored procedure they can call, providing a few parameters, and have the SP create the database. I have a script for creating the database, but I'm not sure of the best way to actually execute the script.
From what I've found, this seems to be outside the scope of what an SP can easily do. Is there any sort of "best practice" for handling this kind of program flow?
Generally speaking, SQL scripts - both DML and DDL - are what you use for database creation and population. SQL Server has a command-line interface called SQLCMD that these scripts can be run through - here's a link to the MSDN tutorial.
Assuming there's no customization to the tables or columns involved, you could get away with using either detach/attach or backup/restore. Both would require that a baseline database exist with no customer data; you then use either of those methods to capture the database as-is. Backup/restore is preferable because detach/attach requires the database to be taken offline. But users need to be synced before they can access the database.
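One hedged sketch of the backup/restore route (the backup path, logical file names, and procedure name here are all assumptions you would replace with your own) is a procedure that builds the RESTORE statement dynamically:
CREATE PROCEDURE dbo.spCreateCustomerDatabase   -- hypothetical name
    @DatabaseName sysname,
    @DataFile nvarchar(260),
    @LogFile nvarchar(260)
AS
BEGIN
    -- Restore the baseline backup (no customer data) under the new database name
    DECLARE @sql nvarchar(max) =
        N'RESTORE DATABASE ' + QUOTENAME(@DatabaseName)
      + N' FROM DISK = N''C:\Baseline\Baseline.bak'''
      + N' WITH MOVE N''BaselineData'' TO N''' + REPLACE(@DataFile, N'''', N'''''') + N''''
      + N', MOVE N''BaselineLog'' TO N''' + REPLACE(@LogFile, N'''', N'''''') + N'''';
    EXEC sys.sp_executesql @sql;
END;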
If you already have the script to create the database, it is easy for them to run it from within their program. If you have any specific prerequisites for creating the database and setting permissions accordingly, you can wrap all the scripts up into one script file to execute.
I was thinking of putting staging tables and the stored procedures that update them into their own schema, such that when importing data from SomeTable to the data warehouse, I would run an Initial.StageSomeTable procedure which would insert the data into the Initial.SomeTable table. This way all the procs and tables dealing with the initial staging are grouped together. Then I'd have a Validation schema for that stage of the ETL, etc.
This seems cleaner than trying to uniquely name all these very similar tables, since each table will have multiple instances of itself throughout the staging process.
Question: Is using a user schema to group tables/procs/views together an appropriate use of user schemas in MS SQL Server? Or are user schemas supposed to be used for security, such as grouping permissions together for objects?
This is actually a recommended practice. Take a look at the Microsoft Business Intelligence ETL Design Practices from Project REAL. You will find (download the doc from the first link) that they use quite a few schemas to group and identify objects in the warehouse.
In addition to dbo and etl, they also use admin, audit, part, olap and a few more.
I think it's appropriate enough; it doesn't really matter. You could use another database if you liked, which is actually what we do.
I'm not sure why you would want a Validation schema, though - what are you going to do there?
Both of the reasons you list (purpose/intent and security) are valid reasons to use schemas. Once you start using them, you should always specify the schema when referencing an object (although I'm lazy and never specify dbo).
One trick we use is to have the same-named table in each of several schemas, combined with table partitioning (available in SQL Server 2005 and up). We load the data into the first schema, and once it's validated we "switch" the partition into dbo, after first switching the existing dbo partition out into a "dumpster" copy of the table in another schema. Net production downtime is measured in seconds, and it's all carefully wrapped in an explicit transaction.
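A minimal sketch of that switch, with hypothetical table names (the staging and dumpster tables must have the same structure, indexes, and constraints as the dbo table, and live on the same filegroup):
BEGIN TRANSACTION;
ALTER TABLE dbo.SomeTable SWITCH PARTITION 1 TO dumpster.SomeTable PARTITION 1;
ALTER TABLE Validation.SomeTable SWITCH PARTITION 1 TO dbo.SomeTable PARTITION 1;
COMMIT TRANSACTION;
TRUNCATE TABLE dumpster.SomeTable;   -- discard the old data once the swap is confirmed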