Duplicate SQL Schema in SQL Server - sql-server

I have a requirement that a user can have multiple environments in which to experiment and evaluate their modifications. Once the users are satisfied with the changes they've made to the data in a working environment, those changes can be (partially or completely) copied to another environment. Environments can be created empty or as copies of other environments. We are currently on SQL Azure, and our proposed (not yet implemented) approach is to create each environment as a separate SQL schema in the same database using the statement
CREATE SCHEMA
So far this has worked really well for us in POCs. What I don't like about this approach is that creating a new schema involves executing several scripts to create the tables and stored procedures in it, so whenever we create or update objects in the default schema we also have to update the scripts that build a new one. On top of that, once the schema is created we have to bulk copy the data from the original schema with yet another script, and given the size of the client's data this process is sometimes not as fast as I would like. Maintaining all that SQL code just to create environments is also a burden for the team.
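To make the maintenance problem concrete, here is a minimal sketch of what one of those environment-creation scripts looks like; Orders is just a hypothetical stand-in for our real tables:

CREATE SCHEMA sandbox1;
GO
-- every table and stored procedure has to be re-created in the new schema...
CREATE TABLE sandbox1.Orders
(
    OrderId int PRIMARY KEY,
    CustomerName nvarchar(100) NOT NULL
);
GO
-- ...and the data has to be bulk copied from the source schema with yet another script
INSERT INTO sandbox1.Orders (OrderId, CustomerName)
SELECT OrderId, CustomerName FROM dbo.Orders;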
So my question is: is there any way to duplicate an entire dbo schema under a different name using T-SQL statements? I know this can be done manually in SQL Server Management Studio with the Generate Scripts option, but it has to happen automatically because users can create a new environment at any time. I already checked the documentation for
ALTER SCHEMA TargetSchema
TRANSFER SourceSchema.TableName;
but that only changes the schema a database object belongs to; it does not create an actual copy of the object.
EDIT:
I am not trying to create separate databases for dev, QA and production; I already have those. What I want to achieve is a web app with multiple environments, where each environment is a sandbox for the end user to experiment in. Think of it as creating a draft before making the data available to the general public: when the users are satisfied with their modifications they can move the data to the public environment, and once it is moved it becomes visible to others.

You can use the CREATE DATABASE ... AS COPY OF Transact-SQL statement to create copies of your production database that can be used as QA, testing and development databases.
CREATE DATABASE db_copy
AS COPY OF ozabzw7545.db_original ( SERVICE_OBJECTIVE = 'P2') ;
Here ozabzw7545 is the name of the Azure SQL Database server.
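The copy runs asynchronously, so if you need to wait for it to finish before handing the new database to users, you can poll its progress from the master database of the server; a minimal sketch (assuming the sys.dm_database_copies DMV is available there):

-- run against the master database of the Azure SQL server
SELECT d.name, c.start_date, c.percent_complete, c.error_desc
FROM sys.dm_database_copies AS c
JOIN sys.databases AS d ON d.database_id = c.database_id
WHERE d.name = N'db_copy';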

The following is the full syntax + additional information that is specific to Azure SQL database: CREATE DATABASE (Azure SQL Database)
Additional information: Copy an Azure SQL database
You can also use PowerShell:
New-AzureRmSqlDatabaseCopy -ResourceGroupName "myResourceGroup" `
-ServerName $sourceserver `
-DatabaseName "MySampleDatabase" `
-CopyResourceGroupName "myResourceGroup" `
-CopyServerName $targetserver `
-CopyDatabaseName "CopyOfMySampleDatabase"

Related

How to migrate the schema of a SQL Server database from on-premise to SQL Server on AWS RDS with non-supported features

I'm working on taking an on-premises server running SQL Server 2019 and migrating it to the cloud. The data is not the important thing right now, but rather the schema, since this is a proof of concept. The main issue is that the on-premises server sometimes uses FILESTREAM to handle files. This will have to change in the future as refactoring and application updates take place.
The easiest approach, I thought, would be to generate a schema .sql script from the old DB and run it in the new environment, but this generated a ton of errors (25k).
Most of the errors include:
Failed permissions in database 'master'
Not finding certain objects in the new clean DB
Extended properties are not permitted on an object or it doesn't exist
Invalid data types
Database doesn't exist or permission not allowed
Filestream feature is disabled
So this probably won't work as a drop-in solution for getting the schema migrated to the new DB. I've heard about AWS DMS (Database Migration Service), but I don't know much about it. What tools could I look into to migrate over to RDS when RDS doesn't support features native to SQL Server?
One way to import the schema is through the Generate Scripts wizard. You will have to manually tweak some things to make FILESTREAM and the local configuration of the SQL Server work nicely with AWS RDS.
Generate and Publish Scripts Guide
1. Go to the source database.
2. Right-click the database in the Object Explorer pane on the left and choose Tasks > Generate Scripts.
3. Select all tables, procedures, etc., except for the FILESTREAM tables.
4. In the scripts wizard pop-up, under Set Scripting Options, choose to produce a .sql file and, under Advanced options, choose Schema Only. This will generate a script containing only the metadata for the tables and not the data in them.
5. Generate the file.
6. Copy the .sql file over to the EC2 instance (probably the Bastion Host) that is connected to the RDS instance.
7. Open MS SQL Management Studio, right-click the topmost object in the Object Explorer and open a new query.
8. Copy and paste the contents of the .sql file into the query window.
9. Change the file path locations of the data and log files to D:\rdsdbdata\DATA\TEST_AWS.mdf and D:\rdsdbdata\DATA\TEST_AWS_Log.ldf respectively. Any other file location will not be recognized by RDS and the database will fail to be created.
10. Comment out or remove the lines of code that include:
a. ALTER DATABASE [TEST_AWS] SET TRUSTWORTHY OFF
b. ALTER DATABASE [TEST_AWS] SET HONOR_BROKER_PRIORITY
c. ALTER DATABASE [TEST_AWS] SET DB_CHAINING OFF
d. Creating global users
e. FILESTREAM
11. Execute the script.
Consider adding DROP DATABASE [TEST_AWS] towards the top of the script, before the creation of the new database, in case you need to run the script multiple times while tracking down errors. This saves you from "already exists" errors and from being left with a half-created database between runs.
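A minimal sketch of that guard, using the TEST_AWS name from the steps above:

IF DB_ID(N'TEST_AWS') IS NOT NULL
    DROP DATABASE [TEST_AWS];  -- remove the half-built database from a previous attempt
GO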

Cleared SQL Server tables still retain some data

I made a custom application that has been running for several years and is full of company data.
Now I need to replicate the application for another customer, so I set up a new server, cloned the databases and emptied all the tables.
Then I performed a database and file shrink.
On the SQL Server side the databases look empty, but if I run a grep search on the .mdf and log files I can still find occurrences of the previous company's name, even in the system databases.
How do I really clean a SQL Server database?
Don't use backup/restore to clone a database for distribution to different clients. These commands copy data at the physical page/extent level, which may contain artifacts of deleted data, dropped objects, etc.
The best practice for this need is to create a new database with schema and system data from scratch using T-SQL scripts (ideally source controlled). If you don't already have these scripts, T-SQL scripts for schema/data can be generated from an existing database using the SMO API via .NET code or PowerShell. Here's the first answer I found with a search that uses the Microsoft.SqlServer.Management.SMO.Scripter class. Note that you can script data too (insert statements) by specifying the ScriptData scripting option for the desired tables.
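To illustrate the first option, a minimal sketch of what such a source-controlled deployment script might look like; the database name, table and seed rows are hypothetical:

CREATE DATABASE CustomerApp;
GO
USE CustomerApp;
GO
-- schema objects are created from scratch, never copied from another customer's files
CREATE TABLE dbo.Status
(
    StatusId int PRIMARY KEY,
    Name nvarchar(50) NOT NULL
);
GO
-- system/reference data is seeded explicitly
INSERT INTO dbo.Status (StatusId, Name)
VALUES (1, N'Open'), (2, N'Closed');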

Create copy of a database only schema

I am relatively new to MS SQL Server. I need to create a test database from an existing test database with the same schema, then take the data from production and fill the newly created empty database. For this I was using Generate Scripts in SSMS, but now I need to do it on a regular basis in a job. Please guide me on how I can create empty databases automatically at a given point in time.
You will have a very hard time automating the generate scripts wizard. I would suggest using something like Red-Gate's SQL Compare (or any alternative that supports command-line). You can create a new, empty database, then script a compare/deploy using the command line from SQL Server Agent.
Another, ickier alternative is to deploy your schema and modules to the model database. You can keep this in sync using SQL Compare (or alternatives), or just be diligent about deployment of schema/module changes; then when you create a new database it will automatically inherit the current state of your schema/modules. The problem with this approach (other than depending on you keeping model in sync) is that all new databases will inherit this schema, since there is currently no way to have multiple models.
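A minimal sketch of the model-database approach; dbo.Widget is a hypothetical table:

-- deploy schema/modules into model (normally done by your compare/deploy tooling)
USE model;
GO
CREATE TABLE dbo.Widget
(
    WidgetId int PRIMARY KEY,
    Name nvarchar(100) NOT NULL
);
GO
-- any database created afterwards inherits everything that is in model
CREATE DATABASE Sandbox42;
GO
SELECT name FROM Sandbox42.sys.tables;  -- returns Widget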
Have you considered restoring backups?
To add to Aaron's already good answer, I've been using SQLDelta for years - I think it's excellent.
(I have no connection to SqlDelta, other than being a very satisfied customer)

SQL Server distributed databases

How do I link two different databases in the same SQL Server instance and send queries between them?
Use three-part names (database.schema.table), like below:
DB1.dbo.TableFromDB1
DB2.dbo.TableFromDB2
Here DB1 and DB2 are the database names.
Look at using synonyms (CREATE SYNONYM).
You can access the databases directly with a fully qualified name, but that code will break if the database is ever renamed or moved.
Using a synonym, the code can remain unchanged; when the database moves, just update the synonym.
This can be useful when you have a test and production environment. The code does not have to change just because you move it from test to production and the database names do not have to be identical.
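A minimal sketch, assuming a hypothetical Orders table living in DB2:

-- point a local name at the table in the other database
CREATE SYNONYM dbo.RemoteOrders FOR DB2.dbo.Orders;
GO
-- application code only ever references the synonym
SELECT * FROM dbo.RemoteOrders;
GO
-- when the target database changes, re-point the synonym; the calling code stays the same
DROP SYNONYM dbo.RemoteOrders;
CREATE SYNONYM dbo.RemoteOrders FOR DB2_Test.dbo.Orders;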

How can I copy a SQL Server 2005 database from production to development?

We have a production SQL Server 2005 database server with the production version of our application's database on it. I would like to be able to copy down the data contents of the production database to a development server for testing.
Several sites (and Microsoft's forums) suggest using the Backup/Restore options to copy databases from one server to another, but this solution is unworkable for several reasons (I don't have backup authority on our production database, I don't want to overwrite permissions on the development server, I don't want to overwrite structure changes on the development server, etc.).
I've tried using the SQL Import/Export Wizard in SQL Server 2005, but it always reports primary key violations. How can I copy the contents of a database from the production server to development without using the "Backup/Restore" method?
Well, without the proper rights it really becomes more tedious and less than ideal.
One way that I would recommend though is to drop all of your constraints and indexes and then add them again once the data has been imported/exported.
Not an elegant solution but it'll process really fast.
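If dropping everything is too invasive, a lighter variant for foreign key and check constraints is to disable and re-enable them around the load; a minimal sketch against a hypothetical Orders table:

-- disable all foreign key and check constraints on the table before loading
ALTER TABLE dbo.Orders NOCHECK CONSTRAINT ALL;

-- ... import/export the data here ...

-- re-enable and re-validate the constraints afterwards
ALTER TABLE dbo.Orders WITH CHECK CHECK CONSTRAINT ALL;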
EDIT:
Another option is to create an SSIS package where you specifically dump the tables in an order that won't violate the constraints.
I often use SQL Data Compare (http://www.red-gate.com/products/sql_data_compare/index.htm) for this task: the synchronization scripts it writes will remove the relationships during the transfer and reapply them, but that is OK in most development cases. It works especially well with smaller databases or subsets of databases.
If your database is large, I would recommend finding someone with the keys to the kingdom. Taking an out-of-sequence backup can interfere with restoring from the primary backup chain (for example, if partial backups are taken during the week), because it marks data as backed up when it only exists in your copy, so don't try to bypass that security if you are unsure why it is there.
Assuming you can connect to both DBs from the same machine (which you almost always can; I do it with my production servers over a VPN), then for each table:
DELETE FROM devserv.dbo.tablename;
SET IDENTITY_INSERT devserv.dbo.tablename ON;
-- an explicit column list is required while IDENTITY_INSERT is ON
INSERT INTO devserv.dbo.tablename (col1, col2, ...)
SELECT col1, col2, ... FROM prodserv.dbo.tablename;
SET IDENTITY_INSERT devserv.dbo.tablename OFF;
It is obviously worth noting that you will need to do this in a certain order if your tables have foreign key constraints.
The import/export wizard is notorious for this sort of thing, and actually has a bug that makes it even less useful at working out the dependencies (sorry, I don't have the details to hand).
SSIS does a much better job, but you'll have to add each table copy task by hand (in fact, a data source, copy task and data destination object for each). It's a little tedious to set up (more than it should be), but a lot simpler than writing your own code.
One tip: avoid generating an SSIS project with the import/export wizard, thinking it will be easier to just tweak it. It generates something that most people would find unrecognisable, even with some SSIS experience!
If you do not have backup permission on the production server, I guess this is because you are using a shared SQL Server from a web hoster. In this case, check whether your web hoster provides the tool called myLittleBackup. It lets you copy a database from one server to another in a few clicks.
I'd contact someone that does have access to backup the database. Permissions are usually there for a reason.
I might consider getting a backup, as there will be one whether you run it or not (at least in theory, a prod DB is being backed up :) ).
Then just restore it to a brand new database on your dev box so you don't conflict with anything or anyone else.
If you restore to a new DB you could also pull the tables and data across manually if you wanted, and since you created the DB you can grant yourself rights and it's all OK. There are a number of other methods, all tedious.
We just use the SQL Server Database Publishing Wizard at work.
You would use this little utility to generate a T-SQL script that describes your production database (including all its data). Then connect to your dev server and run the generated script.
If you have to avoid backup/restore, this is what I would recommend (these steps assume you don't want to keep the old schema name, just the structure):
Download OpenDBDiff. Run a compare between the source and the (empty) destination. Go to the sync script tab and copy only the CREATE TABLE statements (skipping dbo.sysdiagrams etc.), paste them into a new query window in SQL Server Management Studio, and delete all the schema names appearing before the table names.
Now you have the full structure including primary keys, identity columns etc. Next, use the SQL Server Import and Export wizard like you did before (make sure you choose Edit Mappings and set the destination schema to dbo, etc.). Also make sure you tick "Drop and re-create destination table".
On your dev machine, set up a linked server to your production machine. Then just:
INSERT dev.db.dbo.table (fieldlist)
SELECT fieldlist FROM prod.db.dbo.table
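For completeness, a minimal sketch of creating that linked server; 'ProdServer' is a placeholder for the production server's network name:

-- register the production server as a linked server on the dev machine
EXEC sp_addlinkedserver @server = N'ProdServer', @srvproduct = N'SQL Server';
-- map the current login to itself on the remote server
EXEC sp_addlinkedsrvlogin @rmtsrvname = N'ProdServer', @useself = N'TRUE';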
