Visual Studio Database Project Parameterise Database Name in Scripts - sql-server

I have two databases (a test and a live), which I have created in SSMS 2008 R2. I use a Visual Studio 2010 Database Project and a Schema Comparison to suck up the schema from the test environment and control changes etc; that's all working fine and I'm ready to publish the entire database out into the live location.
The problem I have is that although I was careful to give the test and live databases the same name, they reference tables in pre-existing test and live databases that do not have the same name. This means that a correct test Stored procedure might look like this:
SELECT * FROM TestDatabase.dbo.Customers
but the same procedure, when published to the live environment, needs to use the different name of the existing live database:
SELECT * FROM LiveDatabase.dbo.Customers
When it comes to publishing from the Visual Studio Database Project to the live environment, I could do a find and replace across all scripts to substitute one database name for the other, but I am reluctant to change the scripts directly before publication. Doing so would also mean the Visual Studio scripts no longer match the test environment scripts, which would confuse things and force check-outs where no "real" changes have been made, not to mention that slight irregularities in syntax could cause references to be missed.
So my question is this - is there some other way to change all the references in the scripts in the database project (for instance, by using a parameterised database name) in a fairly clean way?
I can easily re-write the stored procedures if necessary, as a one off, but I cannot change the names of the pre-existing databases.

Not sure if you have already found the answer to this. If not, try adding a database reference to the database project and call it 'varDatabase'. Make sure all your scripts are checked into TFS using this reference instead of hardcoding the database name. You reference the variable with $ syntax, e.g. [$(varDatabase)].
During publish, you can pass the value for this variable as a one-off.
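As a rough sketch (the procedure name is made up; the Customers table and 'varDatabase' come from the question and the answer above), a procedure in the project would then look like this:
-- Assumes a SQLCMD variable named varDatabase exists on the project,
-- e.g. created by the database reference described above.
CREATE PROCEDURE [dbo].[GetCustomers]
AS
BEGIN
    SELECT * FROM [$(varDatabase)].[dbo].[Customers];
END
At publish time the variable resolves to TestDatabase or LiveDatabase, so the checked-in source stays identical across environments.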

Related

TFS / SSDT Deployment in Multi Environment Scenario

This is the scenario I am currently experiencing.
In Development environment, developers usually make changes in the DEV SQL Server, then they will do a schema compare in Visual Studio 2013 / TFS, update the TFS then check the changes in.
Now, say in DEV, there are many stored procedures in a database that refer to a database called A, however in SIT environment this database is called B.
When I want to deploy these stored procedures from TFS to the SIT environment, is there a (automated) way to replace database A to database B, so that the stored procs do not break in SIT?
The workaround that I did was to generate the publish script (via TFS > Publish > Generate Script), then copy and paste that script into SSMS and replace all references to database A with database B.
However, this is quite manual (and not foolproof - you have to be really careful about what you replace), so I am wondering if there is a feature/capability to do this exercise in a more efficient manner?
Thanks in advance.
Cheers
There is functionality for that, but it might require some significant changes to your workflow.
You can use SQL Server database projects in SSDT to store database code. In this case, you can declare a project-level variable with the complementary database name, and then reference its objects in your SSDT project using SQLCMD syntax.
Or better yet, you can create projects for both databases and add the latter's DACPAC file as an external reference to the former. It will create a corresponding SQLCMD variable automatically and will make IntelliSense available for the linked database's objects.
During deployment, you can generate the publish script with changes and update only the value of this SQLCMD variable in the beginning of the publish script.
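For example (the variable name is illustrative), the start of the generated publish script contains a line like this, and that is the only thing you change per environment:
:setvar CrossDbName "A"
-- for the SIT deployment, change only the value:
-- :setvar CrossDbName "B"
Everything below keeps referencing [$(CrossDbName)], so the stored procedure definitions themselves never change between environments.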
Of course, this approach requires that all changes in databases should be introduced in SSDT projects first and deployed to actual instances later. However, benefits far outweigh the additional hassle.
If you would like to continue using the connected workflow to edit your database, you may like to consider using synonyms in your database: instead of using a 3-part reference, which may change depending on the environment you're deploying to, you can put the variable part into the synonym and keep the contents of the stored procedure static.
In order to do so, you would need to start by creating a synonym for each table that you want to reference across database boundaries, e.g.
CREATE SYNONYM [dbo].[Syn_MyTable] FOR [$(OtherDb)].[dbo].[MyTable]
Then, in place of the table names, reference the synonyms in your procs, e.g.:
CREATE PROCEDURE MyProc AS
SELECT ID FROM [Syn_MyTable]
Even with this method in place, the caveat is that you'll still be going against the grain if you try to follow this kind of workflow with SSDT, as SSDT is primarily a disconnected database editing tool. For example, if using schema compare to bring the changes into the project from your development environments, it's possible that SSDT will see changes to the synonyms if the destination database name differs. So special care is needed to ensure that the variable syntax within your project's synonym definition is not overwritten.
An alternate approach to database development is supported by a product we make at Redgate called ReadyRoll. ReadyRoll is a project sub-type of SSDT which actually favours a connected editing workflow. For example, with ReadyRoll, if you use a synonym, then it will ignore any differences in the database reference when importing changes into your project (as these are treated as variable by nature).
You can read more about how synonyms work in ReadyRoll, including a sample project, on this forum post:
https://forums.red-gate.com/viewtopic.php?f=199&t=79564&sid=314391978c186c19e50d9d69f266a700

How to develop t-sql in Visual Studio?

We are using Visual Studio 2013 with SSDT mainly for versioning t-sql code, so the sql is being developed on the dev server and then we use schema compare to transfer the scripts into visual studio (and check into Git). Before deployment (which we currently do with schema compare, too) we have to replace database and server references (with [$(database)] etc.). If we change the code in the dev server and compare again, such SQLCMD variables are lost again. (I would expect schema compare to be smart enough to retain the SQLCMD variables but I found no way to accomplish this).
The logical step is to develop SQL in Visual Studio from the start. But so far, it has been hard to convince anybody in the team to do that. One can write SQL and execute it in VS, no problem. One can also switch to SQLCMD mode and execute, all right. But when you create e.g. a view in VS, you must write a CREATE statement; this can be executed once, but it will yield an error if you alter the view and execute the CREATE statement again.
So my question is if anybody has some essential tips on how to do database development exclusively in Visual Studio. We were able to get the database references and all that straight, but not the development process.
I've been streamlining local database development and deployment using Visual Studio database projects for a few years now. Here are some tips.
In general...
Use local db instances: Each developer should have their own database instance installed locally. All scripts (tables, views, stored procs, etc.) should be developed in Visual Studio. Create a publish profile for deploying the project to the local db instance.
Use Publish feature: Confusingly Visual Studio provides both a Deploy and a Publish option which ultimately do the same thing. I recommend using just Publish because it's more prominent in the UI and you can create profiles to configure the deployment process for various database instances.
Keep local db up to date: When a developer makes changes in the database project and checks them in to source control then the other developers should check out these changes and republish the project to their local databases.
Create vs. Alter statements
All of your statements should be Create statements. There is no need for Alter statements or existence checks. Everything should be scripted as if you are creating the database objects for the first time. When you deploy or publish, VS will know whether to issue Alter statements for existing objects.
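For example (table and columns invented for illustration), the project only ever contains the CREATE definition:
-- the project file always holds the CREATE form of the object
CREATE TABLE [dbo].[Customer]
(
    [CustomerId] INT NOT NULL PRIMARY KEY,
    [Name]       NVARCHAR(100) NOT NULL
);
If you later add a column to this file and publish, VS compares the definition against the target database and emits the appropriate ALTER TABLE ... ADD statement for you; you never write the ALTER yourself.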
Data
Some ideas:
Script your data as a series of Insert statements and include them in a post-deployment script in the database project (a sketch of such a script appears after this list). But this can be tedious and error-prone.
Keep a database backup that includes all of your test data. When setting up a development environment for the first time, create the database from the backup. After you make significant changes to the data, create a new backup and have your devs recreate their databases from the backup. In most cases it's ok if the backup is out of sync with the schema defined in the project -- simply republish the project (make sure to turn off the "Re-create database" setting so that only the differences are published and thus the data is not lost).
There may be 3rd party tools to do this in which case they are worth looking in to.
Create your own solution for deploying data. Mine involved the following and worked really nicely (but required a lot of time and effort!):
All data stored in XML files - 1 file per table - whose structure resembled the table
An executable to read the XML files and generate SQL merge (or insert/update) statements for each row of data and save them to a SQL script
A pre-build event in the database project to run the executable and copy the resulting SQL script to a post-deployment script in the project
Publish the project and the data will be pushed during post-deployment
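As a minimal sketch of the first idea (the table and values are made up), writing the post-deployment script with MERGE keeps it safe to re-run on every publish:
-- Post-deployment script: reference data for a hypothetical dbo.Status table.
MERGE [dbo].[Status] AS target
USING (VALUES
    (1, N'Open'),
    (2, N'Closed')
) AS source ([StatusId], [Name])
ON target.[StatusId] = source.[StatusId]
WHEN MATCHED THEN
    UPDATE SET [Name] = source.[Name]
WHEN NOT MATCHED BY TARGET THEN
    INSERT ([StatusId], [Name]) VALUES (source.[StatusId], source.[Name]);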
Test/Production Deployments
Publish feature: You can create publish profiles for your test and production environments. However it will include your pre- and post-deployment scripts, and you won't get the versatility that the other options provide.
dacpacs: Ed Elliott covered them in his answer. Advantages: no need for Visual Studio to deploy, they can be deployed via SQL Management Studio or the command line with sqlpackage.exe, they can be easier to work with than a T-SQL deployment script.
Schema Compare: Schema compare may be good if you can use Visual Studio for your deployments and you like to double check all of the changes being deployed. You can also selectively ignore changes which is useful when you aren't lucky enough to have a development environment that completely mirrors production.
An age-old challenge. We've tried to use the data projects as they were defined through the years, but ran into several problems, including the fact that it seemed that these projects changed with every release of Visual Studio.
Now, we use the data project only to integrate with TFS for work item management and source code control. To let us build sprocs/views in Visual Studio, we write each script using the drop/create pattern. Our scripts also contain security (we made the mistake of using the default schema... if I could go back in time we'd segregate schemas and do schema-based role-level security).
For table schema, we do schema compares to/from a versioned template database.
A typical stored proc looks like this:
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[sp_MyStoredProcedure]') AND type in (N'P', N'PC'))
DROP PROCEDURE [dbo].[sp_MyStoredProcedure]
GO
CREATE PROCEDURE [dbo].[sp_MyStoredProcedure]
@MyParameter int
AS
BEGIN
-- Stored Procedure Guts
select 1
END
Good luck... ultimately, it just has to work for your team.
We are currently on the way to moving from SSMS to SSDT. I see that we are all facing the same problems, and it is very strange that there is no good tutorial on the net (at least I haven't found one yet).
First of all, about the variables. I think that you need to update to the newest version of SSDT (2015.02) + DacFx. We are using it and we do not have any problems with variables. It also has some very good new features, such as not dropping certain objects on the target if they do not exist in the source.
However, we settled on using synonyms for all cross-database and linked-server objects. For example, we have a table AnotherDatabase.dbo.NewTable. We create a synonym [dbo].[syn_AnotherDatabase_dbo_NewTable] FOR [$(AnotherDatabase)].[dbo].[NewTable] and use it in the code instead of referencing the other database directly. The same goes for linked servers: CREATE SYNONYM [syn_LinkedDatabase_dbo_NewTable] FOR [$(LinkedServer)].[$(LinkedDatabase)].[dbo].[NewTable].
Now about the development process. We set the debug database to our dev database in the project properties (later we are going to have separate databases for each developer). Then, when you are modifying stored procedures/views/functions/etc., you open the script, change CREATE to ALTER, and can work in the same way as you did in SSMS. You can modify the body, execute it, and run queries in that window. When you finish, you change ALTER back to CREATE and save the file.
The problem here is with objects that do not support the ALTER statement. In that case, you need to publish the code first. In practice, though, this happens infrequently, so I believe it is not a big deal.
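To make that loop concrete (the procedure and table names are invented), the same project file toggles between two states while you work:
-- while iterating, the file temporarily reads ALTER so it can be executed
-- against the dev database as often as needed:
ALTER PROCEDURE [dbo].[usp_GetOrders]
AS
BEGIN
    SELECT [OrderId], [CustomerId] FROM [dbo].[Orders];
END
GO
-- before saving and checking in, switch ALTER back to CREATE so the project
-- keeps the canonical CREATE definition that SSDT builds from.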
SSDT is mature enough to use for creating your scripts and deploying your changes, but you should move away from using schema compare and do deployments with sqlpackage.exe instead.
The process looks something like:
- Write code in VS/SSDT
- Build the project, which results in a dacpac (either on your machine or on a CI server)
- Deploy the dacpac to the db instance, using variables if you need to, to bring the db up to date. Use sqlpackage.exe to deploy, or generate scripts which can be deployed manually
It should be pretty straightforward, but please ask if you are not sure about anything!
Ed

Does SSDT build a script for only changed objects?

I'm currently in the process of redesigning our department's source control strategy using Team Foundation Server (TFS) in regard to database objects. Essentially, we store nothing in TFS at this time. I have discovered SSDT and really enjoy their integration within Visual Studio and think it will make our transition into TFS much easier.
So, does SSDT have the capability of generating scripts based on the deltas between my SSDT project and what is on our server? From what I have researched, it seems I can only generate an entire database script.
Requirements (Mind you, our developers do not have ddl access to production):
I cannot drop a database to re-create it
I cannot drop ALL objects like all stored procs to re-create them but only what I need
Tables will need to be altered not dropped and only what has changed
Dacpac's are out of the question
Our best option based on our environment at this time is to use scripts for updates
Our database environment is currently SQL Server 2008 R2. My SSDT version is the latest 2013 that was published in June.
Yes, if you do a publish from the project you will pretty much meet all of these requirements, though dacpacs are built as part of the process. The schema model and pre/post-deploy scripts are stored in the dacpac, and the publish compares what "should" be present against what is currently in the database. It then generates a change script with all the changes necessary to bring the database in line with the project.
Make sure you use the Refactor/Rename when renaming objects - that will cut down on the table drop/recreate operations. You may want to be careful with the "Drop objects not in the project" options. If you haven't been careful with making sure all objects created in your production server are in your project, you could accidentally drop something important just because someone didn't get it checked in.
There are command-line options for SQLPackage that can generate change detail reports and scripts that you can use. The scripts need to be run through SQLCMD or in SQLCMD mode, but you can definitely produce scripts pretty easily.
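For instance, the header of a script generated this way typically looks something like the following (names are placeholders), which is why it has to be run through SQLCMD or in SQLCMD mode:
:setvar DatabaseName "MyDatabase"
GO
:on error exit
GO
USE [$(DatabaseName)];
GO
From there on, the script contains only the CREATE/ALTER/DROP statements needed to bring the target in line with the project.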

Recreate database from RedGate checked-in scripts

We've got a SQL Server instance with some 15-20 databases, which we check into TFS with the help of RedGate. I'm working on a script to be able to replicate the instance (so a developer could run a local instance when needed, for example) with the help of these scripts. What I'm worried about is the dependencies between these scripts.
In TFS, RedGate has created these folders with .sql files for each database:
Functions
Security
Stored Procedures
Tables
Triggers
Types
Views
I did a quick test with PowerShell, just looping over these folders to execute the SQL, but I think that might not always work. Is there a strict ordering I can follow? Or is there some simpler way to do this? To clarify, I want to be able to start with a completely empty SQL Server instance and end up with a fully configured one according to what is in TFS (without data, but that is ok). Using PowerShell is not a requirement, so if it is simpler to do it some other way, that is preferable.
If you're already using RedGate, they have a ton of articles on how to move changes from source control to the database. Here's one which describes moving database code from TFS using the sqlcompare command line:
http://www.codeproject.com/Articles/168595/Continuous-Integration-for-Database-Development
If you compare to any empty database it will create the script you are looking for.
The only reliable way to deploy the database from scripts folders would be to use Red Gate SQL Compare. If you run the .sql files using PowerShell, the objects may not be created in the right order. Even if you run them in an order that makes sense (functions, then tables, then views...), you still may have dependency issues.
SQL Compare reads all of the scripts and uses them to construct a "virtual" database in memory, then it calculates a dependency matrix for it so when the deployment script is created, things are done in the correct order. That will prevent SQL Server from throwing dependency-related errors.
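A contrived example of the kind of dependency that breaks a naive folder-by-folder run (object names invented): the view below fails to create if the table it references does not exist yet, which is exactly the ordering problem the generated deployment script avoids.
CREATE VIEW [dbo].[OpenOrders]
AS
    SELECT [OrderId], [CustomerId]
    FROM [dbo].[Orders]
    WHERE [ClosedDate] IS NULL;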
If you are using Visual Studio with the database option it includes a Schema Compare that will allow you to compare what is in the database project in TFS to the local instance. It will create a script for you to have those objects created in the local instance. I have not tried doing this for a complete instance.
At most, you might have to create the empty databases in the local instance first and then let Visual Studio work out that the tables and other objects are not there.
You could also just take the last backup of each database and let the developer restore them to their local instance. However, this can vary by environment depending on security policy and what type of data is in the database.
I tend to just use PowerShell to build the scripts for me. I have more control over what is scripted out and when, so when I rerun the scripts on the local instance I can do it in the order it needs to be done in. It may take a little more time, but I get better-functioning scripts to work with, and PS is just my preference. There are some good scripts already written in the SQL community that can help you with this. Jen McCown did a blog post collecting all the posts her husband has written for doing just this, right here.
I've blogged about how to build a database from a set of .sql files using the SQL Compare command line.
http://geekswithblogs.net/SQLDev/archive/2012/04/16/how-to-build-a-database-from-source-control.aspx
The post is more from the point of view of setting up continuous integration, but the principles are the same.

How can I set my deployment options to script the incremental release of a Visual Studio 2010 database project?

I've just started using a VS2010 database project to manage the release of an update to an existing database.
I want the deployment option to generate a script that will contain the commands to change my existing database rather than create an entirely new one.
E.g. I have 10 existing tables, one of which I drop in the new version, and I create some new sprocs. I only want the deploy to script the Drop Table and Create Procedure commands.
I am using VS2010 Premium.
Is there a recommended standard approach I could follow to managing DBs in a project from initial creation to incremental releases?
Thanks!
There is an "Always re-create database" option in the project's .sqldeployment file. Unchecking this option will result in an auto-generated SQL script that incrementally updates your database without dropping it first.
There is also an option to "Generate DROP statements for objects that are in the target database but that are not in the database project." You will need to check this option if you want tables, stored procs, etc. to get dropped when you've deleted them from the database project. Note that this will also delete any objects that users may have created on their own for testing, debugging, etc.
To change the options in the .sqldeployment file, open the file in Visual Studio: either expand the database project in Solution Explorer and double-click the .sqldeployment file (it will probably be in the Properties folder under the DB project), or open the properties page for the database project and click the "Edit..." button next to "Deployment configuration file". Check or uncheck the options you want applied when the database deploys.
I use VSDBCMD.exe for 1-click build & deploy scripts I've created. It works very well. VSDBCMD uses a .sqldeployment file -- the default .sqldeployment file is specified in the .deploymanifest file, but it can be overridden by specifying it as a parameter when executing VSDBCMD. Also, I believe that Visual Studio uses VSDBCMD under the covers when it deploys the database project, but I just assume that to be the case since the functionality is pretty much identical.
I asked a similar question a while back on the MSDN Forums and was told that the recommended way to do this is to use VSDBCMD. Basically, you output a schema file from your database project which contains all of the information about your database, and then you run VSDBCMD to compare your schema to the target database. This in turn creates the script to update the target database to your current schema.
The rationale for this approach is that just because you and I may think we know what the target database's schema looks like we can't really be sure until we let VSDBCMD run the comparison. Who knows, someone else may have modified the schema in the target database without our knowledge, so our change script may end up failing for some unknown reason.
I really wasn't terribly satisfied with this approach and ended up continuing to use my "old approach" of hand-coding my change scripts when necessary, but I am eager to see if anything has changed in 2010 that makes this a bit easier to work with. I'd really like to see a simple API that does what VSDBCMD does so I can put a GUI together to simplify updating a target (in my case, client) database without the person running the upgrade having to be a DBA.

Resources