TFS / SSDT Deployment in a Multi-Environment Scenario - sql-server

This is the scenario I am currently experiencing.
In the Development environment, developers usually make changes on the DEV SQL Server, then do a schema compare in Visual Studio 2013 / TFS, update the project in TFS, and check the changes in.
Now, say in DEV there are many stored procedures in a database that refer to a database called A; however, in the SIT environment this database is called B.
When I want to deploy these stored procedures from TFS to the SIT environment, is there an (automated) way to replace references to database A with database B, so that the stored procs do not break in SIT?
The workaround I used was to generate the publish script (via TFS > Publish > Generate Script), copy and paste that script into SSMS, and replace all references to database A with database B.
However, this is quite manual (and not foolproof - you have to be really careful what you replace), so I am wondering if there is a feature/capability to do this in a more efficient manner?
Thanks in advance.
Cheers

There is functionality for that, but it might require some significant changes to your workflow.
You can use SQL Server database projects in SSDT to store your database code. In that case, you can declare a project-level SQLCMD variable that holds the other database's name, and then reference its objects in your SSDT project using SQLCMD syntax.
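For example, a minimal sketch (the variable name RefDb and the object names are hypothetical, not from the question): a stored procedure in the project references the other database through the variable, and the variable's value is set per environment (A for DEV, B for SIT).
-- RefDb is a SQLCMD variable defined on the project's SQLCMD Variables tab
-- (value "A" for DEV, "B" for SIT)
CREATE PROCEDURE [dbo].[usp_GetOrders]
AS
BEGIN
    SELECT o.OrderID, o.OrderDate
    FROM [$(RefDb)].[dbo].[Orders] AS o;
END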
Or better yet, you can create projects for both databases and add the latter's DACPAC file as an external reference to the former. It will create a corresponding SQLCMD variable automatically and will make IntelliSense available for the linked database's objects.
During deployment, you can generate the publish script with the changes and update only the value of this SQLCMD variable at the beginning of the publish script.
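In practice that means editing a single :setvar line near the top of the generated script before running it in SQLCMD mode; the header looks something like this (variable names as in the sketch above):
:setvar DatabaseName "MyDatabase"
:setvar RefDb "B"   -- was "A" when publishing to DEV; only this value changes per environment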
Of course, this approach requires that all database changes be introduced in the SSDT projects first and deployed to the actual instances afterwards. However, the benefits far outweigh the additional hassle.

If you would like to continue using the connected workflow to edit your database, you may like to consider using synonyms in your database: instead of using a 3-part reference, which may change depending on the environment you're deploying to, you can put the variable part into the synonym and keep the contents of the stored procedure static.
In order to do so, you would need to start by creating a synonym for each table that you want to reference across database boundaries, e.g.
CREATE SYNONYM [dbo].[Syn_MyTable] FOR [$(OtherDb)].[dbo].[MyTable]
Then, in place of the table names, reference the synonyms in your procs, e.g.:
CREATE PROCEDURE MyProc AS
SELECT ID FROM [Syn_MyTable]
Even with this method in place, the caveat is that you'll still be going against the grain if you try to follow this kind of workflow with SSDT, as SSDT is primarily a disconnected database editing tool. For example, if using schema compare to bring the changes into the project from your development environments, it's possible that SSDT will see changes to the synonyms if the destination database name differs. So special care is needed to ensure that the variable syntax within your project's synonym definition is not overwritten.
An alternate approach to database development is supported by a product we make at Redgate called ReadyRoll. ReadyRoll is a project sub-type of SSDT which actually favours a connected editing workflow. For example, with ReadyRoll, if you use a synonym, then it will ignore any differences in the database reference when importing changes into your project (as these are treated as variable by nature).
You can read more about how synonyms work in ReadyRoll, including a sample project, on this forum post:
https://forums.red-gate.com/viewtopic.php?f=199&t=79564&sid=314391978c186c19e50d9d69f266a700

Related

Visual Studio Database Project Parameterise Database Name in Scripts

I have two databases (a test and a live), which I have created in SSMS 2008 R2. I use a Visual Studio 2010 Database Project and a Schema Comparison to suck up the schema from the test environment and control changes etc; that's all working fine and I'm ready to publish the entire database out into the live location.
The problem I have is that although I was careful to give the test and live databases the same name, they reference tables in pre-existing test and live databases that do not have the same name. This means that a correct test Stored procedure might look like this:
SELECT * FROM TestDatabase.dbo.Customers
but the same procedure, when published to the live environment, needs to use the different name of the existing live database:
SELECT * FROM LiveDatabase.dbo.Customers
When it comes to publishing from the Visual Studio Database Project to the live environment, I could do a find-and-replace across all scripts to substitute one database name for another, but I am reluctant to change the scripts directly before publication. Doing so would also mean that the Visual Studio scripts no longer match the test environment scripts, which would confuse things and force check-outs where no changes have been made in any "real" sense, not to mention that slight irregularities in syntax could cause references to be missed.
So my question is this - is there some other way to change all the references in the scripts in the database project (for instance, by using a parameterised database name) in a fairly clean way?
I can easily re-write the stored procedures if necessary, as a one off, but I cannot change the names of the pre-existing databases.
Not sure if you have already found the answer to this. If not, try adding a database reference to the database project and call it 'varDatabase'. Make sure all your scripts are checked into TFS with this reference instead of hardcoding the database name. You can use this reference with the $() syntax, e.g. [$(varDatabase)].
During publish, you can pass the value for this variable as a one-off.
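Applied to the stored procedure from the question, the script in the project would then look something like this (the procedure name is made up for illustration):
CREATE PROCEDURE [dbo].[usp_GetCustomers]
AS
    -- varDatabase resolves to TestDatabase or LiveDatabase depending on the value
    -- supplied at publish time
    SELECT * FROM [$(varDatabase)].[dbo].[Customers];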

How to develop T-SQL in Visual Studio?

We are using Visual Studio 2013 with SSDT, mainly for versioning T-SQL code, so the SQL is developed on the dev server and then we use schema compare to transfer the scripts into Visual Studio (and check them into Git). Before deployment (which we currently do with schema compare, too) we have to replace database and server references (with [$(database)] etc.). If we change the code on the dev server and compare again, those SQLCMD variables are lost. (I would expect schema compare to be smart enough to retain the SQLCMD variables, but I found no way to accomplish this.)
The logical step is to develop SQL in Visual Studio from the start. But so far, it has been hard to convince anybody in the team to do that. One can write SQL and execute it in VS, no problem. One can also switch to SQLCMD mode and execute, all right. But when you create e.g. a view in VS, you must write a CREATE statement; that can be executed once, but it will yield an error if you alter the view and execute the CREATE statement again.
So my question is if anybody has some essential tips on how to do database development exclusively in Visual Studio. We were able to get the database references and all that straight, but not the development process.
I've been streamlining local database development and deployment using Visual Studio database projects for a few years now. Here are some tips.
In general...
Use local db instances: Each developer should have their own database instance installed locally. All scripts (tables, views, stored procs, etc.) should be developed in Visual Studio. Create a publish profile for deploying the project to the local db instance.
Use Publish feature: Confusingly, Visual Studio provides both a Deploy and a Publish option, which ultimately do the same thing. I recommend using just Publish because it's more prominent in the UI and you can create profiles to configure the deployment process for various database instances.
Keep local db up to date: When a developer makes changes in the database project and checks them in to source control then the other developers should check out these changes and republish the project to their local databases.
Create vs. Alter statements
All of your statements should be Create statements. There is no need for Alter statements or existence checks. Everything should be scripted as if you are creating the database objects for the first time. When you deploy or publish, VS will know whether to issue Alter statements for existing objects.
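As a minimal illustration (the table is hypothetical): the project file always contains the full CREATE definition, and the generated publish script contains only the difference.
-- In the database project: the complete definition, including a newly added column
CREATE TABLE [dbo].[Customer]
(
    [CustomerID] INT           NOT NULL PRIMARY KEY,
    [Name]       NVARCHAR(100) NOT NULL,
    [Email]      NVARCHAR(256) NULL    -- new column
);

-- What the publish step emits for a target database that already has the table:
ALTER TABLE [dbo].[Customer] ADD [Email] NVARCHAR(256) NULL;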
Data
Some ideas:
Script your data as a series of Insert statements. Include them in a post-deployment script in the database project (see the sketch at the end of this Data section). But this can be tedious and error-prone.
Keep a database backup that includes all of your test data. When setting up a development environment for the first time, create the database from the backup. After you make significant changes to the data, create a new backup and have your devs recreate their databases from the backup. In most cases it's ok if the backup is out of sync with the schema defined in the project -- simply republish the project (make sure to turn off the "Re-create database" setting so that only the differences are published and thus the data is not lost).
There may be 3rd-party tools to do this, in which case they are worth looking into.
Create your own solution for deploying data. Mine involved the following and worked really nicely (but required a lot of time and effort!):
All data stored in XML files - 1 file per table - whose structure resembled the table
An executable to read the XML files and generate SQL merge (or insert/update) statements for each row of data and save them to a SQL script
A pre-build event in the database project to run the executable and copy the resulting SQL script to a post-deployment script in the project
Publish the project and the data will be pushed during post-deployment
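For the Insert-statements idea above (and the generated merge statements in the last option), the post-deployment script is typically written so it can run on every publish without duplicating rows; a minimal sketch with a made-up lookup table:
-- Post-deployment script in the database project (runs after every publish)
MERGE INTO [dbo].[OrderStatus] AS target
USING (VALUES
    (1, N'Open'),
    (2, N'Shipped'),
    (3, N'Closed')
) AS source ([StatusID], [Name])
    ON target.[StatusID] = source.[StatusID]
WHEN MATCHED THEN
    UPDATE SET [Name] = source.[Name]
WHEN NOT MATCHED BY TARGET THEN
    INSERT ([StatusID], [Name]) VALUES (source.[StatusID], source.[Name]);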
Test/Production Deployments
Publish feature: You can create publish profiles for your test and production environments. However it will include your pre- and post-deployment scripts, and you won't get the versatility that the other options provide.
dacpacs: Ed Elliott covered them in his answer. Advantages: no need for Visual Studio to deploy, they can be deployed via SQL Management Studio or the command line with sqlpackage.exe, they can be easier to work with than a T-SQL deployment script.
Schema Compare: Schema compare may be good if you can use Visual Studio for your deployments and you like to double check all of the changes being deployed. You can also selectively ignore changes which is useful when you aren't lucky enough to have a development environment that completely mirrors production.
An age-old challenge. We've tried to use the data projects as they were defined through the years, but ran into several problems, including the fact that it seemed that these projects changed with every release of Visual Studio.
Now, we use the data project only to integrate with TFS for work item management and source code control. To be able to build sprocs/views in Visual Studio, we write each script using the drop/create pattern. Our scripts also contain security (we made the mistake of using the default schema... if I could go back in time we'd segregate schemas and do schema-based role-level security).
For table schema, we do schema compares to/from a versioned template database.
A typical stored proc looks like this:
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[sp_MyStoredProcedure]') AND type in (N'P', N'PC'))
DROP PROCEDURE [dbo].[sp_MyStoredProcedure]
GO
CREATE PROCEDURE [dbo].[sp_MyStoredProcedure]
@MyParameter int
AS
BEGIN
-- Stored Procedure Guts
select 1
END
Good luck... ultimately, it just has to work for your team.
We are currently in the process of moving from SSMS to SSDT. I see that we are all facing the same problems, and it is very strange that there is no good tutorial on the net (at least I haven't found one yet).
First of all, about the variables: I think you need to update to the newest version of SSDT (2015.02) plus DacFx. We are using it and do not have any problems with variables. It also has some very good new features, such as the option not to drop objects on the target if they do not exist in the source.
However, we settled on using synonyms for all cross-database and linked-server objects. For example, for a table AnotherDatabase.dbo.NewTable we create a synonym [dbo].[syn_AnotherDatabase_dbo_NewTable] FOR [$(AnotherDatabase)].[dbo].[NewTable] and use it in the code instead of referencing the other database directly. The same goes for linked servers: CREATE SYNONYM [syn_LinkedDatabase_dbo_NewTable] FOR [$(LinkedServer)].[$(LinkedDatabase)].[dbo].[NewTable].
Now about the development process. We set the debug target to our dev database in the project properties (later we are going to have a separate database for each developer). Then, when you are modifying stored procedures/views/functions/etc., you open the script, change CREATE to ALTER, and work in the same way as you did in SSMS. You can modify the body, execute it, and execute queries in that window. When you finish, you change ALTER back to CREATE and save the file.
The problem here is with objects that do not support the ALTER statement. In that case, you need to publish the code first. In practice this does not happen very often, so I believe it is not a big deal.
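The round trip looks roughly like this (the view is made up):
-- While iterating against the dev database, temporarily flip CREATE to ALTER and execute:
ALTER VIEW [dbo].[vw_ActiveCustomers]
AS
    SELECT [CustomerID], [Name]
    FROM [dbo].[Customer]
    WHERE [IsActive] = 1;
-- Execute, test, repeat; then change ALTER back to CREATE before saving the file into the project.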
SSDT is mature enough to use for creating your scripts and deploying your changes, but you should move away from using schema compare for deployments and towards doing deployments with sqlpackage.exe.
The process looks something like:
- Write code in VS/SSDT
- Build the project, which results in a dacpac (either on your machine or a CI server)
- Deploy the dacpac to the DB instance, using variables if you need to, to bring the DBs up to date. Use sqlpackage.exe to deploy, or generate scripts which can be deployed manually
It should be pretty straightforward, but please ask if you are not sure about anything!
Ed

Recreate database from RedGate checked-in scripts

We've got a SQL Server instance with some 15-20 databases, which we check into TFS with the help of RedGate. I'm working on a script to be able to replicate the instance (so a developer could run a local instance when needed, for example) with the help of these scripts. What I'm worried about is the dependencies between these scripts.
In TFS, RedGate has created these folders with .sql files for each database:
Functions
Security
Stored Procedures
Tables
Triggers
Types
Views
I did a quick test with PowerShell, just looping over these folders to execute the SQL, but I think that might not always work. Is there a strict ordering I can follow? Or is there some simpler way to do this? To clarify, I want to be able to start with a completely empty SQL Server instance and end up with a fully configured one according to what is in TFS (without data, but that is OK). Using PowerShell is not a requirement, so if it is simpler to do it some other way, that is preferable.
If you're already using RedGate, they have a ton of articles on how to move changes from source control to the database. Here's one which describes moving database code from TFS using the SQL Compare command line:
http://www.codeproject.com/Articles/168595/Continuous-Integration-for-Database-Development
If you compare against an empty database, it will create the script you are looking for.
The only reliable way to deploy the database from scripts folders would be to use Red Gate SQL Compare. If you run the .sql files using PowerShell, the objects may not be created in the right order. Even if you run them in an order that makes sense (functions, then tables, then views...), you still may have dependency issues.
SQL Compare reads all of the scripts and uses them to construct a "virtual" database in memory, then it calculates a dependency matrix for it so when the deployment script is created, things are done in the correct order. That will prevent SQL Server from throwing dependency-related errors.
If you are using Visual Studio with the database project option, it includes a Schema Compare that will allow you to compare what is in the database project in TFS to the local instance. It will create a script for you to have those objects created in the local instance. I have not tried doing this for a complete instance.
At most, you might have to create the (empty) databases on the local instance first and then let Visual Studio see that the tables and other objects are not there.
You could also just take the last backup of each database and let the developer restore them to their local instance. However this can vary on each environment depending on security policy and what type of data is in the database.
I tend to just use PowerShell to build the scripts for me. I have more control over what is scripted out and when, so when I rerun the scripts on the local instance I can do it in the order it needs to be done in. It may take a little more time, but I get better-functioning scripts to work with, and PS is just my preference. There are some good scripts already written in the SQL community that can help you with this. Jen McCown did a blog post collecting all the posts her husband has written on doing just this, right here.
I've blogged about how to build a database from a set of .sql files using the SQL Compare command line.
http://geekswithblogs.net/SQLDev/archive/2012/04/16/how-to-build-a-database-from-source-control.aspx
The post is more from the point of view of setting up continuous integration, but the principles are the same.

Generating database scripts for SQL Server database versioning

In the scope of responsible programming and versioning, I would like to start versioning my database changes, especially since I develop on my own database instance and then move changes to production. I haven't found anything that truly makes sense to me on how to do this. I am using Visual Studio 2010 Pro as my IDE. Is there a document that makes this process simple and able to detect changes to the database with relative ease? Or what should I change in my workflow to make this easier?
One way that I've successfully done this sort of thing in the past is via SQL Source Control. Visual Studio does not offer this functionality for you.
Alternatively, you can use SSMS to generate the database scripts for you and save them as a file; then you can check in the script. You would choose whether to generate the whole DB script in one file or do it on an object-by-object basis. The syncing part will have to be done by you, by executing your scripts in production. In conclusion, a total nightmare.
Redgate also offers SQL Compare, which is great for syncing databases. Take a look at their products if you or your company can afford them.
We use our own DB solution in-house which brings all the tools required for proper DB versioning. While I realize that it may not be a perfect solution for everyone, I invite you to have a look at it (it is open-source): bsn ModuleStore
The versioning aspect is as follows: the tool can script out the SQL semi-automatically, and it reformats the source code into a uniform format. The files will therefore always be identical for the same source, no matter when or by whom something was scripted; this works nicely with non-locking source control systems (especially SVN, Git or Mercurial).
The reformat puts all statements in the same form (e.g. optional keywords such as AS, INNER, OUTER etc. are dealt with), scripts everything to the "dbo" schema (even if it was in a different one), puts all identifiers into square brackets ([something]), uppercases all reserved words, does the indentation, etc.
Besides versioning, the runtime part of the tool can diff the running DB against the CREATE scripts (the DB source code) and apply updates automatically for all non-destructive changes (e.g. updating indexes, constraints, views, stored procedures, triggers, custom types, new tables etc.). Destructive changes have to be scripted manually (table changes, which then usually require data transformations). The runtime will make sure that all updates are performed in a transaction and rolled back if the resulting DB doesn't match the CREATE scripts, so you get the safety of knowing that the DB is exactly on the version required by the application, even if it has been tampered with manually.
Also, multiple "modules" can be used in a single database. Each module is stored as a schema and independent of other schemas, thereby making it possible to add or remove modules from one single DB, and avoiding the need to create multiple databases for different parts of the application. Also, the use of schemas to do this makes sure that there are no name collisions.
It may be worth noting that the toolset has no dependency on SMO; it is autonomous.
Save your database scripts in SVN. Here is the reference: How to use SVN Tortoise
OR
Save your database scripts in VSS. Here is the reference: What is VSS? How can we use it?
In both cases you can keep track of the changes made, so that in the future you can check the history, which is saved in the form of versions.
You can also use a Red Gate product.
EDIT
How do you pull out what has changed?
Use the comparison feature to check the changes made in previous versions.
How do I apply the changes to the live database server?
Download the latest file from server.
I hope you are not using DROP statements for tables in your consolidated script, as that would delete all records from the table.
DROP statements are fine for stored procedures, views, functions, etc.
Please note that you have to run the complete, latest database script file on the production server with the action plan below:
1. Remove DROP statements for schema DDL.
2. Add DROP/CREATE statements for stored procedures and views.
3. Include ALTER statements for schema changes.
Hope this helps.

How do you deal with multiple developers and database changes?

I would like to know how you guys deal with development database changes in teams of 2 or more devs. Do you have a global DB everyone accesses, or maybe a local copy where you manually apply script changes? It would be nice to see the pros and cons you've noticed for each approach, and the number of devs in your team.
Start with "Evolutionary Database Design" by Martin Fowler. This sums it up nicely.
There have been other questions about DB development that may be useful too, for example Is RedGate SQL Source Control for me?
Our approach is that everyone has their own DB; the complete DB can be created from CREATE scripts, with base data if required. All the scripts required for this are in source control.
All scripts are CREATE scripts and they reflect the current state of the database schema. Upgrades are in separate SQL files which can upgrade existing DBs from a specific version to a newer one (run sequentially). After all the updates have been applied, the schema must be identical to what you would get from running the setup scripts.
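As an illustration of the sequential-upgrade idea (the SchemaVersion table and version numbers are hypothetical, not part of the toolset described below), each upgrade file can guard itself so the sequence is safe to re-run:
-- Upgrade_004.sql: applies only on top of version 3, records version 4 when done
IF EXISTS (SELECT 1 FROM [dbo].[SchemaVersion] WHERE [Version] = 3)
   AND NOT EXISTS (SELECT 1 FROM [dbo].[SchemaVersion] WHERE [Version] = 4)
BEGIN
    ALTER TABLE [dbo].[Customer] ADD [Email] NVARCHAR(256) NULL;

    INSERT INTO [dbo].[SchemaVersion] ([Version], [AppliedOn])
    VALUES (4, SYSUTCDATETIME());
END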
We have some tools to do this (we use SQL Server and .NET):
Scripting is done with a tool which also applies a standard formatting so that the changes are well traceable with text diff tools (and by the SCM)
A runtime module takes care of comparing the existing DB objects, running updates if required, automatically applying "non-destructive" changes, and then checking the DB objects again to ensure a correct migration before committing the changes
The toolset is available as open-source project (licensed under LGPL), it's called the bsn ModuleStore (note that it is limited to SQL Server 2005/2008/Azure and to .NET for the runtime part).
We use what was code-named "Data Dude" - the database features in TFS and Visual Studio - to deal with this. When you "get latest" and bring in code that relies on a schema change, you also bring in the revised schemas, stored procedures etc. You right-click the database project and Deploy; that gets your local schema and stored procedures in sync but doesn't overwrite your data. The job of working out the script to get you from your old schema to the new one falls to Visual Studio, not to you or your DBA. We also have "populate" scripts for things like lists of provinces, and a deploy runs them for you.
So much better than the old way, which always fell apart at high-stress times, with people checking in code and then going home, and nobody knowing what columns to add to make the code work, etc.
