How to avoid circular references with a Visual Studio 2012 database project - sql-server

I am trying to create a database solution in Visual Studio 2012.
I have 5 databases that all exist on the same server, so I have imported each of them into a database project.
Unfortunately, quite a few objects in the databases reference objects in other databases, so I have been trying to resolve these by adding Database References, which create database variables I can use for the cross-database references.
The problem is that I have database A referencing B and also B referencing A, and it won't allow me to add these two references. I get the message box saying "Adding this project as a reference would cause a circular dependency".
Any ideas on approaches to resolving this? I guess one way might be to create a solution for each database, but I would rather not if there is a better way to do it.
We are trying to set up an automated build, so I really need the database projects to compile.

You'll want to extract your database schemas into dacpac files (assuming that you're using SSDT .sqlproj projects and not the older .dbproj projects). You can do this through SSMS or through the SqlPackage command line. Once extracted, put them somewhere that all of the projects can reach (preferably somewhere that's still under source control so everyone else can reference them). Add those dacpac files to your projects as database references, probably without the option to use a variable for the DB name.
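For example (the view, table, and database names below are made up for illustration), once the dacpac for database B has been added as a reference to project A, an object in project A can refer to B's objects and the project will still build:

-- Hypothetical view in project A that joins to a table defined in the referenced B.dacpac.
CREATE VIEW [dbo].[vw_CustomerOrders]
AS
SELECT c.CustomerId, o.OrderDate
FROM [dbo].[Customer] AS c
JOIN [DatabaseB].[dbo].[CustomerOrder] AS o   -- or [$(DatabaseB)] if the reference was added with a variable
    ON o.CustomerId = c.CustomerId;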
I've written up adding database references on my blog here:
http://schottsql.blogspot.com/2012/10/ssdt-external-database-references.html
One additional note - if you're building new databases from these projects, you'll likely need to go through several passes. Turn off the options to run the deployment as a transaction and to stop when there's an error so that some objects get created on each pass, and repeat as needed until all objects are created. I used a variable in my projects called "DeployType" and set up my pre- and post-deploy scripts to handle a DeployType of "New" differently so they wouldn't attempt to populate/update any data for "New" builds.
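As a rough sketch of that last idea (the messages and layout here are simplified for illustration, not copied from my projects), the post-deploy script can branch on the SQLCMD variable:

-- Post-deployment script sketch: "DeployType" is a SQLCMD variable defined in the
-- project and overridden per publish profile or command line.
IF '$(DeployType)' = 'New'
BEGIN
    PRINT 'New build - skipping reference data updates.';
END
ELSE
BEGIN
    PRINT 'Existing database - applying reference data updates.';
    -- data population/updates for existing databases go here
END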

Related

Versioning SQL Server Database with data using SSDT

I have a SQL Server database that already contains data, and I want to start versioning it. I know I can use a database project in Visual Studio, and by importing the database I can generate SQL scripts.
But what about the data in the database? I tried to make some Data-tier Application files, but when I try to import one into my DB project in Visual Studio I get this error:
Import Data-Tier Application File - This operation is not supported for packages containing data
So how do I import the data? There has to be a way, because when I extract a DAC file there is an option to Extract Schema and Data, so there must be a way to use that data afterwards.
Or maybe post-deployment scripts are the only option?
Greetings
Your only option for this at this time is to use post-deploy scripts to populate those tables, taking into account the fact that the scripts need to be able to run multiple times without re-inserting data. A temp table/table variable and a MERGE statement are probably your best bets if you might have changes to the reference data, otherwise a left join might suffice.
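A minimal sketch of that pattern (the lookup table and its columns are invented for the example) that can safely run on every deployment:

-- Post-deployment script: idempotent load of a small lookup table.
DECLARE @OrderStatus TABLE (StatusId INT NOT NULL, StatusName VARCHAR(50) NOT NULL);

INSERT INTO @OrderStatus (StatusId, StatusName)
VALUES (1, 'Open'), (2, 'Shipped'), (3, 'Cancelled');

MERGE [dbo].[OrderStatus] AS target
USING @OrderStatus AS source
    ON target.StatusId = source.StatusId
WHEN MATCHED AND target.StatusName <> source.StatusName
    THEN UPDATE SET StatusName = source.StatusName
WHEN NOT MATCHED BY TARGET
    THEN INSERT (StatusId, StatusName) VALUES (source.StatusId, source.StatusName);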
Others have tried to include reference data, but it's a pretty hard problem to solve in a manner that works well for everyone. I know others like Ed Elliott have written some stuff that can turn those on/off as needed so you're not always including all reference data every time. You could also look into a post-post-deploy scenario where after your publish and post-deploy, you run a separate script that updates the data from static files. They'd still be in source control, but not necessarily part of your SSDT project. You'd have to remember to run that script in your builds, though.
I know for a while we had a database that solely had the lookup tables populated so we could reference that and do data compares if needed, but that still requires someone to maintain those values in an ongoing manner.

TFS / SSDT Deployment in Multi Environment Scenario

This is the scenario I am currently experiencing.
In the development environment, developers usually make changes in the DEV SQL Server, then do a schema compare in Visual Studio 2013 / TFS, update the TFS project, and check the changes in.
Now, say that in DEV there are many stored procedures in a database that refer to a database called A; however, in the SIT environment this database is called B.
When I want to deploy these stored procedures from TFS to the SIT environment, is there an (automated) way to replace database A with database B, so that the stored procs do not break in SIT?
The workaround I used was to generate the publish script (via TFS > Publish > Generate Script), copy and paste that script into SSMS, and replace all references to database A with database B.
However, this is quite manual (and not foolproof - I have to be really careful about what I replace), so I am wondering if there is a feature/capability to do this more efficiently?
Thanks in advance.
Cheers
There is functionality for that, but it might require some significant changes to your workflow.
You can use SQL Server database projects in SSDT to store your database code. In this case, you can declare a project-level SQLCMD variable holding the other database's name and then reference its objects in your SSDT project using SQLCMD syntax.
Or better yet, you can create projects for both databases and add the latter's DACPAC file as an external reference to the former. That will create a corresponding SQLCMD variable automatically and will make IntelliSense available for the linked database's objects.
During deployment, you can generate the publish script with the changes and update only the value of this SQLCMD variable at the beginning of the script.
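For example (the variable and database names are placeholders), the header of the generated publish script is the only thing that needs to change between environments:

-- Top of the generated publish script: point the variable at the right database per environment.
:setvar OtherDb "DatabaseB"   -- e.g. "DatabaseA" in DEV, "DatabaseB" in SIT

-- Every cross-database reference in the script then resolves through the variable, e.g.:
-- SELECT ... FROM [$(OtherDb)].[dbo].[SomeTable]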
Of course, this approach requires that all database changes be introduced in the SSDT projects first and deployed to the actual instances later. However, the benefits far outweigh the additional hassle.
If you would like to continue using the connected workflow to edit your database, you may like to consider using synonyms in your database: instead of using a 3-part reference, which may change depending on the environment you're deploying to, you can put the variable part into the synonym and keep the contents of the stored procedure static.
In order to do so, you would need to start by creating a synonym for each table that you want to reference across database boundaries, e.g.
CREATE SYNONYM [dbo].[Syn_MyTable] FOR [$(OtherDb)].[dbo].[MyTable]
Then, in place of the table names, reference the synonyms in your procs, e.g.:
CREATE PROCEDURE [dbo].[MyProc]
AS
    SELECT ID FROM [dbo].[Syn_MyTable];
Even with this method in place, the caveat is that you'll still be going against the grain if you try to follow this kind of workflow with SSDT, as SSDT is primarily a disconnected database editing tool. For example, if using schema compare to bring the changes into the project from your development environments, it's possible that SSDT will see changes to the synonyms if the destination database name differs. So special care is needed to ensure that the variable syntax within your project's synonym definition is not overwritten.
An alternate approach to database development is supported by a product we make at Redgate called ReadyRoll. ReadyRoll is a project sub-type of SSDT which actually favours a connected editing workflow. For example, with ReadyRoll, if you use a synonym, then it will ignore any differences in the database reference when importing changes into your project (as these are treated as variable by nature).
You can read more about how synonyms work in ReadyRoll, including a sample project, on this forum post:
https://forums.red-gate.com/viewtopic.php?f=199&t=79564&sid=314391978c186c19e50d9d69f266a700

Creating a generic update/install script from a SQL Server Database Project

Can you create a generalized deployment script from a SQL Server database project in VS 2015 that doesn't require a schema compare / publish against a specific target database?
Some background:
We are using Sql Server Database projects to manage our database schema. Primarily we are using the projects to generate dacpacs that get pushed out to our development environments. They also get used for brand new installations of our product. Recently we have developed an add-on to our product and have created a new db project for it, referencing our core project. For new installations of our product where clients want the add-on, our new project will be deployed.
The problem we are having is that we need to be able to generate a "generic" upgrade script. Most of our existing installations were not generated from these projects, and they all contain many "custom" stored procedures, etc. specific to that client's installation. I am looking for a way to generate a script that does an "If Not Exists/Create + Alter" without needing to specify the target database.
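To be concrete, by "If Not Exists/Create + Alter" I mean scripts along these lines (the procedure name below is just a made-up example):

IF NOT EXISTS (SELECT 1 FROM sys.objects
               WHERE object_id = OBJECT_ID(N'[dbo].[usp_MyAddOnProc]') AND type = N'P')
BEGIN
    -- Create a stub so the ALTER below always has something to alter.
    EXEC ('CREATE PROCEDURE [dbo].[usp_MyAddOnProc] AS RETURN 0;');
END
GO

ALTER PROCEDURE [dbo].[usp_MyAddOnProc]
AS
BEGIN
    SELECT 1; -- real body goes here
END
GO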
Our add-on project only contains stored procedures and a couple tables, all of which will be new to any client opting for this add-on. I need to avoid dropping items not in the project while being able to deploy all of our new "stuff". I've found the option to Include Composite Objects which I can uncheck so that the deployment is specific to our add-on, but publishing still requires me to specify a target database so that a schema compare can be performed and I get scripts that are specific to that particular database. I've played with pretty much every option and cannot find a solution.
Bottom Line: Is there a way for me to generate a generic script that I can give to my deployment team whenever the add-on is requested on an existing install without needing to do a schema compare or publish for each database directly from the project?
Right now I am maintaining a separate set of .sql files in our (non-db) project, following the if not exists/create + alter paradigm, that match the items in the db project. These get concatenated during the build of our add-on so that we can give our deployment team a script to run. This is proving to be cumbersome, and we'd like to be able to make use of the database projects for this, if at all possible.
The best solution is to give the dacpacs to your installers. They run SqlPackage (maybe through a batch file or PowerShell) and point it at the server/DB to update. It can then generate the script or apply the update directly. It sounds like they already have access to the servers, so they should be able to do this. SqlPackage should also be installed on the servers, or it can be run locally by the installer as long as they can see the target DB. This might help: schottsql.wordpress.com/2012/11/08/ssdt-publishing-your-project
There are a couple of examples of using PowerShell to do this, but it depends on how much you need to control DB names or server names. A simple batch file where you edit/replace the server/DB names might suffice. I definitely recommend a publish profile, and if this is hitting customer databases that they could have modified, setting the "do not drop if not in project" options that show up is almost essential. As long as your customers haven't made wholesale changes to core objects, you should be good to go.

Can I Manage Only a Portion of my Database with a Database Project?

We have a third-party SQL Server 2008 database that we have extended with our own tables, sprocs, etc. We would like to use a Visual Studio Database Project to manage our extension objects, but NOT the objects that are part of the third-party database.
If I create a project with only our objects in it, the deployment errors out because VS thinks that the tables those objects reference (which are part of the original database) do not exist (because they are not part of the project).
I tried to create a DACPAC for the original database and just reference that, but it looks like the database contains some kinds of objects that can't be packaged that way. I also tried doing a full schema compare and adding all of the third-party DB objects into my project, but there are so many objects that it appears to bomb VS. I will try that again today using a local database to see if perhaps a network issue was contributing to that problem.
I'm not opposed to turning off those kinds of errors, if that is possible. I'd appreciate any suggestions.
So the answer was the DACPAC. I had tried, with two different databases, to use the tool in the SSMS context menu for databases where you Extract a Data-tier Application, and both of those databases failed. Following another post on S.O. I discovered the command-line tool, SqlPackage, that ships with the SQL Server data-tier application framework (DacFx). Here is a link that explains how to utilize it. Anyway, I ran it against each of those databases and it didn't even hiccup. Nailed them both.
For me the application was located at: C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin
After you create the DACPAC files, copy them somewhere in your project, then do "Add Database Reference" either from the project menu or from the context menu of the References node in Solution Explorer. Add the reference and remove the variable name it gives you. Select "Same Database" in the drop-down and you will see at the bottom what a sample query will look like. Hit OK and it sets it all up for you. All the referential errors disappear. Problem solved.
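To illustrate (the object names here are invented), with a "Same Database" reference the third-party objects are referenced with plain two-part names, no SQLCMD variable required:

-- Stored procedure from our extension objects that joins one of our tables to a table
-- that exists only in the referenced third-party dacpac.
CREATE PROCEDURE [dbo].[usp_GetWidgetDetails]
AS
BEGIN
    SELECT w.WidgetId, w.AddedOn, t.Description
    FROM [dbo].[WidgetExtension] AS w
    JOIN [dbo].[ThirdPartyWidget] AS t   -- resolves through the "Same Database" reference
        ON t.WidgetId = w.WidgetId;
END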

How can I set my deployment options to script the incremental release of a Visual Studio 2010 database project?

I've just started using a VS2010 database project to manage the release of an update to an existing database.
I want the deployment option to generate a script that will contain the commands to change my existing database rather than create an entirely new one.
E.g. I have 10 existing tables, one of which I drop in the new version, and I create some new sprocs. I only want the deployment to script the DROP TABLE and CREATE PROCEDURE commands.
I am using VS2010 Premium.
Is there a recommended standard approach I could follow to managing DBs in a project from initial creation to incremental releases?
Thanks!
There is an "Always re-create database" in the project's .sqldeployment file. Unchecking this option will result in an auto-generated SQL script that will incrementally update your database without dropping it first.
There is also an option to "Generate DROP statements for objects that are in the target databse but that are not in the database project." You will need to check this option, if you want tables, stored procs, etc. to get dropped if you've deleted them in the database project. This will delete any table, etc. that users may have created on their own for testing, debugging, etc.
To change the options in the .sqldeployment file. Open the file in Visual Studio. Either expand the database project in the solution explorer, the double click on the .sqldeployment file (it will probably be in the Properties folder under the DB project). Or open the properties page for the database project and click the "Edit..." button next to the "Deployment configuration file". Check or uncheck the options you want when the database deploys.
I use VSDBCMD.exe for 1-click build & deploy scripts I've created. It works very well. VSDBCMD uses a .sqldeployment file -- the default .sqldeployment file is specified in the .deploymanifest file, but it can be overridden by specifying it as a parameter when executing VSDBCMD. Also, I believe that Visual Studio uses VSDBCMD under the covers when
it deploys the database project, but I just assume that to be the case since the functionality is pretty much identical.
I asked a similar question a while back on the MSDN Forums and was told that the recommended way to do this is to use VSDBCMD. Basically, you output a schema file from your database project which contains all of the information about your database, and then you run VSDBCMD to compare your schema to the target database. This in turn creates the script to update the target database to your current schema.
The rationale for this approach is that just because you and I may think we know what the target database's schema looks like, we can't really be sure until we let VSDBCMD run the comparison. Who knows, someone else may have modified the schema in the target database without our knowledge, so our change script might end up failing for some unknown reason.
I really wasn't terribly satisfied with this approach and ended up continuing to use my "old approach" of hand-coding my change scripts when necessary, but I am eager to see if anything has changed in 2010 that makes this a bit easier to work with. I'd really like to see a simple API that does what VSDBCMD does so I can put a GUI together to simplify updating a target (in my case, client) database without the person running the upgrade having to be a DBA.
