Creating a generic update/install script from a SQL Server Database Project - sql-server

Can you create a generalized deployment script from a Sql Server Db Project in VS 2015 that doesn't require a schema compare / publish against a specific target database?
Some background:
We are using Sql Server Database projects to manage our database schema. Primarily we are using the projects to generate dacpacs that get pushed out to our development environments. They also get used for brand new installations of our product. Recently we have developed an add-on to our product and have created a new db project for it, referencing our core project. For new installations of our product where clients want the add-on, our new project will be deployed.
The problem we are having is that we need to be able to generate a "generic" upgrade script. Most of our existing installations were not generated via these projects, and all contain many "custom" stored procedures, etc., specific to that client's installation. I am looking for a way to generate a script that would do an "If Not Exists/Create + Alter" without needing to specify the target database.
Our add-on project only contains stored procedures and a couple of tables, all of which will be new to any client opting for this add-on. I need to avoid dropping items not in the project while being able to deploy all of our new "stuff". I've found the Include Composite Objects option, which I can uncheck so that the deployment is specific to our add-on, but publishing still requires me to specify a target database so that a schema compare can be performed, and the scripts I get are specific to that particular database. I've played with pretty much every option and cannot find a solution.
Bottom Line: Is there a way for me to generate a generic script that I can give to my deployment team whenever the add-on is requested on an existing install without needing to do a schema compare or publish for each database directly from the project?
Right now I am maintaining a separate set of .sql files in our (non-db) project, following the if not exists/create+alter paradigm, that match the items in the db project. These get concatenated during the build of our add-on so that we can give our deployment team a script to run. This is proving to be cumbersome, and we'd like to be able to make use of the database projects for this, if at all possible.
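For illustration, the concatenation step in our build amounts to roughly the following (a simplified PowerShell sketch; the folder and file names are placeholders):

# Concatenate the hand-maintained "if not exists / create + alter" scripts
# into a single deployment script for the add-on (placeholder paths).
$scriptFolder = ".\AddOnScripts"
$outputFile   = ".\AddOnDeploy.sql"
Get-ChildItem -Path $scriptFolder -Filter *.sql |
    Sort-Object Name |
    ForEach-Object { Get-Content $_.FullName; "GO" } |
    Set-Content $outputFile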

The best solution is to give the dacpacs to your installers. They run SQLPackage (maybe through a batch file or PowerShell) and point it at the server/DB to update. It will then either generate the script or apply the update directly. It sounds like they already have access to the servers, so they should be able to do this. SQLPackage should also be installed on the servers, or it can be run locally by the installer as long as they can see the target DB. This might help: schottsql.wordpress.com/2012/11/08/ssdt-publishing-your-project
There are a couple of examples of using PowerShell to do this, but it depends on how much you need to control DB names or server names. A simple batch file where you edit/replace the server/DB names might suffice. I definitely recommend a publish profile, and if this is hitting customer databases they could have modified, setting the various "do not drop if not in project" options is almost essential. As long as your customers haven't made wholesale changes to core objects, you should be good to go.
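For example, a small PowerShell wrapper the installers run might look roughly like this (a sketch, not a tested script; the SqlPackage path, server, database, and file names are placeholders, and the options should match your publish profile):

# Generate an upgrade script from the add-on dacpac for a given target DB,
# keeping objects that exist only in the customer database (placeholder names).
& "C:\Program Files\Microsoft SQL Server\130\DAC\bin\SqlPackage.exe" `
    /Action:Script `
    /SourceFile:"AddOn.dacpac" `
    /Profile:"AddOn.publish.xml" `
    /TargetServerName:"CLIENTSERVER" `
    /TargetDatabaseName:"ClientDb" `
    /OutputPath:"AddOn_Upgrade.sql" `
    /p:DropObjectsNotInSource=False `
    /p:BlockOnPossibleDataLoss=True
# Use /Action:Publish (and drop /OutputPath) to apply the changes directly
# instead of producing a script.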

Related

Azure continuous deployment from GitHub and database upgrades

I have a Web application that I usually deployed using Web Deploy directly from Visual Studio (whatever branch I am currently using in VS - normally master). But now I'm introducing a second web app on Azure that will be built from the same repo but a different branch. To make things simpler I will be configuring both web apps on Azure to integrate directly with GitHub and associate each with a specific branch.
I also added two additional web.config files, Web.Primary.config and Web.Secondary.config, and configured the app settings in the Azure portal for each web app by adding the additional value SCM_BUILD_ARGS and setting it to
SCM_BUILD_ARGS=-p:PublishProfile=Primary // in primary web app
SCM_BUILD_ARGS=-p:PublishProfile=Secondary // in secondary web app
which I understand will transform the correct config file with the specific external services' configurations (DB connection, mail server, etc.).
Now, the additional step that I would like to include in continuous deployment is to run a set of SQL scripts that I have in my repo and have used to manually upgrade the database during Web Deploy from VS. The individual scripts each perform a specific database upgrade step:
backup current tables - backup creates a set of Backup_OriginalTableName tables that are copied from existing ones and populated with existing data
drop whole DB model - all non-backup objects are dropped: procedures, functions, types, views, tables...
create model - creates all tables, views and indices
create user types
create user functions
create stored procedures
restore data to new tables from backup tables - this step may occasionally break if we introduce new non-nullable columns to tables in the new model that don't have defaults defined on them; I will somehow have to mitigate this problem by adding an additional script that adds the missing columns to the backup tables and gives them some defaults, but that's a completely different issue.
I used to also have a set of batch files (BAT) in my VS solution that simply executed sqlcmd against a specific database instance and ran these scripts in the predefined order above (see the sketch after this list). Hence I had these batches:
Recreate Local.bat - this one used additional SQL scripts that, instead of restoring from backup, recreate an empty DB with only the lookup tables populated and some default data for development purposes (like predefined test users)
Restore Local.bat - I used this script to simply restore the database from the backup tables, discarding any invalid data I may have created while debugging/testing since the last DB recreate/upgrade/restore
Upgrade Local.bat - upgrade local development DB executing scripts mentioned above
Upgrade Production.bat - upgrade production DB on Azure executing scripts mentioned above
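A stripped-down PowerShell equivalent of what those batch files do (the server, database, and script names below are placeholders, not the actual files):

# Run the upgrade scripts in a fixed order against one database.
# -E uses Windows authentication; swap for -U/-P (or Azure SQL credentials) as needed.
$server   = ".\SQLEXPRESS"
$database = "MyAppDb"
$scripts  = "01_backup_tables.sql", "02_drop_model.sql", "03_create_model.sql",
            "04_create_types.sql", "05_create_functions.sql",
            "06_create_procedures.sql", "07_restore_data.sql"
foreach ($script in $scripts) {
    # -b makes sqlcmd return a non-zero exit code on the first error
    sqlcmd -S $server -d $database -E -b -i $script
    if ($LASTEXITCODE -ne 0) { throw "Upgrade stopped: $script failed." }
}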
So, to support the whole deployment process I have been doing manually from VS, I would now like to also execute these scripts against the specific Azure SQL DB during continuous deployment. I suppose I should run them right after code deployment, because if that fails, the DB shouldn't be upgraded either.
I'm a bit confused about where and how to do this. Can I configure it somewhere in the Azure portal? I've been looking for resources on the web but can't seem to find any relevant information on how to add deployment steps that execute these scripts. I'd think this is an everyday scenario, as it's hard to think of web apps that don't require databases these days.
Maybe it's just my process for DB upgrade/deployment that is wrong, so let me also know if there is another normal way to do DB upgrade/migration with continuous deployment on Azure... I may change my process to accommodate this.
Note 1: I'm not using Entity Framework or any other full-blown ORM. I'm using NPoco instead, and all my DB logic is built in SPs that the DAL uses.
Note 2: I'm aware of the recently introduced staging capabilities of Azure, but my apps are on a cheaper plan that doesn't support staging, and I want to keep it this way as I may be introducing additional web apps along the way that will use additional code branches and resources (DB, mail, etc.)
It sounds to me like your db project is a good candidate for SSDT and inclusion in source control. You can create a MyDB.sqlproj that builds your db as a dacpac, and then you can use SqlPackage.exe Publish to accomplish your deployment to Azure.
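A minimal sketch of that publish step (assuming the dacpac is the output of the .sqlproj build; server, database, and credentials are placeholders):

# Publish the dacpac built from MyDB.sqlproj to an Azure SQL database.
& "SqlPackage.exe" `
    /Action:Publish `
    /SourceFile:"MyDB.dacpac" `
    /TargetConnectionString:"Server=tcp:myserver.database.windows.net,1433;Database=MyDB;User ID=deploy;Password=<password>;Encrypt=True;" `
    /p:BlockOnPossibleDataLoss=True

Pre- and post-deployment scripts added to the project run as part of the publish, which is one place the existing backup/restore steps could live.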
We recently brought our databases under source control and follow a similar process to build and automatically deploy them (but not to a SQL Azure DB). We've found the source control, SSDT tooling support, and automated deployment options to be worth the effort of setting up and maintaining our project this way.
This SO question has some good notes for Azure deployment of a dacpac using SSDT:
How to publish DACPAC file to a SQL Server database project via SQLPackage.exe of SSDT?

Does SSDT build scripts for only changed objects?

I'm currently in the process of redesigning our department's source control strategy using Team Foundation Server (TFS) in regard to database objects. Essentially, we store nothing in TFS at this time. I have discovered SSDT and really enjoy its integration with Visual Studio, and I think it will make our transition into TFS much easier.
So, does SSDT have the capability of generating scripts based on the deltas between my SSDT project and what is on our server? It seems, from what I have researched, that I will only be able to generate an entire database script.
Requirements (mind you, our developers do not have DDL access to production):
I cannot drop a database to re-create it
I cannot drop ALL objects (e.g., all stored procs) to re-create them - only what I need
Tables will need to be altered, not dropped, and only where something has changed
Dacpac's are out of the question
Our best option based on our environment at this time is to use scripts for updates
Our database environment is currently SQL Server 2008 R2. My SSDT version is the latest 2013 that was published in June.
Yes, if you do a publish from the project you will pretty much meet all of these requirements, though dacpacs are built as part of the process. The schema model and any pre/post-deploy scripts are stored in the dacpac, and the publish compares what "should" be present against what is currently in the database. It then generates a change script of all the changes necessary to bring the database in line with the project.
Make sure you use Refactor/Rename when renaming objects - that will cut down on the table drop/recreate operations. You may want to be careful with the "Drop objects not in the project" options. If you haven't been careful about making sure all objects created on your production server are in your project, you could accidentally drop something important just because someone didn't get it checked in.
There are command-line options for SQLPackage that can generate change-detail reports and scripts you can use. The scripts need to be run through SQLCMD or in SQLCMD mode, but you can definitely produce scripts pretty easily.
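For example (a sketch; the server, database, and file names are placeholders):

# Produce a change-detail report of what a publish would do, without touching the DB.
& "SqlPackage.exe" /Action:DeployReport `
    /SourceFile:"MyDb.dacpac" `
    /TargetServerName:"PRODSERVER" /TargetDatabaseName:"MyDb" `
    /OutputPath:"changes.xml"
# /Action:Script with /OutputPath:"upgrade.sql" writes the change script instead.
# The generated script uses SQLCMD syntax (:setvar and friends), so run it in
# SQLCMD mode in SSMS or via sqlcmd:
#   sqlcmd -S PRODSERVER -d MyDb -E -i upgrade.sql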

How to merge changes from a source-controlled VS2010 database project to local development DB

We're trying out VS2010 database projects for a new development, using the following dev cycle:
Use Management Studio to develop changes on a local DB instance (using the designers etc)
Use VS2010 schema compare to sync / import these changes to the VSDB project
Check in the VSDB project and run automated build / test etc
When I want to 'get latest' from source control, I then:
Update the VSDB project files from source control
Use Schema Compare to push the changes from the project to my local database instance
This is where it starts to break down... Because schema compare is trying to synchronise the two versions, it attempts to undo any changes I've made to my local database as part of my own feature development.
Obviously, you can tell schema compare to skip changes to the objects I've modified, but sadly this doesn't always work correctly: http://connect.microsoft.com/VisualStudio/feedback/details/564026/strange-schema-compare-behavior-sql-2008-database-projects.
Fundamentally, the problem exists because the definitions in the VSDB project are not automatically synchronised with my local database; thus I need to use Schema Compare to do a 'poor man's merge' every time I get a change.
One possible solution could be to:
Use Schema Compare to sync any changes from my local DB to the VSDB project first
Update the VSDB project from source control (therefore using the source control tooling to do the merge, rather than Schema Compare)
Schema Compare the changes from source control into my local DB instance
...which is far from ideal.
Is RedGate SQL Source Control better in this regard?
What about the new 'Juneau' SQL toolset?
You use 'Deploy' to push source changes to the database: either Deploy Solution from the top-level Build menu, or right-click on the project in Solution Explorer and select Deploy.
Deploy is configurable in the Project properties.
HTH
Your process is backwards which is why this is difficult. Changes should flow from VSDB to your database, not the other way around. Try this:
Use the designers in Management Studio if you like them, but script out any changes you make and add them into your VSDB project.
Instead of using Schema Compare, use the built-in Deployment functionality. This will automatically script and deploy the incremental changes to your local database in a single click.
Since you mention other possible solutions, I'll elaborate on how our shop manages data structure changes and their propagation to dev DBs.
For tracking and applying differences, we've written a C# app that effectively abstracts database actions out into classes that we append to an action list. The engine dynamically loads modules that represent database versions, adds each item in the module to a list of actions to be performed for that version upgrade, then processes the list. Actions include DataRowInsertAction, TableCreateAction, ColumnModifyAction, etc.
One benefit of this approach is that we can commit standard .cs files to Subversion, and users can bring their own dev databases up to date simply by checking out the latest and running it. Another huge advantage is that we can target multiple database engines, since the Actions themselves know what SQL to render based on which database engine is being targeted.
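As a rough illustration of that action-list idea (the real tool is C#; this PowerShell sketch and every name in it are made up):

# One "version module": an ordered list of actions, each knowing the SQL to
# render per database engine (illustrative only).
$version24 = @(
    @{ Name = "TableCreateAction"
       Sql  = @{ mssql = "CREATE TABLE Widget (Id int NOT NULL)"
                 pgsql = "CREATE TABLE widget (id integer NOT NULL)" } }
    @{ Name = "ColumnModifyAction"
       Sql  = @{ mssql = "ALTER TABLE Widget ALTER COLUMN Id bigint NOT NULL"
                 pgsql = "ALTER TABLE widget ALTER COLUMN id TYPE bigint" } }
)
$engine = "mssql"   # the engine this upgrade run targets
foreach ($action in $version24) {
    Write-Host "Applying $($action.Name)"
    # Invoke-Sqlcmd -Query $action.Sql[$engine]   (execution omitted in this sketch)
}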
As a side note, we use AdeptSQL to compare databases, and love it. It'll create a complete list of differences, and you can generate a script to go either direction (given Database 'A' and Database 'B', upgrade A to B, or downgrade B to A.)
For a small additional charge, they offer extended functionality to perform a data diff as well.
http://www.adeptsql.com/
The idea behind SQL Source Control is basically to turn the development process on its head - instead of working with database scripts and pushing the changes to a database, you make the changes to the database, and SQL Source Control calculates the deltas, updates the local scripts, and allows you to commit the changes to your source control system.
SQL Source Control currently only integrates with SQL Server Management Studio, but there is now a VS package called SQL Connect that you can use in VS 2010 to work in much the same way as in SQL Source Control. http://www.red-gate.com/products/sql-development/sql-connect/index-2

Proper structure of asp.net website and database in visual studio

My main problem is: where does the database go?
The project will be on SVN and is developed using the ASP.NET MVC repository pattern. Where do I put the SQL Server database (MDF file)? If I put it in App_Data, then my other teammates can check out the source and database and run it, with the database being deployed in the VS instance.
The problems with this method are:
I cannot use SQL Management Studio with this database.
Most web hosts require me to deploy the database using their UI or SQL Management Studio. Putting it in App_Data makes no sense there.
The connection string has to be edited each time I move from testing locally to testing on the web host.
If I create the database using SQL Management Studio, my problems are:
How do I keep this consistent with source control (teammates have to re-script the DB if the schema changes)?
The connection string again (I'd like to automatically use the right string when on the production server).
Is there a solution to all my problems above? Maybe some pattern or tool that I am missing?
Basically your two points are correct - unless you're working off a central database, everyone will have to update their database when changes are made by someone else. If you're working off a central database you can also get into issues where a database change is made (i.e., a column dropped) and the corresponding source code isn't checked in. Then you're all dead in the water until the source code is checked in, or the database is rolled back. Using a central database also means developers have no control over when database schema changes are pushed to them.
We have the database installed on each developer's machine (especially good since we target different DBs; each developer has one of the supported databases, giving us really good cross-platform testing as we go).
Then there is the central 'development' database, which the 'development' environment points to. It is built by continuous integration on each check-in, and upon a successful build/test it is published to development.
Changes that developers make to the database schema on their local machine need to be checked into source control. They are database upgrade scripts that make the required changes to the database from version X to version Y. The database is versioned. When a customer upgrades, these database scripts are run on their database to bring it up from their current version to the required version they're installing.
These dbpatch files are stored in the following structure:
./dbpatches
    ./23
        ./common
            ./CONV-2345.dbpatch
        ./pgsql
            ./CONV-2323.dbpatch
        ./oracle
            ./CONV-2323.dbpatch
        ./mssql
            ./CONV-2323.dbpatch
In the above tree, version 23 has one common dbpatch that is run on any database (it is ANSI SQL), and a specific dbpatch for the three databases that require vendor-specific SQL.
We have a database update script that developers can run which runs any dbpatch that hasn't been run on their development machine yet (irrespective of version - since multiple dbpatches may be committed to source control during a single version's development).
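A bare-bones sketch of such an update script (PowerShell for illustration; the AppliedDbPatch tracking table, the paths, and the use of sqlcmd are assumptions rather than the actual implementation):

# Apply every dbpatch not yet recorded in a tracking table on the local dev DB.
# This sample dev machine runs MSSQL, so the common and mssql patches apply.
$server   = ".\SQLEXPRESS"
$database = "DevDb"
$applied  = sqlcmd -S $server -d $database -E -h -1 -W `
    -Q "SET NOCOUNT ON; SELECT Name FROM AppliedDbPatch"
Get-ChildItem .\dbpatches -Recurse -Filter *.dbpatch |
    Where-Object { $_.Directory.Name -in "common", "mssql" } |
    Sort-Object FullName |
    ForEach-Object {
        if ($applied -notcontains $_.Name) {
            sqlcmd -S $server -d $database -E -b -i $_.FullName
            sqlcmd -S $server -d $database -E `
                -Q "INSERT INTO AppliedDbPatch (Name) VALUES ('$($_.Name)')"
        }
    }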
Connection strings are maintained in NHibernate.config; if a NHibernate.User.config is present it is used instead, and NHibernate.User.config is excluded from source control. Each developer has their own NHibernate.User.config, which points to their local database and sets the appropriate dialects, etc.
When pushing to development, we have a NAnt script which does the variable substitution in the config templates for us. The same script is used when going to staging, as well as when building packages for release. The NAnt script populates a template config file with variable values from the environment's settings file.
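For illustration, that substitution step amounts to something like this (the original uses NAnt; this PowerShell sketch, the @TOKEN@ format, and the file names are assumptions):

# Fill a config template with values from an environment settings file.
$settings = @{}
Get-Content ".\environments\development.properties" |
    Where-Object { $_ -match "=" } |
    ForEach-Object {
        $key, $value = $_ -split "=", 2
        $settings[$key.Trim()] = $value.Trim()
    }
$template = Get-Content ".\NHibernate.config.template" -Raw
foreach ($key in $settings.Keys) {
    $template = $template -replace "@$key@", $settings[$key]
}
Set-Content ".\NHibernate.config" $template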
Use Management Studio or Visual Studio's Server Explorer. App_Data isn't used much "in the real world".
This is always a problem. Use a tool like SqlCompare from Redgate or the built-in Database Compare tools of Visual Studio 2010.
Use Web.Config transformations to automatically update the connection string.
I'm not an expert by any means but here's what my partner and I did for our most recent ASP.NET MVC project:
Connection strings were always the same since we were both running SQL Server Express on our development machines, as were our staging and production servers. You can just use a dot instead of the computer name (e.g. ".\SQLEXPRESS" or ".\SQL_Named_Instance").
Alternatively you could also use web.config transformations for deploying to different machines.
As far as the database itself, we just created a "Database Updates" folder in the SVN repository and added new SQL scripts when updates needed to be made. I always thought it was a good idea to have an organized collection of database change scripts anyway.
A common solution to this type of problem is to have the database versioning handled in code rather than storing the database itself in version control. The code is typically executed on app_start, but it could be triggered in other ways (build/deploy process). Then developers can run their own local databases or use a shared development database. The common term for this is database migrations (migrating from one version to the next). Here is a Stack Overflow question about .NET tools/libraries that make this easier: https://stackoverflow.com/questions/8033/database-migration-library-for-net
This is the only way I would handle this on projects with multiple developers. I've used this successfully with teams of over 50 developers and it's worked great.
The Red Gate solution would be to use SQL Source Control, which integrates into SSMS. It maintains a SQL scripts folder structure in source control, which you can keep in the same folder/repository that you keep your app code in.
http://www.red-gate.com/products/SQL_Source_Control/

How can I set my deployment options to script the incremental release of a Visual Studio 2010 database project?

I've just started using a VS2010 database project to manage the release of an update to an existing database.
I want the deployment option to generate a script that will contain the commands to change my existing database rather than create an entirely new one.
E.g., I have 10 existing tables, one of which I drop in the new version, and I create some new sprocs. I only want the deploy to script the Drop Table and Create Procedure commands.
I am using VS2010 Premium.
Is there a recommended standard approach I could follow to managing DBs in a project from initial creation to incremental releases?
Thanks!
There is an "Always re-create database" in the project's .sqldeployment file. Unchecking this option will result in an auto-generated SQL script that will incrementally update your database without dropping it first.
There is also an option to "Generate DROP statements for objects that are in the target databse but that are not in the database project." You will need to check this option, if you want tables, stored procs, etc. to get dropped if you've deleted them in the database project. This will delete any table, etc. that users may have created on their own for testing, debugging, etc.
To change the options in the .sqldeployment file. Open the file in Visual Studio. Either expand the database project in the solution explorer, the double click on the .sqldeployment file (it will probably be in the Properties folder under the DB project). Or open the properties page for the database project and click the "Edit..." button next to the "Deployment configuration file". Check or uncheck the options you want when the database deploys.
I use VSDBCMD.exe for 1-click build & deploy scripts I've created. It works very well. VSDBCMD uses a .sqldeployment file -- the default .sqldeployment file is specified in the .deploymanifest file, but it can be overridden by specifying it as a parameter when executing VSDBCMD. Also, I believe that Visual Studio uses VSDBCMD under the covers when it deploys the database project, but I just assume that to be the case since the functionality is pretty much identical.
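For reference, a VSDBCMD call along these lines produces the script without deploying it (flag names quoted from memory; check VSDBCMD /? for your version, and the file/server names are placeholders):

# Generate the incremental deployment script from the project's build output.
VSDBCMD.exe /a:Deploy /dd:- `
    /manifest:"MyDatabase.deploymanifest" `
    /cs:"Data Source=PRODSERVER;Integrated Security=True" `
    /p:TargetDatabase="MyDatabase" `
    /DeploymentScriptFile:"MyDatabase_Upgrade.sql"
# /dd:+ deploys directly instead of only writing the script, and additional
# /p: switches can override individual .sqldeployment options.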
I asked a similar question a while back on the MSDN Forums and was told that the recommended way to do this is to use VSDBCMD. Basically, you output a schema file from your database project which contains all of the information about your database, and then you run VSDBCMD to compare your schema to the target database. This in turn creates the script to update the target database to your current schema.
The rationale for this approach is that, just because you and I may think we know what the target database's schema looks like, we can't really be sure until we let VSDBCMD run the comparison. Who knows, someone else may have modified the schema in the target database without our knowledge, so our change script may end up failing for some unknown reason.
I really wasn't terribly satisfied with this approach and ended up continuing to use my "old approach" of hand-coding my change scripts when necessary, but I am eager to see if anything has changed in 2010 that makes this a bit easier to work with. I'd really like to see a simple API that does what VSDBCMD does so I can put a GUI together to simplify updating a target (in my case, client) database without the person running the upgrade having to be a DBA.
