How to update a SQL database during deployment instead of creating a new one - sql-server

I have a Visual Studio 2010 website project that I'm creating a Deployment Package for. My website relies on a SQL database that I also include in the deployment package. Importing the deployed package in IIS on the production server worked great during the first deployment, when there wasn't already a database. For the second+ deployments, I get an error during the import that says there is already an object named such and such. It looks like it's trying to create the database again instead of updating the database schema since the last deployment.
I'm creating the deployment package, copying the zip to the production server, and importing it directly into IIS via Web Deploy 2.1. How can I just update the database? I tried having a Visual Studio SQL database project and having the package include just the generated .sql file from the "Deploy" of that project, but I was met with other unrelated problems. It just can't be this difficult. How should I deploy database schema updates?

You'll need to deploy database changes using custom scripts (if you want to preserve data).
Or, if you just want to drop and recreate certain objects, you can do this using automatically generated scripts.
Either way, it's not as easy as pushing out the original database.
If you don't have database changes you can exclude the database from the deploy (Package/Publish Web tab - untick the database).
Please see the following article http://msdn.microsoft.com/en-us/library/dd465343.aspx
Pay special attention to the "Redeploying By Using Automatically Generated Scripts" and "Deploying Database Changes by Using Custom Scripts" sections.
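As a rough illustration of the custom-script route (this is not from the linked article; the server, database and script names below are placeholders), the deployment could apply a hand-written incremental change script with sqlcmd after the package import, instead of letting the package re-run the full create script:
REM Apply an incremental, data-preserving change script to the existing database.
REM -b makes sqlcmd return an error code if the script fails, so a bad change stops the deployment.
sqlcmd -S .\SQLEXPRESS -d MyAppDb -E -b -i UpdateSchema_v2.sql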

Related

dacpac file Publish error to LocalDB: "The element cannot be deployed. This element contains state that cannot be recreated in the target database."

Pretty simple you would think, but as I cannot edit the schema in this database I have no idea how to get past this error when I am publishing my dacpac file to my local database. I am trying to take a copy of a database that is hosted in Azure and have it locally for my own development purposes. I am not a sysadmin of the database, but I have complete access to it other than that. It is a production database so I can't mess anything up for obvious reasons.
I had a hell of a time even getting this dacpac file created in the first place. I was getting far more errors/warnings when trying to export as a bacpac file with data (which is what I really want to do, but I can worry about that later).
Here is the command I am trying:
SqlPackage.exe /Action:Publish /SourceFile:"C:\Data\opkCore.dacpac" /TargetConnectionString:"Data Source=(localdb)\MSSQLLocalDB;Initial Catalog=opkCore; Integrated Security=true;"
This is what I used to create the dacpac file:
sqlpackage.exe /Action:Export /ssn:tcp:<MyDatabase>.database.windows.net,1433 /sdn:opkCore /su:<MyUserName> /sp:<MyPassword> /tf:C:\Data\opkCore.bacpac
I have tried other solutions such as:
Export Data-tier Application, but I am limited to only doing it in an Azure container and I am not in control of that. It is a Pay-As-You-Go model which does not support blob storage apparently
Copy Database only works for 2005 and earlier and this is SQL 2019
Deploy Database to Microsoft SQL Server Azure SQL Database
Import Data-tier Application, same problem as #1
Exporting bacpac file using SqlPackage.exe - Errors all over the place that I cannot fix. The database is not mine to mess up
I CAN export tables one at a time, but then I am missing certain bits of schema that work together, so I get errors there also.
I really should be able to just get a local copy of the database in the EXACT same state that it is currently in on our production server. Any other ideas on how I can do this that will ignore problems with the database and just get me a local copy that matches EXACTLY the way the database is in production? Are there 3rd-party tools that do this, or anything else?
I decided to just script all the tables and run the script on my new DB. There were a lot of errors, but it did what it could which was 99.99% of the database schema and that is good enough for my purpose. Maybe I will try to get the data exported and imported as well.
EDIT: To export the data I just used SSMS Export data and the destination used was the new LocalDB database I just created from the scripts.
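For anyone following the same route, a minimal sketch of the "script it, then run it locally" approach against the LocalDB instance from the question might look like the following (the schema script path is a placeholder; the script itself would come from SSMS "Generate Scripts" or similar):
REM Create the empty local database, then run the generated schema script against it.
sqlcmd -S "(localdb)\MSSQLLocalDB" -E -Q "CREATE DATABASE opkCore"
sqlcmd -S "(localdb)\MSSQLLocalDB" -d opkCore -E -b -i "C:\Data\opkCore_schema.sql"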

How to update an SSDT project from the command line?

I hope to be able to use SSDT (SQL Server Data Tools) to put our database schema under version control. Importing a database into an SSDT project in Visual Studio creates a nice textual representation of the database schema, suitable for versioning.
Now, the question is, when changes are made to the database schema - how can we programmatically, or from the command line, update (or re-import) the SSDT project?
You can use the built-in schema compare tool to do this, but it seems that you are not developing the way SSDT database projects want you to. It's designed for offline editing, which means you need to first edit the .sql files in your project, and use F5 to deploy to your dev DB for testing.
If you'd prefer to keep working connected, you might want to try SQL Change Automation, a tool we offer here at Redgate. This has a one-click import option that automates the pulling down of database changes to your database project.
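If all you need from the command line is a snapshot of the changed database to feed into Schema Compare, a hedged sketch using SqlPackage's Extract action (server, database and output path below are placeholders) would be:
REM Snapshot the dev database's schema into a dacpac.
SqlPackage.exe /Action:Extract /SourceConnectionString:"Server=.\SQLEXPRESS;Database=MyDevDb;Integrated Security=SSPI;" /TargetFile:"C:\Temp\MyDevDb.dacpac"
The resulting dacpac can then be used as the source in Schema Compare to update the project, although that final step is still done inside Visual Studio rather than from the command line.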

What happens when we publish the database project through Visual Studio

I have been working on a project which has a database project in it, and I used to publish that database whenever I made changes to the scripts. I have noticed that when I publish the database project it builds first and creates a dacpac file, and then it publishes after I select the target database. I am interested in knowing what role that dacpac file plays in publishing the SQL database.
Also, I found the following when I was trying to read about the pros and cons of dacpacs. Does it really work like that?
Link
The biggest problem with DACPACs has to do with the way a data-tier application is released to push version changes from the DAC into SQL Server. This is done by creating a new database with a temporary name, generating the new objects in the database, and then moving all the data from the existing database to the new one. After all the data has been transferred and the post-release scripts run, the existing database is dropped and the new database is given the correct name.
The dacpac file is the compiled build output of the database project. It's analogous to a .dll file built from a C# class library project. All of the information you defined in your database project about your database is stored in the dacpac file, along with information about the relationships between the objects.
When a dacpac file is published, the target database is compared to the dacpac and the tool will figure out what T-SQL to execute to make the target database match the dacpac's definition.
Regarding the article, note that the Data-Tier Application Framework that shipped with SQL Server 2008 R2 was largely rewritten/replaced for SQL Server 2012, so that article, while correct regarding that very old version of the Data-Tier Application Framework, is not correct regarding the tools available today.
The DACPAC file is a Zip file that contains an XML representation of your database schema. It does not contain any table data (unless you provide pre- and post-deployment scripts). More information is available here: https://www.simple-talk.com/sql/database-delivery/microsoft-and-database-lifecycle-management-(dlm)-the-dacpac/
When a DACPAC is deployed, the tool compares the dacpac's schema with the target database's current schema and generates a change script that updates the target accordingly. However, be careful, as some changes can be very expensive (such as adding a new column in the middle of a table that already has millions of rows).
The article I linked to shows you how you can view the generated change script and see what happens. Repeated here is a snippet that does it:
"%ProgramFiles(x86)%\Microsoft SQL Server"\110\DAC\bin\sqlpackage.exe
/Action:Script
/SourceFile:MyPathAndFileToTheDacPac
/TargetConnectionString:"Server=MyTargetInstance;Database=MyTargetDatabase;Integrated Security=SSPI;"
/OutPutPath:"MyPathAndFile.sql"
Using DACPACs and Database Projects (in SSDT, not in SQL Server Management Studio) is now the preferred way of pushing database changes, as it is less error-prone than manually redesigning tables using the table designer (which will drop, recreate, and repopulate tables if you do things like add non-terminal columns to existing tables).
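For completeness, a publishing counterpart to the Script snippet above might look like the following (a hedged sketch: the file and instance names mirror the placeholders used above, and BlockOnPossibleDataLoss is a standard SqlPackage publish property that keeps a deployment from running if it would lose data):
"%ProgramFiles(x86)%\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe"
/Action:Publish
/SourceFile:MyPathAndFileToTheDacPac
/TargetConnectionString:"Server=MyTargetInstance;Database=MyTargetDatabase;Integrated Security=SSPI;"
/p:BlockOnPossibleDataLoss=True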
I'm not too familiar with it, but I have played around with some database uploads myself. From what I gathered, the dacpac has settings that can be used and uploaded. I found these instructions:
• To create a database project based on a dacpac, create a new SQL Server Database Project in Visual Studio. Then right-click on the project in Solution Explorer, choose "Import -> Data-tier Application (*.dacpac)" and select your dacpac. That will convert the contents of the dacpac into scripts in the project, and if you choose "Import database settings" the database options will be set based on the settings in the dacpac.
A dacpac packages a data-tier application (DAC): "A data-tier application (DAC) is a logical database management entity that defines all of the SQL Server objects - like tables, views, and instance objects, including logins - associated with a user's database. A DAC is a self-contained unit of SQL Server database deployment that enables data-tier developers and database administrators to package SQL Server objects into a portable artifact called a DAC package, also known as a DACPAC." (from https://msdn.microsoft.com/en-us/library/ee210546.aspx)
hope this helps...

Azure continuous deployment from GitHub and database upgrades

I have a Web application that I usually deployed using Web Deploy directly from Visual Studio (whatever branch I am currently using in VS - normally master). But now I'm introducing a second web app on Azure that will be built from the same repo but a different branch. To make things simpler, I will configure both web apps on Azure to integrate directly with GitHub and associate each with a specific branch.
I also added two additional web.config files, Web.Primary.config and Web.Secondary.config, and configured app settings in the Azure portal of each web app by adding the additional value SCM_BUILD_ARGS and setting it to
SCM_BUILD_ARGS=-p:PublishProfile=Primary // in primary web app
SCM_BUILD_ARGS=-p:PublishProfile=Secondary // in secondary web app
which I understand will transform the correct config file with the specific external services' configuration (DB connection, mail server, etc.).
Now, the additional step that I would like to include in continuous deployment is to run a set of SQL scripts from my repo that I used to manually upgrade the database during Web Deploy in VS. The individual scripts each perform a specific database upgrade step:
backup current tables - backup creates a set of Backup_OriginalTableName tables that are copied from existing ones and populated with existing data
drop whole DB model - all non-backup objects are being dropped from procedures, functions, types, views, tables...
create model - creates all tables, views and indices
create user types
create user functions
create stored procedures
restore data to new tables from backup tables - this step may occasionally break if we introduce new non-nullable columns in the new model that don't have defaults defined on them; I will somehow have to mitigate this problem by adding an additional script that will add missing columns to backup tables and give them some defaults, but that's a completely different issue.
I also used to have a set of batch files (BAT) in my VS solution that simply executed sqlcmd against a specific database instance and ran these scripts in the predefined order above (a rough sketch of one such batch follows the list). Hence I had these batches:
Recreate Local.bat - this one used additional SQL scripts to not restore from backup but rather to recreate an empty DB with only lookup tables being populated and some default data for development purposes (like predefined test users)
Restore Local.bat - I used this script to simply restore database from backup tables discarding any invalid data I may have created while debugging/testing since last DB recreate/upgrade/restore
Upgrade Local.bat - upgrade local development DB executing scripts mentioned above
Upgrade Production.bat - upgrade production DB on Azure executing scripts mentioned above
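For illustration, a rough sketch of what such an "Upgrade Local.bat" might contain, with placeholder server, database and script names following the step order described above:
SET SERVER=.\SQLEXPRESS
SET DB=MyAppDb
REM -b makes sqlcmd stop with an error code as soon as any script fails
sqlcmd -S %SERVER% -d %DB% -E -b -i "01_BackupTables.sql"
sqlcmd -S %SERVER% -d %DB% -E -b -i "02_DropModel.sql"
sqlcmd -S %SERVER% -d %DB% -E -b -i "03_CreateModel.sql"
sqlcmd -S %SERVER% -d %DB% -E -b -i "04_CreateUserTypes.sql"
sqlcmd -S %SERVER% -d %DB% -E -b -i "05_CreateUserFunctions.sql"
sqlcmd -S %SERVER% -d %DB% -E -b -i "06_CreateStoredProcedures.sql"
sqlcmd -S %SERVER% -d %DB% -E -b -i "07_RestoreDataFromBackup.sql"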
So, to support the whole deployment process I was doing manually in VS, I would now also like to execute these scripts against the specific Azure SQL DB during continuous deployment. I suppose I should run them right after code deployment, because if that fails, the DB shouldn't be upgraded either.
I'm a bit confused about where and how to do this. Can I configure it somewhere in the Azure portal? I was looking for resources on the Web, but I can't seem to find any relevant information on how to add deployment steps that execute these scripts. I think this is an everyday scenario, as it's hard to think of web apps that don't require databases these days.
Maybe it's just my process for DB upgrade/deployment that is wrong, so let me also know if there is any other normal way to do DB upgrade/migration with continuous deployment on Azure... I may change my process to accommodate this.
Note 1: I'm not using Entity Framework or any other full blown ORM. I'm rather using NPoco and all my DB logic is built in SPs that DAL is using.
Note 2: I'm aware of the recently introduced staging capabilities of Azure, but my apps are on a cheaper plan that doesn't support staging, and I want to keep it this way, as I may be introducing additional web apps along the way that will use additional code branches and resources (DB, mail, etc.).
It sounds to me like your db project is a good candidate for SSDT and inclusion in source control. You can create a MyDB.sqlproj that builds your db as a dacpac, and then you can use SqlPackage.exe Publish to accomplish your deployment to Azure.
We recently brought our databases under source control and follow a similar process to build and automatically deploy them (but not to a SQL Azure DB). We've found the source control, SSDT tooling support, and automated deployment options to be worth the effort of setting up and maintaining our project this way.
This SO question has some good notes for Azure deployment of a dacpac using SSDT:
How to publish DACPAC file to a SQL Server database project via SQLPackage.exe of SSDT?
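A minimal sketch of the suggested SqlPackage publish step against an Azure SQL database (the server, database, credentials and dacpac path below are placeholders; the dacpac is whatever your .sqlproj build produces):
REM Publish the built dacpac to the target Azure SQL database.
SqlPackage.exe /Action:Publish /SourceFile:"MyDB\bin\Release\MyDB.dacpac" /TargetConnectionString:"Server=tcp:myserver.database.windows.net,1433;Database=MyDb;User ID=myuser;Password=mypassword;Encrypt=True;"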

Proper structure of ASP.NET website and database in Visual Studio

My main problem is: where does the database go?
The project will be on SVN and is developed using the ASP.NET MVC repository pattern. Where do I put the SQL Server database (MDF file)? If I put it in App_Data, then my other team mates can check out the source and database and run it with the database deployed in the VS instance.
The problems with this method are:
I cannot use SQL Management Studio with this database.
Most web hosts require me to deploy the database using their UI or SQL Management Studio. Putting it in App_Data makes no sense in that case.
Connection String has to be edited each time I'm moving from testing locally to testing on the web host.
If I create the database using SQL Management studio, my problems are:
How do I keep this consistent with source control (team mates have to re-script the DB if the schema changes)?
Connection string again. (I'd like to automatically use the right string when on the production server.)
Is there a solution to all my problems above? Maybe some pattern or tools that I am missing?
Basically your two points are correct - unless you're working off a central database, everyone will have to update their database when changes are made by someone else. If you're working off a central database, you can also get into issues where a database change is made (e.g. a column dropped) and the corresponding source code isn't checked in. Then you're all dead in the water until the source code is checked in, or the database is rolled back. Using a central database also means developers have no control over when database schema changes are pushed to them.
We have the database installed on each developer's machine (especially good since we target different DBs, each developer has one of the supported databases giving us really good cross platform testing as we go).
Then there is the central 'development' database which the 'development' environment points to. It is built by continuous integration on each check-in, and upon a successful build/test it is published to development.
Changes that developers make to the database schema on their local machine need to be checked into source control. They are database upgrade scripts that make the required changes to the database from version X to version Y. The database is versioned. When a customer upgrades, these database scripts are run on their database to bring it up from their current version to the required version they're installing.
These dbpatch files are stored in the following structure:
./dbpatches
    ./23
        ./common
            ./CONV-2345.dbpatch
        ./pgsql
            ./CONV-2323.dbpatch
        ./oracle
            ./CONV-2323.dbpatch
        ./mssql
            ./CONV-2323.dbpatch
In the above tree, version 23 has one common dbpatch that is run on any database (is ANSI SQL), and a specific dbpatch for the three databases that require vendor specific SQL.
We have a database update script that developers can run which runs any dbpatch that hasn't been run on their development machine yet (irrespective of version - since multiple dbpatches may be committed to source control during a single version's development).
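A stripped-down sketch of such an update helper (paths and database names are placeholders; a real version would record applied patches in a version table and skip them, which this omits) might simply run every patch for the current version with sqlcmd:
REM Run the common patches and the SQL Server-specific patches for version 23 in turn.
FOR %%F IN (dbpatches\23\common\*.dbpatch dbpatches\23\mssql\*.dbpatch) DO sqlcmd -S .\SQLEXPRESS -d MyAppDb -E -b -i "%%F"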
Connection strings are maintained in NHibernate.config; if present, NHibernate.User.config is used instead, and NHibernate.User.config is excluded from source control. Each developer has their own NHibernate.User.config, which points to their local database and sets the appropriate dialects, etc.
When being pushed to development we have a NAnt script which does variable substitution in the config templates for us. This same script is used when going to staging as well as when doing packages for release. The NAnt script populates a templates config file with variable values from the environment's settings file.
Use Management Studio or Visual Studio's Server Explorer. App_Data isn't used much "in the real world".
This is always a problem. Use a tool like SQL Compare from Redgate or the built-in database comparison tools of Visual Studio 2010.
Use Web.Config transformations to automatically update the connection string.
I'm not an expert by any means but here's what my partner and I did for our most recent ASP.NET MVC project:
Connection strings were always the same since we were both running SQL Server Express on our development machines, as were our staging and production servers. You can just use a dot instead of the computer name (e.g. ".\SQLEXPRESS" or ".\SQL_Named_Instance").
Alternatively you could also use web.config transformations for deploying to different machines.
As far as the database itself, we just created a "Database Updates" folder in the SVN repository and added new SQL scripts when updates needed to be made. I always thought it was a good idea to have an organized collection of database change scripts anyway.
A common solution to this type of problem is to have the database versioning handled in code rather than storing the database itself in version control. The code is typically executed on app_start but could be triggered in other ways (build/deploy process). Then developers can run their own local databases or use a shared development database. The common term for this is called database migrations (migrating from one version to the next). Here is a stackoverflow question for .net tools/libraries to make this easier: https://stackoverflow.com/questions/8033/database-migration-library-for-net
This is the only way I would handle this on projects with multiple developers. I've used this successfully with teams of over 50 developers and it's worked great.
The Red Gate solution would be to use SQL Source Control, which integrates into SSMS. It maintains a SQL scripts folder structure in source control, which you can keep in the same folder/repository that you keep your app code in.
http://www.red-gate.com/products/SQL_Source_Control/
