A similar question about comparing a TEST DACPAC (ready for release) with a Production DACPAC, with the intent of deploying to the production DB server, was asked on the SQL Server forums two years ago.
Apparently someone from MS suggested that this is NOT recommended. Is that still the case? We were hoping to automate our deployments and achieve continuous build and deployment using DACPAC comparisons.
If you think using DACPACs this way is not recommended, please explain why, and what would you suggest instead?
Link to the original SQL forum question:
https://social.msdn.microsoft.com/Forums/sqlserver/en-US/a1e5fb60-3283-4acc-b793-cb28e327dd39/using-dacpac-files-in-an-integrated-deployment-process?forum=ssdt
Comparing dacpac to dacpac is okay, but more things can go wrong when doing this. For example, the target platform of the "production dacpac" could be different from the actual platform of your production database, which could result in the generated script containing T-SQL that doesn't work on your production server. Or the database options specified in the "production dacpac" may not match the actual database options of your production database, which can again result in the generated script containing T-SQL that won't work on your production database.
Worst of all, the "production dacpac" might not be equivalent to the production database -- i.e. some objects can exist in the production database that aren't in the "production dacpac" or vice-versa -- which could result in unintended side effects like schema or data loss.
This is why, in general, we recommend deploying or generating a deployment script by using the production database as the target directly instead of dacpac to dacpac comparison.
You can also register the database as a DAC, which lets you detect drift before releasing your code. If drift is detected you can either:
Retrofit the changes made in production back into your source code, or
Accept that the change will be overwritten, which means you need to re-register the database as a DAC and then deploy.
By having a standard deployment process from dev to prod you can use drift detection to see if someone has done something they shouldn't have, and you can trust that the environment hasn't changed. I wouldn't suggest doing a dacpac vs dacpac compare.
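For illustration, once a database is registered as a DAC you can see the registration, and the version recorded at registration time, in msdb. A minimal T-SQL sketch, assuming SQL Server 2008 R2 or later (the drift report itself is produced by the DAC tooling, not by this query):

    -- msdb.dbo.sysdac_instances lists every database registered as a data-tier application
    SELECT instance_name,   -- the database the DAC was registered against
           type_name,       -- the DAC application name
           type_version,    -- the version recorded at registration time
           date_created
    FROM   msdb.dbo.sysdac_instances
    ORDER BY instance_name;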
Related
I'm working on an ancient project made in VS2005 that deploys a database project to SQL Server 2005.
The project has a database name configured in the Target Database field in the Build tab of the project properties. Testing the connection confirms that the settings are correct.
I usually overwrite this database using SSMS with backups downloaded from different production servers, then rebuild the database project to generate an SQL script with the statements required to apply recent schema changes to the production DB.
I'm not sure what might have changed, but recently it started to generate a different DB named like this: ProjectName_DB_GUID, interfering with the creation of the diff script.
It also deleted many rows in some tables, even though "Block Incremental Deployment if data loss might occur" was checked.
This apparently happens only with some of the downloaded databases, but so far I haven't been able to pinpoint the exact cause.
What might I be missing here? What can I try to diagnose the problem and restore the original behavior?
I have been working on a project and have gotten it through the first stage. However, the requirements ended up changing and I have to add new tables and redo some of the foreign key references in the DB.
The problem is my lack of knowledge about applying this kind of change to a staging and then a production database once development is done on the dev database.
What are some strategies for migrating database schema changes and maintaining data in the database?
About as far as my knowledge goes is to open up SQL Server Management Studio and start adding tables manually. I know this is probably a bad way to do it, so I'm looking for how to do it properly, while accepting that I probably started out wrong.
For maintaining schema changes you can use ApexSQL Diff, a SQL Server and SQL Azure schema comparison and synchronization tool, and for maintaining data in the database you can use ApexSQL Data Diff, a SQL Server and SQL Azure data comparison and synchronization tool.
Hope this helps
Disclaimer: I work for ApexSQL as a Support Engineer
You need something called a "kit". Obviously, if you are maintaining some kind of source control, all the scripts for the changes you make in the development environments should be kept in the source control tool.
Once you are done with all the scripts/changes that you deem certified to move to the next higher environment, prepare the kit with all these scripts in folders (ideally categorized as Procedures, Tables, Functions, Bootstraps), and then have a batch file that executes the scripts in the kit in a particular order using the OSQL command-line utility.
Have separate batch files for UAT/staging/production so that you can just double-click the batch file to execute the kit against the appropriate server. Check the OSQL options.
This way all your environments are in sync!
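As a rough sketch of what such a kit runner could look like, the kit's master script can pull the individual files in with the :r include directive that both OSQL and sqlcmd understand (all file and folder names below are invented for illustration):

    -- deploy_kit.sql: run with  osql -S <server> -d <database> -E -i deploy_kit.sql
    -- (sqlcmd accepts the same switches and the same :r directive)
    :r C:\Kit\Tables\001_CreateCustomers.sql
    :r C:\Kit\Tables\002_CreateOrders.sql
    :r C:\Kit\Procedures\001_usp_GetCustomer.sql
    :r C:\Kit\Bootstraps\001_SeedLookupValues.sql

The numeric prefixes make the intended execution order explicit, which matters because later scripts usually depend on objects created by earlier ones.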
I typically use something like the SQL Server Publishing Wizard to produce SQL scripts of the changes. That is a rather simple and easy approach. The major downside with that tool is that the produced script will drop and recreate tables that are not changed but are used by procedures that have changed (and I can't understand why), so there is some manual labour involved in going through the script and removing things that don't need to be there.
Note that you don't need to download and install this tool; you can launch it from within Visual Studio. Right-click on a connection in the Server Explorer and select "Publish to Provider" in the context menu.
Red Gate SQL Compare and SQL Data Compare all the way. Since my company bought them, they have saved me tons of time staging our databases from DEV to TEST to ACCEPTANCE to PRODUCTION.
And you can have it synchronize with a scripts folder too for easy integration in a source control system.
http://www.red-gate.com
You might want to check out a tool like Liquibase: http://liquibase.org/
You can use Visual Studio 2015. Go to Tools => SQL Server => New Schema Comparison.
Step 1) Select the source and target databases, then click the Compare option.
Step 2) Once the comparison has completed, click the Generate Script icon (Shift+Alt+G). This generates the commit script.
Step 3) To generate a rollback script for the database changes, just swap the databases from step 1.
There are some tools available to help you with that.
If you have Visual Studio Team edition, check out database projects (aka DataDude, aka Visual Studio Team for Database Professionals). See here and here.
It allows you to generate a model from the dev/integration database and then (for many, but not all cases) automatically create scripts which update your prod database with the changes you made to dev/integration.
For VS 2008, make sure you get the GDR2 patches.
We have found the best way to push changes is to treat database changes like code. All changes are in scripts, they are in source control, and they are part of a version. Nothing is ever, under any circumstances, pushed to prod that is not scripted and in source control. That way you don't accidentally push changes that are in dev but not yet ready to be pushed to prod. Further, you can restore prod data to the dev box and rerun all the scripts not yet pushed, and you have fresh data with all the dev work preserved. This also works great when you have lookup values in tables that are changing that you don't want pushed to prod until other things move as well. Script the insert and put it with the rest of the code for the version.
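For the lookup-value case, a rerunnable insert script might look like this sketch (table name and values are invented; the VALUES row constructor needs SQL Server 2008 or later, so use UNION ALL SELECTs on older versions):

    -- Only the missing rows are inserted, so rerunning on every deployment is safe
    INSERT INTO dbo.OrderStatus (StatusCode, Description)
    SELECT v.StatusCode, v.Description
    FROM (VALUES ('NEW',  'Newly created'),
                 ('SHIP', 'Shipped')) AS v (StatusCode, Description)
    WHERE NOT EXISTS (SELECT 1 FROM dbo.OrderStatus s
                      WHERE s.StatusCode = v.StatusCode);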
It's nice to use those tools to do a compare to see if something is missed in the scripts, but I would NEVER rely on them alone. Far too much risk of pushing something "not yet ready for prime time" to prod.
A good database design tool (such as Sybase Powerdesigner) will allow you to create the design changes to the data model, then generate the code to implement those changes. You can then store and run the code as you choose. This tool should also be able to do reverse engineering when you inherit a database you didn't build.
Finding all the changes between development and production is often difficult even in an organized, well-documented environment. Idera has a tool for SQL Server which will detect structural differences between your development and production database and another tool which detects changes in the data. In fact, I often use these to go the other direction and sync development with production to start a new project.
Say I have a website and a database for that website hosted locally on my computer (for development), and another database hosted (for production); i.e. first I make the changes on the dev DB and then I apply the changes to the prod DB.
What is the best way to transfer the changes that I did on the local database to the hosted database?
If it matters, I am using MS Sql Server (2008)
The correct way to do this with Visual Studio and SQL Server is to add a Database Project to the web app solution. The database project should have SQL files that can recreate the entire database completely on a new server, along with all the necessary tables, procedures, users and roles.
That way, they are included in the source control for all the rest of the code as well.
There is a Changes sub-folder in the Database Project where I put the SQL files that apply any new alterations or additions to the database for subsequent versions.
The SQL in the files should be written with proper "if exists" blocks such that it can be run safely multiple times on an already updated database without error.
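A sketch of that style, with invented object names; each statement checks before acting, so rerunning the file is harmless:

    -- add a column only if it is not already there
    IF NOT EXISTS (SELECT 1 FROM sys.columns
                   WHERE object_id = OBJECT_ID(N'dbo.Customer')
                     AND name = N'MiddleName')
    BEGIN
        ALTER TABLE dbo.Customer ADD MiddleName nvarchar(50) NULL;
    END
    GO
    -- drop-and-recreate is the simplest rerunnable pattern for procedures
    IF OBJECT_ID(N'dbo.usp_GetCustomer', N'P') IS NOT NULL
        DROP PROCEDURE dbo.usp_GetCustomer;
    GO
    CREATE PROCEDURE dbo.usp_GetCustomer @Id int AS
        SELECT * FROM dbo.Customer WHERE CustomerId = @Id;
    GO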
As a rule, you should never make your changes directly in the database - instead modify the SQL script in the project and apply it to the database to make sure your source code (the SQL files) is always up to date.
We do this in the (Ruby on) Rails world by writing "migrations," which capture the changes you make to the DB structure at each point. These are run with a migration tool (a task for rake), which also writes to a DB table so it knows whether a particular migration has been run or not.
You could make a structure like this for your dev platform (.Net?), but I think that in other answers to this question people will suggest available tools for handling database versioning in your development platform, or perhaps for your specific DB.
I don't know any of these, but check out this list. I see a lot of paid tools out there, but there must be something free. Also check this out.
I migrate changes via change scripts written by developers once they have tested/verified their changes (the exception being moving large data). All scripts are stored in a source control system and can be verified by DBAs.
It is a manual, sometimes time-consuming, but effective, safe and controlled process.
Databases are too vital to copy from dev.
There are tools to help create/verify these scripts.
See http://www.red-gate.com/
I have used their tools to compare 2 databases to create scripts.
Brian
If the changes are small, I sometimes make them by hand. For larger changes, I use Red Gate's SQL Compare to generate change scripts. These are hand-verified and run in the QA environment first to make sure they don't break anything. For large changes, we run a special backup prior to making the change both in QA and in production.
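That pre-change backup can be as simple as the following sketch (database name and path are invented; COPY_ONLY, available on SQL Server 2005 and later, keeps it out of the regular backup chain):

    BACKUP DATABASE ProductionDB
    TO DISK = N'E:\Backups\ProductionDB_PreRelease.bak'
    WITH COPY_ONLY,   -- don't disturb the scheduled backup sequence
         INIT,        -- overwrite any older file at that path
         CHECKSUM, STATS = 10;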
We used to use the approach provided by Ron. It makes sense for a big project with a dedicated team of DBAs. But if you do not have dedicated developers who write code only for the DB, this approach is expensive in time and resources.
The approach of using Red Gate DB compare is also not great: you still have to do a lot of manual work, and you can skip a step by mistake.
Something better was needed. That is the reason we built the "Agile DB Recreation/Import/Reverse/Export tool".
The tool is free.
Advantages: your developers use whatever tools they prefer to develop the dev DB. Then they run DB RIRE, which reverse-engineers the DB (tables, views, stored procedures, etc.) and exports the data into XML files. The XML files can be kept in any code repository system.
The second step is to run DB RIRE once more to generate difference scripts between the structure and data in the XML files and those in the production DB.
Of course you can run as many iterations as you need.
I have read lots of posts about the importance of database version control. However, I could not find a simple way to check whether a database is in the state it should be.
For example, I have a database with a table called "Version" (the version number is stored there). But the database can be accessed and edited by developers without the version number changing. If, for example, a developer updates a stored procedure and does not update Version, the database state is no longer in sync with the version value.
How do I track those changes? I do not need to track what changed, only to check whether the database tables, views, procedures, etc. are in sync with the database version saved in the Version table.
Why do I need this? When deploying, I need to check that the database is "correct". Also, not all tables or other database objects need to be tracked. Is it possible to check without using triggers? Is it possible without 3rd-party tools? Do databases have checksums?
Let's say we use SQL Server 2005.
Edited:
I think I should provide a bit more information about our current environment: we have a "baseline" with all the scripts needed to create the base version (including data objects and "metadata" for our app). However, there are many installations of this base version with some additional database objects (additional tables, views, procedures, etc.). When we make a change to the base version we also have to update some of the installations (not all); at that time we have to check that the base is in the correct state.
Thanks
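On the "do databases have checksums?" point: there is no built-in whole-schema checksum, but a rough fingerprint can be computed from the system catalog. A sketch that works on SQL Server 2005; note CHECKSUM can collide, so treat a matching value as "probably unchanged", not proof:

    -- combine per-object checksums of names, types and module definitions
    SELECT CHECKSUM_AGG(CHECKSUM(s.name, o.name, o.type, m.definition)) AS SchemaFingerprint
    FROM sys.objects o
    JOIN sys.schemas s ON s.schema_id = o.schema_id
    LEFT JOIN sys.sql_modules m ON m.object_id = o.object_id  -- NULL for tables
    WHERE o.is_ms_shipped = 0;

Store the value alongside the Version row at deployment time; if a later run produces a different number, something in the schema changed without the version being bumped.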
You seem to be breaking the first and second rule of "Three rules for database work". Using one database per developer and a single authoritative source for your schema would already help a lot. Then, I'm not sure that you have a Baseline for your database and, even more important, that you are using change scripts. Finally, you might find some other answers in Views, Stored Procedures and the Like and in Branching and Merging.
Actually, all these links are mentioned in this great article from Jeff Atwood: Get Your Database Under Version Control. A must read IMHO.
We use DBGhost to version control the database. The scripts to create the current database are stored in TFS (along with the source code) and then DBGhost is used to generate a delta script to upgrade an environment to the current version. DBGhost can also create delta scripts for any static/reference/code data.
It requires a mind shift from the traditional method but is a fantastic solution which I cannot recommend enough. Whilst it is a 3rd party product it fits seamlessly into our automated build and deployment process.
I'm using a simple VBScript file based on this codeproject article to generate drop/create scripts for all database objects. I then put these scripts under version control.
So to check whether a database is up-to-date or has changes which were not yet put into version control, I do this:
get the latest version of the drop/create scripts from version control (subversion in our case)
execute the SqlExtract script for the database to be checked, overwriting the scripts from version control
now I can check with my subversion client (TortoiseSVN) which files don't match with the version under version control
now either update the database or put the modified scripts under version control
You have to restrict access to all databases and only give developers access to a local database (where they develop) and to the dev server where they do integration. The best thing would be for them to only have access to their dev area locally and perform integration tasks with an automated build. You can use tools like Red Gate's SQL Compare to do diffs on databases. I suggest that you keep all of your changes under source control (.sql files) so that you will have a running history of who did what and when, and so that you can revert db changes when needed.
I also like the devs to be able to run a local build script to reinitialize their local dev box. This way they can always roll back. More importantly, they can create integration tests that test the plumbing of their app (repository and data access) and the logic stashed away in stored procedures in an automated way. Initialization is run (resetting the db), integration tests are run (creating fluff in the db), then reinitialization puts the db back in a clean state, and so on.
If you are an SVN/nant style user (or similar) with a single branch concept in your repository then you can read my articles on this topic over at DotNetSlackers: http://dotnetslackers.com/articles/aspnet/Building-a-StackOverflow-inspired-Knowledge-Exchange-Build-automation-with-NAnt.aspx and http://dotnetslackers.com/articles/aspnet/Building-a-StackOverflow-inspired-Knowledge-Exchange-Continuous-integration-with-CruiseControl-NET.aspx.
If you are a perforce multi branch sort of build master then you will have to wait till I write something about that sort of automation and configuration management.
UPDATE
#Sazug: "Yep, we use some sort of multi branch builds when we use base script + additional scripts :) Any basic tips for that sort of automation without full article?" There are most commonly two forms of databases:
you control the db in a new non-production type environment (active dev only)
a production environment where you have live data accumulating as you develop
The first setup is much easier and can be fully automated from dev to prod, including rolling back prod if need be. For this you simply need a scripts folder where every modification to your database is maintained in a .sql file. I don't suggest that you keep a tablename.sql file and then version it like you would a .cs file, where updates to that sql artifact are made in the same file over time, given that sql objects are so heavily dependent on each other: when you build up your database from scratch your scripts may encounter a breaking change. For this reason I suggest that you keep a separate, new file for each modification, with a sequence number at the front of the file name, for example something like 000024-ModifiedAccountsTable.sql. Then you can use a custom task, something out of NAntContrib, or a direct execution of one of the many ??SQL.exe command-line tools to run all of your scripts against an empty database, from 000001-fileName.sql through to the last file in the updateScripts folder. All of these scripts are then checked in to your version control. And since you always start from a clean db, you can always roll back if someone's new sql breaks the build.
In the second environment, automation is not always the best route, given that you might impact production. If you are actively developing against/for a production environment then you really need a multi-branch/environment setup so that you can test your automation well before you actually push against a prod environment. You can use the same concepts as stated above. However, you can't really start from scratch on a prod db, and rolling back is more difficult. For this reason I suggest using Red Gate SQL Compare or similar in your build process. The .sql scripts are checked in for updating purposes, but you need to automate a diff between your staging db and prod db prior to running the updates. You can then attempt to sync changes and roll back prod if problems occur. Also, some form of backup should be taken prior to an automated push of sql changes. Be careful when doing anything without a watchful human eye in production! If you do true continuous integration in all of your dev/qual/staging/performance environments and then have a few manual steps when pushing to production... that really isn't that bad!
First point: it's hard to keep things in order without "regulations".
Take your example: developers changing anything without notice will lead you into serious problems.
Anyhow, you say "without using triggers".
Any specific reason for this?
If not, check out DDL triggers. Such triggers are the easiest way to check whether something happened.
And you can even log WHAT was going on.
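A minimal sketch of such a trigger (the log table and trigger names are invented); it records every database-level DDL statement along with who ran it:

    CREATE TABLE dbo.SchemaChangeLog (
        LogId     int IDENTITY(1,1) PRIMARY KEY,
        EventTime datetime NOT NULL DEFAULT GETDATE(),
        LoginName sysname  NOT NULL,
        EventData xml      NOT NULL   -- full details: statement text, object, etc.
    );
    GO
    CREATE TRIGGER trg_LogSchemaChanges
    ON DATABASE
    FOR DDL_DATABASE_LEVEL_EVENTS    -- fires on CREATE/ALTER/DROP in this database
    AS
    BEGIN
        INSERT INTO dbo.SchemaChangeLog (LoginName, EventData)
        VALUES (ORIGINAL_LOGIN(), EVENTDATA());
    END;
    GO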
Hopefully someone has a better solution than this, but I do it using a couple of methods:
Have a "trunk" database, which is the current development version. All work is done here as it is being prepared to be included in a release.
Every time a release is done:
The last release's "clean" database is copied to a new one, e.g. "DB_1.0.4_clean".
SQL Compare is used to copy the changes from trunk to 1.0.4_clean; this also allows checking exactly what gets included.
SQL Compare is used again to find the differences between the previous and new releases (changes from DB_1.0.4_clean to DB_1.0.3_clean), which creates a change script "1.0.3 to 1.0.4.sql".
We are still building the tool to automate this part, but the goal is that there is a table to track every version the database has been at, and if the change script was applied. The upgrade tool looks for the latest entry, then applies each upgrade script one-by-one and finally the DB is at the latest version.
I don't have this problem, but it would be trivial to protect the _clean databases from modification by other team members. Additionally, because I use SQL Compare after the fact to generate the change scripts, there is no need for developers to keep track of them as they go.
We actually did this for a while, and it was a HUGE pain. It was easy to forget, and at the same time, there were changes being done that didn't necessarily make it - so the full upgrade script created using the individually-created change scripts would sometimes add a field, then remove it, all in one release. This can obviously be pretty painful if there are index changes, etc.
The nice thing about SQL Compare is that the script it generates is wrapped in a transaction, and if it fails, it rolls the whole thing back. So if the production DB has been modified in some way, the upgrade will fail, and the deployment team can then run SQL Compare on the production DB against the _clean db and manually fix the changes. We've only had to do this once or twice (damn customers).
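For reference, the same all-or-nothing pattern can be sketched by hand in T-SQL (the two statements inside are made-up example changes; TRY/CATCH needs SQL Server 2005 or later):

    BEGIN TRANSACTION;
    BEGIN TRY
        -- example changes; a real script would contain the generated DDL
        ALTER TABLE dbo.Customer ADD LoyaltyPoints int NOT NULL DEFAULT 0;
        EXEC sp_rename 'dbo.Orders.Total', 'OrderTotal', 'COLUMN';
        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;  -- undo everything on any error
        RAISERROR('Upgrade failed; all changes were rolled back.', 16, 1);
    END CATCH;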
The .SQL change scripts (generated by SQL Compare) get stored in our version control system (subversion).
If you have Visual Studio (specifically the Database edition), there is a Database Project that you can create and point it to a SQL Server database. The project will load the schema and basically offer you a lot of other features. It behaves just like a code project. It also offers you the advantage to script the entire table and contents so you can keep it under Subversion.
When you build the project, it validates that the database has integrity. It's quite smart.
On one of our projects we stored the database version inside the database itself.
Each change to the database structure was scripted into a separate sql file which, besides all its other changes, incremented the database version. This was done by the developer who changed the db structure.
The deployment script checked the current db version against the latest change script and applied the sql scripts if necessary.
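A minimal sketch of that convention (table and object names are invented); each change script guards on the expected version before doing its work:

    -- change script 42: assumes a dbo.SchemaVersion table already exists
    DECLARE @current int;
    SELECT @current = MAX(VersionNumber) FROM dbo.SchemaVersion;

    IF @current <> 41  -- this script upgrades version 41 to 42
    BEGIN
        RAISERROR('Wrong database version: expected 41, found %d.', 16, 1, @current);
        RETURN;
    END

    ALTER TABLE dbo.Invoice ADD DueDate datetime NULL;  -- the actual structure change

    INSERT INTO dbo.SchemaVersion (VersionNumber, AppliedAt)
    VALUES (42, GETDATE());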
Firstly, your production database should either not be accessible to developers, or the developers (and everyone else) should be under strict instructions that no changes of any kind are made to production systems outside of a change-control system.
Change-control is vital in any system that you expect to work (Where there is >1 engineer involved in the entire system).
Each developer should have their own test system; if they want to make changes to that, they can, but system testing should be done on a more controlled system-test environment which has the same changes applied as production. If you don't do this, you can't rely on releases working, because they are being tested in an incompatible environment.
When a change is made, the appropriate scripts should be created and tested to ensure that they apply cleanly on top of the current version, and that the rollback works*
*you are writing rollback scripts, right?
I agree with other posts that developers should not have permissions to change the production database. Either the developers should be sharing a common development database (and risk treading on each others' toes) or they should have their own individual databases. In the former case you can use a tool like SQL Compare to deploy to production. In the latter case, you need to periodically sync up the developer databases during the development lifecycle before promoting to production.
Here at Red Gate we are shortly going to release a new tool, SQL Source Control, designed to make this process a lot easier. We will integrate into SSMS and enable the adding and retrieving objects to and from source control at the click of a button. If you're interested in finding out more or signing up to our Early Access Program, please visit this page:
http://www.red-gate.com/Products/SQL_Source_Control/index.htm
I have to agree with the rest of the posts. Database access restrictions would solve the issue in production. Then a versioning tool like DBGhost or DVC would help you and the rest of the team maintain the database versioning.
I want to transfer objects (tables, stored procedures, data, etc.) between two servers (a dev box and a live box) and was wondering what the best approach is.
In SQL Server 2000, you could transfer all objects and data between databases. Now all there is is 'copy data' and 'write a query'. Where has the second option gone?
Both databases are SQL 2005 (with service pack 2). When transferring, primary keys and relationships should be kept intact as well as all the views and other associated data with regards to ASP.NET authentication. Integration Services is not setup up on the live server, so that is not an option.
The only way I can think of is generating scripts, then running them on the other server, but that is more time consuming than the old way (this is how I am doing it now).
If you are willing to pay, I recommend Sql Compare and Sql Data Compare from Red Gate.
Very useful products.
Database Publishing Wizard
http://sqlhost.codeplex.com/
It's a shame you haven't got Integration Services installed as you could use the "Copy Database Wizard". I believe this creates an SSIS package that runs on the destination server.
If you have Visual Studio 2008, you could try the Data comparison and Schema comparison tools.
Your best bet is probably a schema & data comparison tool; there's various tools listed at http://www.mssqltips.com/tip.asp?tip=1069
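If you just need a quick first pass before reaching for a tool, plain T-SQL can list objects that exist in one database but not the other. A sketch, where the database names are placeholders; comparing across two servers would additionally require a linked server:

    -- objects present in Dev but missing from Live (names and types only,
    -- not definitions, so treat it as a coarse check)
    SELECT s.name AS SchemaName, o.name AS ObjectName, o.type_desc
    FROM DevDB.sys.objects o
    JOIN DevDB.sys.schemas s ON s.schema_id = o.schema_id
    WHERE o.is_ms_shipped = 0
    EXCEPT
    SELECT s.name, o.name, o.type_desc
    FROM LiveDB.sys.objects o
    JOIN LiveDB.sys.schemas s ON s.schema_id = o.schema_id
    WHERE o.is_ms_shipped = 0;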
You don't mention the scope of your application or the number of developers, etc., so it is a little hard to make any recommendations. However, if your development consists of multiple concurrent projects and multiple developers and you are copying from a Development to Production I would recommend something like the following:
implement 3 "areas": dev, qa, production.
develop all changes in dev, create all changes in scripts, and use something like CVS or SourceSafe to track changes to all objects
when changes are ready and tested, run your scripts in qa, this will validate your scripts and install procedure
when ready run your scripts and install procedure on production
Note: QA is almost identical to production, except for applied changes waiting for their final production install. Dev contains any work-in-progress changes, extra debug junk, etc. You can periodically restore a production backup onto QA and dev to resync them (just make sure all developers are aware of this and plan accordingly), because (depending on the number of developers) production, QA and dev will start to accumulate more differences over time.
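A sketch of that periodic resync in T-SQL; the paths and logical file names here are invented, so check the real ones with RESTORE FILELISTONLY first:

    RESTORE DATABASE QA_DB
    FROM DISK = N'\\backupshare\Prod_Full.bak'
    WITH MOVE N'Prod_Data' TO N'D:\SQLData\QA_DB.mdf',  -- remap files to QA paths
         MOVE N'Prod_Log'  TO N'E:\SQLLogs\QA_DB.ldf',
         REPLACE,          -- overwrite the existing QA database
         STATS = 10;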