Migration using the model-first approach in Entity Framework

I have set up a system where I have taken the model-first approach, as it made more logical sense for me. Now, whenever I have changes in the model, this is what I currently do:
I use the Generate Database from Model feature of Entity Framework, create a dummy database, and apply those scripts. This first deletes all my tables and data and then rebuilds the database from the latest SQL file generated by Entity Framework.
Next I use Visual Studio's schema compare feature to generate migration scripts for my local database and for the one in production.
I go through the scripts manually and verify them. Once that is done I run the migration scripts on the production instances.
Question: The main problem is that this is really tedious, and since I do it from my local system, connecting to my production databases is very slow and Visual Studio sometimes crashes. Is there a cleaner, more automated approach, so that my laptop is not responsible for running the database migrations against the production instances?

You can try the Database Migration Power Pack - it allows you to create change scripts instead of full database scripts, but behind the scenes it performs the same procedure you are doing by hand. The problem is that the mentioned tool will not work with EF5.
Unfortunately, EF Migrations currently don't support models created through EDMX; at the moment migrations only support the code-first approach.

In a Schema First design I use ApexSQL Diff (quite likely very similar to RedGate's product, perhaps a bit cheaper) - a good 3rd party tool is much easier to use than a VS Database Project and is easy to apply with a script-application tool like RoundHousE.
Using it in a Model First approach can follow the Schema First approach using a cycle of Model‑Schema‑Diff‑Schema‑Model as described in the post; consider these guidelines/notes below to make for a streamlined process. The schema-diff approach does not need to be tedious, slow, or excessively manual.
The current version of the database schema is obtained by applying a sequence of database patches (or DDL/DML scripts).
A tool (we use RoundHousE) automatically applies the scripts as needed. It records which scripts have been applied, so re-applying the same set of scripts is idempotent (a rough sketch of the idea follows these notes).
The diff is done against a local database; this local database can be built up from all the previous change scripts in an automated fashion. This latest local copy is always the diff target for the latest model changes.
The remote/live database is never used as a diff target. The same scripts can be applied later to the test (and then live) databases. Since everything is done the same way, the process is repeatable on all databases.
The only "issue" is that an update that is not well thought out may lead to data that is invalid under new restrictions/constraints. Of course, this was easy to identify, fix, and re-diff before pushing to the live database.
Once a diff is committed to source control it must be applied on the branch. "Undoing" a previously committed change script requires creating a new diff that applies the inverse action; there is no implicit down-version.
We have a [Hg] model branch that effectively acts as a schema lock that must be unified against; this could be viewed as a weak point, but it has worked well with small-team development.
A tool like Huagati DBML/EDMX is used to synchronize the schema back to the model, which is really useful when developing. This little gem really pays for itself and is part of the cycle. When it is employed, it's also easy to "update to a model", or to make schema changes in SSMS (or whatever) and then bring them back over.
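RoundHousE keeps its own internal tracking tables, so you normally never write this plumbing yourself. Purely as an illustration of the idea (the table, script, and column names below are invented, not RoundHousE's actual schema), a hand-rolled version of "record what ran, skip what already ran" might look like this:

    -- Hypothetical tracking table for applied change scripts.
    IF OBJECT_ID('dbo.ScriptsRun', 'U') IS NULL
        CREATE TABLE dbo.ScriptsRun
        (
            ScriptName nvarchar(260) NOT NULL PRIMARY KEY,
            AppliedOn  datetime      NOT NULL DEFAULT GETUTCDATE()
        );
    GO

    -- Each change script only runs if it has not been recorded yet,
    -- which is what makes re-applying the whole sequence idempotent.
    IF NOT EXISTS (SELECT 1 FROM dbo.ScriptsRun WHERE ScriptName = N'0005_add_customer_email.sql')
    BEGIN
        ALTER TABLE dbo.Customer ADD Email nvarchar(256) NULL;  -- the actual change
        INSERT INTO dbo.ScriptsRun (ScriptName) VALUES (N'0005_add_customer_email.sql');
    END
    GO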
The Code First migrations are "OK" (and definitely better than naught!), but I'm only using them because Azure SQL (aka SQL Database) is not supported by advanced diff tooling due to not exposing various sys information. (The diffs can be done locally as per normal, but ApexSQL Diff generates DDL/DML that is not always friendly with Azure SQL - plus, it's a chance for me to learn a slightly different approach :-)
Some advantages of Code First migrations via the Power Pack: you can perform update tasks in C# instead of being limited to DDL/DML (which can be convenient), you get automatic downgrades (although I question their use), you do not need to purchase a 3rd party tool (which can be expensive), integration/deployment to Azure SQL is easier, you are less tied to a specific database vendor (in theory), etc.
While Code First migrations (and automation of such) are a good step forward vs. the absolutely horrid drop-and-recreate approach, I much prefer dedicated SQL tooling when developing.

Related

Given I have to write the migration scripts myself, what value does Flyway provide?

In my situation I use a tool that generates a SQL script containing all of the database init/create statements. How does Flyway provide value beyond what my tool provides? Why should I care to write hand-coded migration scripts to use Flyway?
The question above mixes two things that should be kept separate: the concept of database creation and the concept of migration.
database creation
Given a complete database and an empty database, you can use many tools to generate the scripts needed to recreate the complete database where nothing exists. In Flyway terms, you are just creating a baseline. This isn't the concept of migration at all. Of course, given a V2.0 database, you could take any V1.0 database, blow it away, and install the V2.0 database, but then you've lost your data.
migration
Given a complete V2.0 database and an older V1.0 database, you want the V1.0 database to be "upgraded" to V2.0. In the database world this is called a migration, because the existing V1.0 data needs to be rearranged so that it works with V2.0. Now you need a script that not only creates/alters tables, but also does some ETL (extract the data, transform it so it can be loaded into the new table structures, alter the old database to the new table structures, then load the data back into the database). This may or may not be trivial, depending on the change. You build the script to do it; Flyway manages executing that script.
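As a minimal, hypothetical illustration of such a migration script (the Person table and its columns are invented for the example), here is what the structure change plus the data transformation could look like in T-SQL:

    -- Hypothetical V1 -> V2 migration: split Person.Name into FirstName/LastName.
    -- 1. Alter the structure.
    ALTER TABLE dbo.Person ADD FirstName nvarchar(100) NULL, LastName nvarchar(100) NULL;
    GO
    -- 2. Transform the existing V1 data so it fits the V2 structure.
    UPDATE dbo.Person
    SET FirstName = LTRIM(RTRIM(LEFT(Name, CHARINDEX(' ', Name + ' ') - 1))),
        LastName  = LTRIM(RTRIM(SUBSTRING(Name, CHARINDEX(' ', Name + ' ') + 1, LEN(Name))));
    GO
    -- 3. Drop the old column once its data has been carried over.
    ALTER TABLE dbo.Person DROP COLUMN Name;
    GO

Flyway's job is not to write this script for you; it only makes sure the script runs exactly once, in order, on every database it manages.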
Flyway
Flyway enables the following:
Migration scripts become part of the software asset. They are versioned so that baseline/migration scripts can be maintained in source control in a way that migration becomes a repeatable feature as opposed to "one off" scripting work.
Flyway maintains a meta table in each database it works with, so it knows which scripts have been applied.
Flyway can apply migrations in a completely automated way that removes manual execution errors.
Flyway enables the creation of migration scripts as part of development (much like Test Driven Development makes unit test creation an integral part of development), so that all your database development is captured in the form of migration scripts (rather than building migration scripts as needed for "one off" migrations).
It's common when using Flyway to upgrade any previous version of your application in seconds via a single command. It becomes so easy that the stress of migrating an old DB to a new version goes away, and evolution of the DB becomes easy and routine.
To use Flyway well, you have to change your workflow: every time you develop a change in your developer DB, put the change into a migration script so you can execute those changes against all the older DB versions that exist in the world. And those scripts are checked into your application's source code, making migration a first-class citizen of your software asset, just like any other functionality.
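To make that workflow concrete: Flyway picks up versioned SQL files that follow its V<version>__<description>.sql naming convention, applies any it has not yet recorded in its schema history table when you run flyway migrate, and never re-runs them afterwards. The file below is an invented example (table and column names are made up, shown in SQL Server syntax):

    -- V3__add_customer_status.sql  (hypothetical versioned migration)
    -- Flyway applies this file exactly once, in version order, and records the run in its history table.
    ALTER TABLE dbo.Customer
        ADD Status nvarchar(20) NOT NULL
            CONSTRAINT DF_Customer_Status DEFAULT ('active') WITH VALUES;

A later change goes into a new V4__*.sql file rather than an edit to this one, which is what makes the sequence of scripts a repeatable upgrade path.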
It depends very much on your use case:
If you plan to write a simple application with a database structure that will remain static over the lifetime of the application, it will add very little value.
If the project is expected to have a dynamic design over its lifetime, with changes taking place on the schema, Flyway provides a formal structure in which the changes may be expressed and viewed. This formal structure can also be very helpful if you end up with a larger team working on the project, as Flyway can then become part of the framework for handling things like multi-schema CI work.
One key thing is that you do not have to start with Flyway; you can add it at a later point, normally with limited retooling, as the schema at that point in time simply becomes your baseline to which all future changes are added.

What is the State of the Art for deploying database updates to production databases?

Every shop at which I've worked has had their own cobbled-together, haphazard, poorly understood and poorly maintained method for updating production databases.
I've never seen a consistent method for doing this.
So, in the most recent versions of SQL Server, what is the best practice for updating schema changes and migrating data from a development or test server to a production server?
Is there a 3rd party tool which handles this painlessly?
I'd imagine the ultimate tool would be able to
detect schema changes between two DBs and generate DDL to update one to the other.
include the ability to have custom code which performs custom data migration steps
allow versioning so a v1 db could be updated all the way to a v99 database, running all scripts and migration steps in order.
The three things I've used are:
For schemas
Visual Studio Database Projects. Meh. They are okay, but you still have to do a lot of the work yourself.
Red Gate's SQL Compare and the entire SQL Toolbelt. They've worked pretty hard to make this something you can version control. In practice I've found with databases you are usually trying to get from point A in the version timeline to point B. With binaries, you often just clobber whatever is there with point B (an oversimplification I know, but often true).
http://www.red-gate.com/
xSQL is a good place to start if your system is small and perhaps will remain small:
http://www.xsqlsoftware.com/LiteEdition.aspx
I don't work for or know anyone who works for or get any money from these people. Just telling you what I've done in the past.
For data
Red Gate has SQL Data Compare.
However, if you want something "free" (or included with SQL Server)
I've actually had a lot of success just using BCP and writing a small system that injects and extracts data. Generally when I find myself doing this I ask myself, "Why? If I am changing data, does that mean I am really changing something that is configuration? Can I use a different method here?" But sometimes you can't (maybe it's a legacy system where the original devs thought databases are for everything).
The problem with BCP extracts is that they don't version control very well. There are tricks I've used, like extracting in character mode and stuffing an ORDER BY into the extract query, to try and pull rows out in an order that makes them somewhat more palatable for version control.
For small projects I have used Red Gate to manage schema and data migrations with a lot of success. It is very easy to use and works for most cases.
For larger enterprise systems, for schema and data changes you normally save all the SQL scripts as text files and run them. We also include a rollback script to run in case something goes wrong during the migration. Run this on the UAT server, then the test/staging/pre-prod server, then on production. Keeping a copy of all these files plus their rollback scripts should allow you to move between multiple versions of a DB.
There is also http://code.google.com/p/migratordotnet/ if you're using .NET; it allows you to define these scripts in code. It is very useful if you want to deploy across multiple DBs in an automated way, and makes it easy to say "set my DB to version 23" or "revert my DB to version 5", etc. It works for schema and data, but I would only really use it for a few rows of data.
First you have to realize that the requirements vary a lot between scenarios:
Customers purchase v1 of the product at Costco and install it in their home office or small business. When v2 comes out, the customer purchases a box of the product and installs it on a new computer, exports the data from the v1 installation and imports it into the v2 installation. Even though behind the scenes both v1 and v2 use a SQL Express instance, there is no supported upgrade. Schema changes on the deployed databases are not expected (hidden database, non-technical user) and definitely not supported. The only 'upgrade' path supported is an explicit export/import, which probably uses an XML file or something similar.
A business purchases v1 of the product with a support contract. It installs it on its department SQL Server instance, from where the data is accessed by the purchased product and by many other integration services, reports, etc. When v2 is released, the customer runs the prescribed upgrade procedure; if it runs into problems, it calls the product vendor's customer support line, which walks the customer through specific steps for their deployment. Database schema customizations are expected and often supported, including in upgrade scenarios, but the schema changes are done by the customer (and not known at v2 design time).
A web startup has a database that backs the site. Developers make changes on their personal instances and check in changes. Automated build deployment with continuous integration picks up the changes, deploys them against a test instance, and runs build validation tests. The main branch build can be, at any moment, deployed into production. Production is the one database that backs the site. The structure of the production database is documented and understood 100%; every single change to the production database schema occurs through the build system and QA process. On a side note, this is the scenario most SO users that ask your question have in mind, minus the part about '100% documented and understood'. I give the example of a database backing a web site, but the deployment can really be anything. The gist of it is that there is only one production database (it may include HA/DR copies, and it may consist of multiple actual SQL Server databases), and it is the only database that has to be upgraded.
A successful web startup. Same as above, but the production database has 5 TB of data and 5 minutes of downtime makes the CNN headlines. Schema changes may involve setting up replicas and copying data into new schemas with continuous updates, followed by an online switch of operations to the replica. Schema changes are designed by MCM experts, and deploying a schema change can be a multi-week process.
I could go on with more scenarios. The point is that the requirements of each of these cases are so vastly different that no 'state of the art' can answer all of them. Some scenarios will be perfectly OK with a schema diff deployment tool like vsdbcmd or SQL Compare. Other scenarios will be much better served by explicit versioning scripts. Others might have such specific requirements (e.g. zero downtime) that each upgrade is a project of its own and has to be specifically custom tailored.
One thing is clear, though, across all scenarios: if your shop treats the development database MDF file as 'source' and makes changes to it using the management tools, that is always a major #fail. All changes should be captured explicitly as some sort of source control artifact, and this is why I most favor explicit version scripts, as in Version Control and your Database. But I reckon that the VSDB project support for compile-time schema validation and its ease of refactoring schema objects make a pretty powerful proposition, and VSDB schema compare deployment may be OK.
Another important approach that has to be addressed is code-first schema modeling from tools like EF or LINQ to SQL. It works brilliantly for deploying v1, but fails miserably at any subsequent version. I strongly discourage these approaches.
But to sum up and answer in brief: as of today, the state of the art sucks.
At Red Gate we'd recommend one of two approaches depending on your requirements and how formal you need your processes to be. If you have a development database and simply want to push changes to production, SQL Compare is the tool for the job. A level of versioning can be achieved by using the schema snapshots.
However, if you want full source control benefits, such as team collaboration, sandboxed environments, audit trail, compliance, history, rollback, etc., you should consider SQL Source Control. This links development databases to Team Foundation Server or Subversion.

Databases and DVCS

I've recently asked a question about how suitable a DVCS is for the corporate environment, and that has sparked another question for me.
One of the plus sides to a DVCS seems to be that you can easily branch and try out new things. My problem starts when I begin to think about database changes. I've always found it tricky to get a DB into a VCS and it just sounds like it's going to be even harder with a DVCS.
So, what's the best way to work with databases and a DVCS?
EDIT: I've started looking into Migrator.NET. What do people think of projects like this for easily moving between versions, specifically with experimental branches in your DVCS?
I think the best way to deal with this issue is to work with DB Schemas, not the databases themselves. In this case, each developer would have their own database to develop against.
Here are some of the options available:
Migrations framework within Ruby on Rails.
South for Django, in addition to the schema being defined in the model classes themselves.
Visual Studio 2008 Team System Database Edition for .NET: You define the schema and the tool can do a diff on schema and data to generate scripts to go between different versions of the database.
These may give you some inspiration on how to deal with putting a database in version control. Another benefit of dealing with schemas is that you can more readily implement TDD and Continuous Integration (CI): your TDD/CI environment can build up a new version of the database and then run tests against the newly generated environment.
Version all the scripts you're using to manage your database. If you need to have "in-development" changes to a DB, make them on your personal DB until such time as you "publish" your changes.
Database version control is always the most difficult thing in a multi-developer environment.
Typically each user will have their own DB, which is a chimera of some but not all of the DB changes. When they make changes, they'll need to commit their change scripts, and this gets really awkward. The core problems seem to stem from database changes affecting many aspects of the system, from multiple table changes being dependent on each other, and from how to migrate to the new schema from the old schema. Migrating data to a new schema is typically non-trivial. Often you want to default a column when data is copied to the new schema, but NOT default that column in general for INSERTs, say. These are already difficult production deployment issues, and having to manage the database during development, when the database design could be in major flux, in the same way as a major deployment is a lot more work than you usually need to be doing in development - time that could be better spent ensuring that your database is well designed: constraints, foreign keys, etc.
Because the developers are more likely to step on each other with database changes, we always had a database chokepoint - the developers all developed against the SAME development database and made their changes "live". Then the dev database was version controlled independently. This is not really easy when people are offsite or whatever. Another alternative is to have designated database developers who coordinate changes several developers need to the same table - that doesn't need to be their entire job, but gives you better DB design consistency. Or you can coordinate database revisions so that people become more aware of the DB revs other people are doing and time their changes to wait until a DB rev is available from another developer.
The best way is to not put the database into the VCS in binary form. Period.
If you have a text representation of your database, and you have a special merge tool to resolve conflicts when your database is changed in different branches, then you can start thinking about versioning databases. Otherwise it will be a constant pain in the ass.

Testing and Managing database versions against code versions

As you develop an application, database changes inevitably pop up. The trick, I find, is keeping your database build in step with your code. In the past I have added a build step that executed SQL scripts against the target database, but that is dangerous in so much as you could inadvertently add bogus data or worse.
My question is what are the tips and tricks to keep the database in step with the code? What about when you roll back the code? Branching?
Version numbers embedded in the database are helpful. You have two choices: embedding values into a table that can be queried (which allows versioning multiple items), or having an explicitly named object (such as a table or some such) that you can test for. The table approach is sketched below.
When you release to production, do you have a rollback plan in the event of unexpected catastrophe? If you do, is it the application of a schema rollback script? Use your rollback script to roll back the database to a previous code version.
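For illustration only (all names here are invented, not prescribed by any tool), a minimal version table that release scripts can query and stamp might look like this:

    -- Illustrative version table: deployment scripts read it to decide what to run
    -- and stamp it after each successful upgrade.
    CREATE TABLE dbo.SchemaVersion
    (
        Version    int           NOT NULL,
        ScriptName nvarchar(260) NOT NULL,
        AppliedOn  datetime      NOT NULL DEFAULT GETUTCDATE()
    );
    GO

    -- What version is this database at?
    SELECT MAX(Version) AS CurrentVersion FROM dbo.SchemaVersion;

    -- Stamp a successful upgrade (run as the last step of each upgrade script).
    INSERT INTO dbo.SchemaVersion (Version, ScriptName)
    VALUES (42, N'upgrade_to_v42.sql');

A rollback script could then do the inverse: apply the down-changes and delete the stamp for that version.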
You should be able to create your database from scratch into a known state.
While being able to do so is helpful (especially in the early stages of a new project), many (most?) databases will quickly become far too large for that to be possible. Also, if you have any BLOBs then you're going to have problems generating SQL scripts for your entire database.
I've definitely been interested in some sort of DB versioning system, but I haven't found anything yet. So, instead of a solution, you'll get my vote. :-P
You really do want to be able to take a clean machine, get the latest version from source control, build in one step, and run all tests in one step. Making this fast makes you produce good software faster.
Just like external libraries, database configuration must also be in source control.
Note that I'm not saying that all your live database content should be in the same source control, just enough to get to a clean state. (Do back up your database content, though!)
Define your schema objects and your reference data in version-controlled text files. For example, you can define the schema in Torque format, and the data in DBUnit format (both use XML). You can then use tools (we wrote our own) to generate the DDL and DML that take you from one version of your app to another. Our tool can take as input either (a) the previous version's schema & data XML files or (b) an existing database, so you are always able to get a database of any state into the correct state.
I like the way that Django does it. You build models, and when you run syncdb it applies the models that you have created. If you add a model, you just need to run syncdb again. This would be easy to have your build script do every time you make a push.
The problem comes when you need to alter a table that has already been made. I do not think that syncdb handles that. That would require you to go in and manually alter the table and also add the property to the model, and you would probably want to version that ALTER statement. The models would always be under version control, though, so if you needed to you could get a DB schema up and running on a new box without running the SQL scripts. Another problem with this is keeping track of static data that you always want in the DB.
Rails migration scripts are pretty nice too.
A DB versioning system would be great, but I don't really know of such a thing.
While being able to do so is helpful (especially in the early stages of a new project), many (most?) databases will quickly become far too large for that to be possible. Also, if you have any BLOBs then you're going to have problems generating SQL scripts for your entire database.
Backups and compression can help you there. Sorry - there's no excuse not to be able to get a good set of data to develop against, even if it's just a subset.
Put your database development under version control. I recommend having a look at neXtep Designer:
http://www.nextep-softwares.com/wiki
It is a free GPL product which offers a brand new approach to database development and deployment, by connecting version information with a SQL generation engine which can automatically compute any upgrade script you need to take any version of your database to another. Any existing database can be put under version control through reverse synchronization.
It currently supports Oracle, MySQL and PostgreSQL; DB2 support is under development. It is a full-featured database development environment where you always work on version-controlled elements from a repository. You can publish your updates by simple synchronization during development, and you can generate exportable database deliveries which you can execute on any targeted database through a standalone installer which validates the versions, performs structural checks and applies the upgrade scripts.
The IDE also offers you SQL editors, dependency management, support for modular database model components, data model diagrams, SQL clients and much more.
All the documentation and concepts can be found in the wiki.

Deploying SQL Server Databases from Test to Live

I wonder how you guys manage deployment of a database between 2 SQL Servers, specifically SQL Server 2005.
Now, there is a development database and a live one. As this should be part of a build script (standard Windows batch, though with the current complexity of those scripts I might switch to PowerShell or similar later), Enterprise Manager/Management Studio Express do not count.
Would you just copy the .mdf file and attach it? I am always a bit careful when working with binary data, as this seems to be a compatibility issue (even though development and live should run the same version of the server at all times).
Or - given the lack of "EXPLAIN CREATE TABLE" in T-SQL - do you do something that exports an existing database into SQL scripts which you can run on the target server? If yes, is there a tool that can automatically dump a given database into SQL queries and that runs off the command line? (Again, Enterprise Manager/Management Studio Express do not count.)
And lastly - given the fact that the live database already contains data, the deployment may not involve creating all tables but rather checking the difference in structure and ALTER TABLE the live ones instead, which may also need data verification/conversion when existing fields change.
Now, I hear a lot of great stuff about the Red Gate products, but for hobby projects the price is a bit steep.
So, what are you using to automatically deploy SQL Server Databases from Test to Live?
I've taken to hand-coding all of my DDL (create/alter/delete) statements, adding them to my .sln as text files, and using normal versioning (Subversion in my case, but any revision control should work). This way, I not only get the benefit of versioning, but updating live from dev/stage is the same process for code and database - tags, branches and so on all work the same.
Otherwise, I agree Red Gate is expensive if you don't have a company buying it for you. If you can get a company to buy it for you, though, it really is worth it!
For my projects I alternate between SQL Compare from Red Gate and the Database Publishing Wizard from Microsoft, which you can download for free
here.
The Wizard isn't as slick as SQL Compare or SQL Data Compare but it does the trick. One issue is that the scripts it generates may need some rearranging and/or editing to flow in one shot.
On the up side, it can move your schema and data which isn't bad for a free tool.
Don't forget Microsoft's solution to the problem: Visual Studio 2008 Database Edition. Includes tools for deploying changes to databases, producing a diff between databases for schema and/or data changes, unit tests, test data generation.
It's pretty expensive but I used the trial edition for a while and thought it was brilliant. It makes the database as easy to work with as any other piece of code.
Like Rob Allen, I use SQL Compare / Data Compare by Red Gate. I also use the Database Publishing Wizard by Microsoft. I also have a console app I wrote in C# that takes a SQL script and runs it on a server. This way you can run large scripts with 'GO' commands in them from a command line or in a batch script.
I use Microsoft.SqlServer.BatchParser.dll and Microsoft.SqlServer.ConnectionInfo.dll libraries in the console application.
I work the same way Karl does, by keeping all of my SQL scripts for creating and altering tables in a text file that I keep in source control. In fact, to avoid the problem of having to have a script examine the live database to determine what ALTERs to run, I usually work like this:
On the first version, I place everything during testing into one SQL script and treat all tables as a CREATE. This means I end up dropping and re-adding tables a lot during testing, but that's not a big deal early in the project (since I'm usually hacking at the data I'm using at that point anyway).
On all subsequent versions, I do two things: I make a new text file to hold the upgrade SQL scripts, containing just the ALTERs for that version, and I make the same changes to the original create-a-fresh-database script as well. This way an upgrade just runs the upgrade script, but if we have to recreate the DB we don't need to run 100 scripts to get there.
Depending on how I'm deploying the DB changes, I'll also usually put a version table in the DB that holds the version of the DB. Then, rather than make any human decisions about which scripts to run, whatever code I have running the create/upgrade scripts uses the version to determine what to run.
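As a rough illustration of that "let the version decide" idea (the version table, object names, and version numbers below are all invented), each upgrade script can guard itself so the runner can safely execute every script in order against any database:

    -- Hypothetical guarded upgrade step; assumes a dbo.SchemaVersion(Version, ScriptName) table exists.
    -- The step only runs if the database is still below version 7, then records the new version.
    IF (SELECT ISNULL(MAX(Version), 0) FROM dbo.SchemaVersion) < 7
    BEGIN
        ALTER TABLE dbo.Orders ADD TrackingNumber nvarchar(50) NULL;
        INSERT INTO dbo.SchemaVersion (Version, ScriptName)
        VALUES (7, N'007_add_tracking_number.sql');
    END
    GO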
The one thing this will not help with is if part of what you're moving from test to production is data; but if you want to manage structure and not pay for a nice but expensive DB management package, it's really not very difficult. I've also found it's a pretty good way of keeping mental track of your DB.
If you have a company buying it, Toad from Quest Software has this kind of management functionality built in. It's basically a two-click operation to compare two schemas and generate a sync script from one to the other.
They have editions for most of the popular databases, including of course Sql Server.
I agree that scripting everything is the best way to go and is what I advocate at work. You should script everything from DB and object creation to populating your lookup tables.
Anything you do only in the UI won't translate (especially for changes... not so much for first deployments) and will end up requiring tools like what Red Gate offers.
Using SMO/DMO, it isn't too difficult to generate a script of your schema. Data is a little more fun, but still doable.
In general, I take "Script It" approach, but you might want to consider something along these lines:
Distinguish between development and staging, such that you can develop with a subset of data... for this I would create a tool to simply pull down some production data, or generate fake data where security is a concern.
For team development, each change to the database will have to be coordinated amongst your team members. Schema and data changes can be intermingled, but a single script should enable a given feature. Once all your features are ready, you bundle these up in a single SQL file and run that against a restore of production.
Once your staging has cleared acceptance, you run the single SQL file again on the production machine.
I have used the Red Gate tools and they are great tools, but if you can't afford it, building the tools and working this way isn't too far from the ideal.
I'm using SubSonic's migrations mechanism, so I just have a DLL with classes in sequential order that have two methods, Up and Down. There is a continuous integration/build script hook into NAnt, so that I can automate the upgrading of my database.
It's not the best thing in the world, but it beats writing DDL.
Red Gate SQL Compare is the way to go in my opinion. We do DB deployments on a regular basis, and since I started using that tool I have never looked back.
Very intuitive interface and saves a lot of time in the end.
The Pro version will take care of scripting for the source control integration as well.
I also maintain scripts for all my objects and data. For deploying I wrote this free utility - http://www.sqldart.com. It'll let you reorder your script files and will run the whole lot within a transaction.
I agree with keeping everything in source control and manually scripting all changes. Changes to the schema for a single release go into a script file created specifically for that release. All stored procs, views, etc. should go into individual files and be treated just like .cs or .aspx files as far as source control goes. I use a PowerShell script to generate one big .sql file for updating the programmability stuff.
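One common way to make such per-object programmability scripts safely re-runnable (the answer doesn't specify its exact approach; the procedure and table names here are invented) is the drop-and-recreate pattern, so the same generated .sql file can be applied to a database repeatedly:

    -- Re-runnable script for a single stored procedure (names are illustrative).
    IF OBJECT_ID('dbo.usp_GetCustomerOrders', 'P') IS NOT NULL
        DROP PROCEDURE dbo.usp_GetCustomerOrders;
    GO
    CREATE PROCEDURE dbo.usp_GetCustomerOrders
        @CustomerId int
    AS
    BEGIN
        SET NOCOUNT ON;
        SELECT OrderId, OrderDate, Total
        FROM dbo.Orders
        WHERE CustomerId = @CustomerId;
    END
    GO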
I don't like automating the application of schema changes, like new tables, new columns, etc. When doing a production release, I like to go through the change script command by command to make sure each one works as expected. There's nothing worse than running a big change script on production and getting errors because you forgot some little detail that didn't present itself in development.
I have also learned that indexes need to be treated just like code files and put into source control.
And you should definitely have more than 2 databases - dev and live. You should have a dev database that everybody uses for daily dev tasks. Then a staging database that mimics production and is used to do your integration testing. Then maybe a complete recent copy of production (restored from a full backup), if that is feasible, so your last round of installation testing goes against something that is as close to the real thing as possible.
I do all my database creation as DDL and then wrap that DDL in a schema maintenance class. I may do various things to create the DDL in the first place, but fundamentally I do all the schema maintenance in code. This also means that if you need to do non-DDL things that don't map well to SQL, you can write procedural logic and run it between lumps of DDL/DML.
My DBs then have a table which defines the current version, so one can code a relatively straightforward set of tests (a rough sketch follows the list):
Does the DB exist? If not create it.
Is the DB the current version? If not then run the methods, in sequence, that bring the schema up to date (you may want to prompt the user to confirm and - ideally - do backups at this point).
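The checks themselves boil down to a couple of simple queries against the server; a rough T-SQL illustration of what the maintenance code asks (the database and table names are invented):

    -- 1. Does the database exist? If not, the maintenance code creates it.
    IF DB_ID('MyApp') IS NULL
        CREATE DATABASE MyApp;
    GO

    -- 2. Is the schema at the current version? The maintenance code compares this value with
    --    the version it was built for and, if needed, runs its upgrade methods in sequence.
    SELECT MAX(Version) AS SchemaVersion
    FROM MyApp.dbo.SchemaVersion;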
For a single-user app I just run this in place; for a web app we currently lock the user out if the versions don't match and run a stand-alone schema maintenance app. For multi-user it will depend on the particular environment.
The advantage? Well, I have a very high level of confidence that the schema for the apps that use this methodology is consistent across all instances of those applications. It's not perfect, there are issues, but it works...
There are some issues when developing in a team environment but that's more or less a given anyway!
Murph
I'm currently working on the same thing as you: not only deploying SQL Server databases from test to live, but also covering the whole process from Local -> Integration -> Test -> Production. What makes things easy for me every day is an NAnt task combined with Red Gate SQL Compare. I don't work for Red Gate, but I have to say it is a good choice.
