Data tier applications - Post Deployment - sql-server

This is such a simple thing that even asking here makes me feel stupid, but since I have been stuck on it for a long time, I will ask. I am working on a data-tier application in Visual Studio. I have the usual things like tables, stored procs and some post-deployment data. By default, a data-tier application comes with a Scripts/Post-Deployment folder, and inside it a file called Script.PostDeployment.sql. Just to be a little more organised, I am creating folders inside Post-Deployment called StaticData and TestData. My insert statements for creating data are located inside these folders. Based on this structure, I am adding the following code to my Script.PostDeployment.sql:
/*
Post-Deployment Script Template
--------------------------------------------------------------------------------------
This file contains SQL statements that will be appended to the build script.
Use SQLCMD syntax to include a file in the post-deployment script.
Example: :r .\myfile.sql
Use SQLCMD syntax to reference a variable in the post-deployment script.
Example: :setvar TableName MyTable
SELECT * FROM [$(TableName)]
--------------------------------------------------------------------------------------
*/
:r .\StaticData\States.sql
:r .\TestData\Logins.sql
The problem is that the above code does not work. For some strange reason, the deploy command just ignores the paths and looks for States.sql and Logins.sql directly in Scripts/Post-Deployment rather than in the appropriate subfolders. Has anyone else encountered anything similar? Very simple issue, but it is taking me forever to get around. I have tried my best to explain, but ask questions and I can try to make things clearer.
Thanks!

I took a look at your sample code. When I first tried to reproduce this, I was using a SQL 2008 database project in Visual Studio 2010, but your project is a data-tier application, and that is very different; when I switched to a data-tier application, I was able to reproduce what you're seeing.
Data-tier applications produce DAC packages that contain the definitions of objects and also contain user-defined scripts, like the pre- and post-deployment scripts. Now, I'm not 100% certain (I haven't used DAC packages before, so I'm basing this on observation and research), but I'm guessing that the file structure of the DAC package doesn't support subfolders under the Scripts\Post-Deployment folder; I assume it has a fairly strict internal folder structure. Consequently, the DACCompiler appears to strip out just the filenames from the file references in your post-deployment script and ignore the directory path.
There is a whitepaper on data-tier applications here. In it is a section on adding a post-deployment script to the package, and in that section are some best practices, including the following:
• When you work in Solution Explorer, it is recommended that you include all post-deployment commands in the Script.PostDeployment.sql script file. This is because only one post-deployment file is included in the DAC package. In other words, you should not create multiple files.
Now, technically, that's what the :r command does, but you may find it easier to just embed the commands directly into the file manually.
It's also possible that this is simply a bug in the design of the DACCompiler.
Here's what I recommend that you do:
• For now, the easiest thing to do - I believe - is just to move the scripts up directly under the Post-Deployment folder; give them unique, descriptive names to compensate for not having the subdirectories (see the example just after this list).
• Alternatively, if you really want to keep the subdirectories, add a pre-build command to your project; have it copy the scripts from the subdirectories into the Post-Deployment directory before the build starts (you'll need to ensure the scripts have unique filenames).
• If you feel that this is a bug, or a feature that should exist, go to http://connect.microsoft.com/SQLServer and recommend that the product team address it in a future version of the product. This is a great place to make these kinds of recommendations, because the feedback goes to the product team, the user community at large can vote on feedback to increase its weight, and the product team can communicate back to you with information about the feedback.
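For example, with the first option your flattened Script.PostDeployment.sql might simply reference the renamed files (the new names here are only illustrative):
:r .\StaticData_States.sql
:r .\TestData_Logins.sql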
And, of course, you could hold out and see if somebody else has a different answer, and if there is, great! But I'm guessing if nobody else has responded yet, then probably there isn't one; I certainly couldn't find anything in my digging.
I hope overall this information is helpful. I wish I could give you a way to have it work now, but I think your best bet is to work within the limitations of the current design and post feedback to Connect.
Good luck.

I have a feeling that this will be too late in the pipeline to help with your problem but it might be worth a look. The dacpac format is just a zip file that contains a series of xml files and SQL scripts. If you change the extension of the file to zip then you will be able to access the files that it contains. The postdeploy.sql file should contain the aggregation of your post deployment script and any others that it references.

I just tried this using Visual Studio 2013 and it works.
IF ( '$(DeployType)' = 'Qualification' )
BEGIN --Run scripts
PRINT 'Deploying Qualification Specific scripts.'
:r .\Qualification\"QualificationSpecificTestScript.sql"
END
ELSE IF ( '$(DeployType)' = 'Production' )
BEGIN --Run scripts
PRINT 'Deploying Production Specific scripts.'
:r .\Production\"ProductionSpecificTestScript.sql"
END
The contents of QualificationSpecificTestScript.sql and ProductionSpecificTestScript.sql are inserted into the generated post-deployment script.
Here is the generated script file (just the relevant section):
IF ( '$(DeployType)' = 'Qualification' )
BEGIN --Run scripts
PRINT 'Deploying Qualification Specific scripts.'
begin transaction;
PRINT 'IN QUALIFICATION ENVIRONMENT POST DEPLOYMENT SCRIPT'
commit transaction;
END
ELSE IF ( '$(DeployType)' = 'Production' )
BEGIN --Run scripts
PRINT 'Deploying Production Specific scripts.'
begin transaction;
PRINT 'IN PRODUCTION ENVIRONMENT POST DEPLOYMENT SCRIPT'
-- TODO: Confirm this record should be deleted
--DELETE TB_VariableName where Id = 9514
commit transaction;
END

Related

Including scripts in QueryBank / Saved Export queries?

Can I incorporate scripts setting variables and while loops, etc. in the "QB Query" of the Query Bank?
I have a SQL Server script that works perfectly in my local dev DB, but it doesn't play nice with Volusion.
I don't know if I should spend more time figuring it out or just stop because it isn't even possible.
You can't do it in the Custom Queries / Query Bank area. I believe their system will stop executing the script when it encounters certain keywords or punctuation. A work around is to create a .sql file that contains your script and place it in your vspfiles/schema/Generic folder. You'll also need an .xsd file with the same name. The contents of the xsd file aren't very important - you can reuse the contents from an existing one (search their support pages for Developer Resources to find examples). Once the sql and xsd files are in place, you can execute the SQL in the .sql file by using the URL/API method, like this...
http://www.MYWEBSITE.com/net/WebService.aspx?Login=USER@MYWEBSITE.com&EncryptedPassword=XXXXXXXXXXXXXXXXX&EDI_Name=Generic\FILENAME (<-- minus the .sql extension)
You'll need to replace several things above, of course, but this works well for us. One thing to note: if you automate the creation and execution of these files, it's slow/inefficient on their system and could slow your site down, depending on how often you do it.

Bulk Insert in Post Deployment Script

So, I have a script that uses BULK INSERT to pull text from files and insert their contents into a table. I am loading from text files because the text may be large, and this way I do not need to worry about escaping. I have the script working locally with a hard-coded directory, e.g. ('C:\Users\me\Files\File.txt'). But I need to run this script in a post-deployment script, and the text files that I am reading from are in the same database project. I cannot use a hard-coded directory because the path may differ depending on the environment the project is published to. Is there a way to get a relative path, or to get the solution/project's directory after deployment?
Because BULK INSERT needs an absolute path, scripts have no concept of relative paths, and this will be deployed to multiple environments where I do not know the absolute path, I decided to use PowerShell together with BULK INSERT. In the database project's pre-build event, I call my PowerShell script. The PowerShell script can figure out its current directory and builds a SQL file that is called in the post-deployment script; in that SQL file, I BULK INSERT using the current directory.
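I can't show the exact generated file, but a rough sketch of the kind of statement that ends up in it might look like this (the table name, file name and DataFileDir variable are hypothetical; the generated file could equally contain the literal path instead of a SQLCMD variable):
-- DataFileDir is assumed to be supplied at deploy time, e.g. :setvar DataFileDir "C:\Deploy\Files"
BULK INSERT dbo.FileContents
FROM '$(DataFileDir)\File.txt'
WITH (ROWTERMINATOR = '\n', TABLOCK);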
Why not use BCP: http://msdn.microsoft.com/en-us/library/ms162802.aspx ? It can handle relative paths. And if you are able to call PowerShell, I don't see why you wouldn't be able to call BCP.EXE. It is essentially the same API as BULK INSERT.
Have you considered using a standard location on the file system? When I need to write DOS/CMD scripts that are portable (including install stuff for later consumption via T-SQL, such as CREATE ASSEMBLY FROM), I do something like:
IF NOT EXIST C:\TEMP\MyInstallFolder (
MKDIR C:\TEMP\MyInstallFolder
)
REM put stuff into C:\TEMP\MyInstallFolder now that it is certain to be there
REM CALL some process that looks in C:\TEMP\MyInstallFolder
The MKDIR will create all missing parent folders. So a folder like C:\TEMP, which used to be standard on Windows PCs but is typically gone now that per-user temp folders are used, gets created, and then MyInstallFolder is created inside it, causing no errors. The IF NOT EXIST check makes sure that re-running the script will not error after the first run.
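For instance, once the files are guaranteed to be in that folder, the T-SQL side can rely on the fixed path (the assembly and file names here are made up):
-- Load a CLR assembly from the agreed-upon install folder
CREATE ASSEMBLY MyInstallUtilities
FROM 'C:\TEMP\MyInstallFolder\MyInstallUtilities.dll'
WITH PERMISSION_SET = SAFE;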

Wordpress database migration

I've looked around the Wordpress forums about this and didn't find anything so I thought I might try here.
If you have a staging/dev Wordpress setup used for testing new pluging and such, how do you go about migrating the data in the staging database back to the production database? Is there a "Wordpress best practices" way to do this, or am I limited to having to manually migrate tables from one database to the other?
I have a script that mysqldumps a copy of my production Wordpress DB, restores it over my test Wordpress install & then corrects all the "production" settings & urls in the test DB.
Both my production & test databases live on the same server, but you could change the mysqldump settings to dump from a remote mysql server & restore to a local server quite easily.
Here are my scripts:
overwrite_test.coach_db_with_coache_db.sh
#!/bin/bash
dbUser="co*******"
dbPassword="*****"
dbSource="coach_production"
dbDest="coach_test"
tmpDumpFile="/tmp/$dbSource.sql"
mysqldump --add-drop-table --extended-insert --user=$dbUser --password=$dbPassword --routines --result-file=$tmpDumpFile $dbSource
mysql --user=$dbUser --password=$dbPassword $dbDest < $tmpDumpFile
mysql --user=$dbUser --password=$dbPassword $dbDest < /AdminScripts/change_coach_to_test.coach.sql
change_coach_to_test.coach.sql
-- Change all db references from @oldDomain to @newDomain
SET @oldDomain = 'coach.co.za';
SET @newDomain = 'test.coach.co.za';
SET @testUsersPassword = 'password';
UPDATE `wp_1_options` SET `option_value` = REPLACE(`option_value`,@oldDomain,@newDomain) WHERE `option_name` IN ('siteurl','home','fileupload_url');
UPDATE `wp_1_posts` SET `post_content` = REPLACE(`post_content`,@oldDomain,@newDomain);
UPDATE `wp_1_posts` SET `guid` = REPLACE(`guid`,@oldDomain,@newDomain);
UPDATE `wp_blogs` SET `domain` = @newDomain WHERE `domain` = @oldDomain;
UPDATE `wp_users` SET `user_pass` = MD5( @testUsersPassword );
-- Only valid for main wpmu site
UPDATE `wp_site` SET `domain` = @newDomain WHERE `domain` = @oldDomain;
Perhaps you are just looking for the wrong thing. Wouldn't a backup plugin handle this with ease? I know they exist for all the big CMS packages...
The two methods would be using the export/import feature under tools or copying the database. I email myself a copy of my production database weekly using the WordPress Database Backup plugin.
The import feature can be problematic for moving a WordPress blog, because the default upload file size limit on a hosted PHP implementation tends to be too small, so you often have to adjust your php.ini file.
I wanted to pull the database from my production Wordpress website into an offline development copy of it on my desktop machine, so I could modify the site and test it with a full set of the existing blog content and history.
This proved to be problematic, as simply making an offline backup of the database and importing it into the local development database did not work.
Overcoming these problems in moving data from the production to the dev database can probably be used to go the other way as well - so I think you can just use these guidelines for what you want to do as well - just start with dev data and move it to prod.
The problems here were:
• The permalink designations for the blog posts are all stored in the database as they would be for the online version, but my offline copy isn't at the domain address; instead it is in the localhost directory. So when I launch the site locally, although the css formatting and images are all in place (the image links being relative), the actual blog posts don't show up.
• Many of the links throughout the site link back out to the internet, so if you try to navigate to archives, or comments, or categories, or the main posts, you get sent back out to the internet instead of staying in the database on the local machine.
To make sure I was doing this right, I blew away the wordpress install I had on my local machine and restarted from scratch.
Once I had a clean, new wordpress install and a brand new, freshly created local database for it, I opened up the database in phpMyAdmin and took a look at the wp_posts table. Inside there, each record (in other words, each post) has a column titled "guid", which shows the location of that post. For example, the first one in a fresh, default install contains this "guid" value:
http://localhost/wordpress/?p=1
If you look in the wp_posts table of your online version, you'll see instead in this location the url to your site online.
You can't just import the tables wholesale into your local install, because you'll be importing all these outside references. It will make your local version impossible to navigate locally.
So, I created a backup copy of my online site's database and saved it locally as a .sql file. I then opened that file in a text editor (I used notepad++, a great piece of free software, but you could use any text editor). Things I needed to look out for:
• For whatever reason, the tables on my online site aren't just, for example, "wp_posts" - they are "wp_something_posts"... there are some extra letters in the table names.
• Any references to http://... that contain my online url instead of localhost/wordpress
To keep it simple let's just do only the posts. In the backup copy of the .sql you've made of your online database, find the beginning of the wp_posts table. It will look something like this:
--
-- Table structure for table `wp_posts`
--
DROP TABLE IF EXISTS `wp_posts`;
CREATE TABLE `wp_posts` (
...and so on. Highlight everything above that, up to just below the comment marking the beginning of the database at the top of the file (it will say -- Database: 'your database name'), and delete it. Then go to the end of your wp_posts table and delete everything after the end of it, down to the bottom of the file. Now your file only contains your posts, and nothing else.
Save this as a separate document. Call it posts.sql or something like that.
Now, in this posts.sql file, you need to do two find/replace actions:
• Find every instance of the table name wp_something_posts and replace it with wp_posts. You only need to do this if your backup copy of your online database doesn't match your clean local install as far as the table names go. You want whatever the table name is in this file to match what your locally installed wordpress database has as this table name. If you don't make these names match, you are just going to end up importing the posts into a new, differently named table, which will be of no use to you at all.
• Find every instance of http://... (replace the ellipsis with your url) and replace it with http://localhost/wordpress (or whatever the local url to your dev version of the site is).
Now save this file again, to make sure you've got these changes set.
Now that you've done that, use phpMyAdmin to get into the wordpress database on your local machine, select the "import" tab and navigate the selector to the posts.sql file you just made, and then import it. This will pull all the data in that file into your local wp_posts table.
When that finishes, browse your local wordpress site. You'll see all your posts in there now. Hooray!
You may need to do something similar for a few other tables if you want to bring in your comments, tags, categories, and static pages you've created, etc.
I realize this is a convoluted process. There is probably a tool out there somewhere that makes this activity easier, and if someone knows of one I'd love to find out about it. If someone knows of a better way to do this manually than what I've described, I'd love to know that as well!
Until then, this is the way I figured out how to do it. Hopefully it helps get you going in the right direction.
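One more note on the find/replace step: an alternative I believe would also work is to import the posts table unmodified and then fix the URLs in SQL, much like the migration script earlier on this page (the production domain below is just a placeholder):
-- Rewrite absolute URLs in the imported posts to point at the local install
UPDATE wp_posts SET post_content = REPLACE(post_content, 'http://www.yoursite.com', 'http://localhost/wordpress');
UPDATE wp_posts SET guid = REPLACE(guid, 'http://www.yoursite.com', 'http://localhost/wordpress');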
You need to handle the serialized objects. Here is a client side HTML5 utility to handle it. Because it is all javascript, it's quite fast.
The alternative would be hooking a bash script into your deployment. So once the site is deployed, the db is backed up and deserialized with the new domain.
This about sums up the problems with the wordpress core architecture... but I wrote a plugin that solves the problems with domain names and absolute urls being stored in the database:
http://wordpress.org/extend/plugins/root-relative-urls/
This will solve the problems outlined by @oddbill. Though don't worry too much about the url being in the GUID column, as that field is never used for link generation.
@markratledge provides a couple of links to some lengthy documents that basically say this:
//export
mysqldump -u[username] -p[password] [database] > backup.sql
//import
mysql -u[username] -p[password] [database] < backup.sql
You'll want to exclude the comments/comments_meta tables if you push to production from staging, so you don't lose all of your comments and trackbacks (@DavidLaing's approach will wipe those out). And this assumes you only make content changes in your staging environment. If you want to make changes in both production and staging, you'll need to write scripts that sync the data instead of wholesale overwriting it... good luck with that task; may I suggest adding created & modified timestamp columns before you invest too much time in the current schema.
And finally, @RussellStuever's approach is suitable in most circumstances; just be sure to know when you are browsing your host-mapped site versus your production site. And really be sure about it, because some browsers cache DNS lookups for days until you physically close them and start a new process. At that point switching hosts may take some time, and switching frequently may get frustrating. And if you need to test with an iPhone, you'll need to publish the site live first, or use a good router that can remap outbound internet requests to local servers, because you cannot modify hosts files on most mobile devices.
My plugin lets you develop and test from http://localhost/ or http://staging.server.local/ or http://www.production.com without any of the usual pitfalls. And then to migrate data, it's as simple as exporting and importing the data, no search & replace step or database setting tweaks necessary.
And don't rely on the import/export tool, it doesn't capture everything in typical wordpress installations, and still requires a needless search & replace step.

What are the best practices for database scripts under code control

We are currently reviewing how we store our database scripts (tables, procs, functions, views, data fixes) in subversion and I was wondering if there is any consensus as to what is the best approach?
Some of the factors we'd need to consider include:
Should we check in 'Create' scripts, or check in incremental changes as 'Alter' scripts
How do we keep track of the state of the database for a given release
It should be easy to build a database from scratch for any given release version
Should a table exist in the database listing the scripts that have run against it, or the version of the database etc.
Obviously it's a pretty open ended question, so I'm keen to hear what people's experience has taught them.
After a few iterations, the approach we took was roughly like this:
One file per table and per stored procedure. Also separate files for other things like setting up database users, populating look-up tables with their data.
The file for a table starts with the CREATE command and a succession of ALTER commands added as the schema evolves. Each of these commands is bracketed in tests for whether the table or column already exists. This means each script can be run in an up-to-date database and won't change anything. It also means that for any old database, the script updates it to the latest schema. And for an empty database the CREATE script creates the table and the ALTER scripts are all skipped.
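A minimal T-SQL sketch of that guard pattern (the table and column names are made up):
-- Create the table only if it does not exist yet
IF OBJECT_ID('dbo.Customer', 'U') IS NULL
BEGIN
    CREATE TABLE dbo.Customer (Id INT NOT NULL PRIMARY KEY, Name NVARCHAR(100) NOT NULL);
END;
-- Later schema change: add a column only if it is missing
IF COL_LENGTH('dbo.Customer', 'Email') IS NULL
BEGIN
    ALTER TABLE dbo.Customer ADD Email NVARCHAR(256) NULL;
END;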
We also have a program (written in Python) that scans the directory full of scripts and assembles them into one big script. It parses the SQL just enough to deduce dependencies between tables (based on foreign-key references) and orders them appropriately. The result is a monster SQL script that gets the database up to spec in one go. The script-assembling program also calculates the MD5 hash of the input files and uses that to update a version number that is written into a special table by the last script in the list.
Barring accidents, the result is that the database script for a given version of the source code creates the schema that code was designed to interoperate with. It also means that there is a single (somewhat large) SQL script to give to the customer to build new databases or update existing ones. (This was important in this case because there would be many instances of the database, one for each of their customers.)
There is an interesting article at this link:
https://blog.codinghorror.com/get-your-database-under-version-control/
It advocates a baseline 'create' script followed by checking in 'alter' scripts and keeping a version table in the database.
The upgrade script option
Store each change in the database as a separate sql script. Store each group of changes in a numbered folder. Use a script to apply changes a folder at a time and record in the database which folders have been applied.
Pros:
Fully automated, testable upgrade path
Cons:
Hard to see full history of each individual element
Have to build a new database from scratch, going through all the versions
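A minimal sketch of the bookkeeping this option implies, assuming SQL Server and made-up names:
-- Records which change-set folders have already been applied
IF OBJECT_ID('dbo.SchemaChangeLog', 'U') IS NULL
    CREATE TABLE dbo.SchemaChangeLog (
        ChangeSetFolder VARCHAR(50) NOT NULL PRIMARY KEY,
        AppliedOn DATETIME NOT NULL DEFAULT (GETDATE())
    );
-- The upgrade runner would check this before applying folder '0003' and record it afterwards
IF NOT EXISTS (SELECT 1 FROM dbo.SchemaChangeLog WHERE ChangeSetFolder = '0003')
BEGIN
    -- ...run the scripts in folder 0003 here...
    INSERT INTO dbo.SchemaChangeLog (ChangeSetFolder) VALUES ('0003');
END;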
I tend to check in the initial create script. I then have a DbVersion table in my database and my code uses that to upgrade the database on initial connection if necessary. For example, if my database is at version 1 and my code is at version 3, my code will apply the ALTER statements to bring it to version 2, then to version 3. I use a simple fallthrough switch statement for this.
This has the advantage that when you deploy a new version of your application, it will automatically upgrade old databases and you never have to worry about the database being out of sync with the software. It also maintains a very visible change history.
This isn't a good idea for all software, but variations can be applied.
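In my setup the switch itself lives in application code, but purely as an illustration the same stepping idea looks roughly like this in T-SQL (the DbVersion table and the example changes are hypothetical):
-- Assumes dbo.DbVersion exists and holds a single row with the current schema version
DECLARE @CurrentVersion INT = (SELECT TOP (1) Version FROM dbo.DbVersion);
IF @CurrentVersion < 2
BEGIN
    ALTER TABLE dbo.Orders ADD ShippedOn DATETIME NULL;   -- example version-2 change
    UPDATE dbo.DbVersion SET Version = 2;
END;
IF (SELECT TOP (1) Version FROM dbo.DbVersion) < 3
BEGIN
    CREATE TABLE dbo.OrderNotes (Id INT IDENTITY(1,1) PRIMARY KEY, OrderId INT NOT NULL, Note NVARCHAR(MAX) NULL);   -- example version-3 change
    UPDATE dbo.DbVersion SET Version = 3;
END;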
You could get some hints by reading how this is done with Ruby On Rails' migrations.
The best way to understand this is probably to just try it out yourself, and then inspecting the database manually.
Answers to each of your factors:
Store CREATE scripts. If you want to check out version x.y.z, then it'd be nice to simply run your create script to set up the database immediately. You could add ALTER scripts as well to go from the previous version to the next (e.g., you commit version 3, which contains a version 3 CREATE script and a version 2 → 3 alter script).
See the Rails migration solution. Basically they keep the table version number in the database, so you always know.
Use CREATE scripts.
Using version numbers would probably be the most generic solution — script names and paths can change over time.
My two cents!
We create a branch in Subversion and all of the database changes for the next release are scripted out and checked in. All scripts are repeatable so you can run them multiple times without error.
We also link the change scripts to issue items or bug ids so we can hold back a change set if needed. We then have an automated build process that looks at the issue items we are releasing and pulls the change scripts from Subversion and creates a single SQL script file with all of the changes sorted appropriately.
This single file is then used to promote the changes to the Test, QA and Production environments. The automated build process also creates database entries documenting the version (branch plus build id.) We think this is the best approach with enterprise developers. More details on how we do this can be found HERE
The create script option:
Use create scripts that will build the latest version of the database from scratch, empty except for the default lookup data.
Use standard version control techniques to store, branch and tag versions and to view histories of your objects.
When upgrading a live database (where you don't want to lose data), create a blank second copy of the database at the new version and use a tool like red-gate's link text
Pros:
Changes to files are tracked in a standard source-code like manner
Cons:
Reliance on manual use of a 3rd party tool to do actual upgrades (no/little automation)
Our company checks them in simply because someone decided to put it in some SOX document that we do. It makes no sense to me at all, except possibly as a reference document. I can't see a time we'd pull them out and try to use them again, and if we did, we'd have to know which one ran first and which to run after which. Backing up the database is much more important than keeping the Alter scripts.
For every release we produce one update.sql file which contains all the new table scripts, alter statements, new/modified packages, roles, etc. This file is used to upgrade the database from version 1 to version 2.
Whatever we include in the update.sql file also has to go into the individual respective files: an added column goes into the table's create script (the table script is modified, rather than an ALTER statement being appended after the CREATE TABLE), and likewise for new tables, roles and so on.
So whenever a user wants to upgrade, he uses the update.sql file.
If he wants to build from scratch, he uses build.sql, which already contains all of the above statements; this keeps the database in sync.
In my case, I built a shell script for this work: https://github.com/reduardo7/db-version-updater
How best to do this is an open question. In my case I am trying to create something simple that is easy for developers to use, and I ended up with the following scheme.
Things I tested:
• File-based script handling in git using GitLab CI: it does not work; collisions are created, the administration part has to be done by hand in case of disaster, and the development part is too complicated.
• Use of permissions and access via mysql clients: there is no traceability of changes to the database, and the transition to production is manual.
• Use of the programs mentioned here: they require uploading the structures and many adaptations, and you usually end up tracking changes in a document anyway.
• Repository usage: I could not control the DRP part or properly control the backups, and I don't think it is a good idea to keep the backups on the same server, since that creates high lag for the process.
This is what worked best:
• Manage permissions per user and generate traceability of everything that is sent to the database
• Multi-platform
• Use of development / QA / production databases
• Always take a backup before each modification
• Manage an open repository for change control
• Multi-server
• Deactivate / activate access to the web page or app through endpoints
The initial project is at:
https://hub.docker.com/r/arelis/gitdb
There is an interesting article with a new URL at: https://blog.codinghorror.com/get-your-database-under-version-control/
It's a bit old, but the concepts are still there. Good read!

How to keep Stored Procedures and other scripts in SVN/Other repository?

Can anyone provide some real examples as to how best to keep script files for views, stored procedures and functions in an SVN (or other) repository?
Obviously one solution is to have the script files for all the different components in a directory or more somewhere and simply using TortoiseSVN or the like to keep them in SVN, Then whenever a change is to be made I load the script up in Management Studio etc. I don't really want this.
What I'd really prefer is some kind of batch script that I can run periodically (nightly?) that would export all the stored procedures / views etc that had changed in a given timeframe and then commit them to SVN.
Ideas?
Sounds like you're not wanting to use Revision Control properly, to me.
Obviously one solution is to have the script files for all the different components in a directory or more somewhere and simply using TortoiseSVN or the like to keep them in SVN
This is what should be done. You would have your local copy you are working on (Developing new, Tweaking old, etc) and as single components/procedures/etc get finished, you would commit them individually until you have to start the process over.
Committing half-done code just because it's been 'X' time since it was last committed is sloppy and guaranteed to cause anyone else using the repository grief.
I find it best to treat Stored Procedures just like any other compilable code: Code lives in the repository, you check it out to make changes and load it in your development tool to compile or deploy the code.
You can create a batch file and schedule it:
delete the contents of your scripts directory
using something like ExportSQLScript to export all objects to script/scripts
svn commit
Please note: although you'll have the objects under source control, you'll not have the data or its progression (is that a renamed field, or 1 new field and 1 deleted?).
This approach is fine for maintaining change history. But, of course, you should never be automatically committing to the "production build" (unless you like broken builds).
Although you didn't ask for it: This approach also won't produce a set of scripts that will upgrade a current DB. You'll only have initial creation scripts. Recording data progression and creation upgrade scripts is beyond basic source control systems.
I'd recommend Redgate SQL Compare for this - it allows you to compare database versions and generate change scripts - it's also fairly easily scriptable.
Based on your expanded question, you really want to use DDL triggers. Check out this article that details how to create a changelog system for your database.
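I won't reproduce the article here, but a minimal sketch of that kind of changelog (table, trigger and column names are made up) would be along these lines:
-- Table to record every DDL change made in the database
CREATE TABLE dbo.DatabaseChangeLog (
    Id INT IDENTITY(1,1) PRIMARY KEY,
    EventType NVARCHAR(100),
    ObjectName NVARCHAR(256),
    SqlText NVARCHAR(MAX),
    LoginName NVARCHAR(256),
    EventDate DATETIME NOT NULL DEFAULT (GETDATE())
);
GO
-- Database-scoped DDL trigger that captures DDL events via EVENTDATA()
CREATE TRIGGER trg_LogDdlChanges
ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
BEGIN
    DECLARE @e XML = EVENTDATA();
    INSERT INTO dbo.DatabaseChangeLog (EventType, ObjectName, SqlText, LoginName)
    VALUES (
        @e.value('(/EVENT_INSTANCE/EventType)[1]', 'NVARCHAR(100)'),
        @e.value('(/EVENT_INSTANCE/ObjectName)[1]', 'NVARCHAR(256)'),
        @e.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'NVARCHAR(MAX)'),
        @e.value('(/EVENT_INSTANCE/LoginName)[1]', 'NVARCHAR(256)')
    );
END;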
Not sure on your price range, however DB Ghost could be an option for you.
I don't work for this company (or own the product) but in my researching of the same issue, this product looked quite promising.
I should've been a little more descriptive. The database in question is for an internal ERP system, and thus we don't have many versions of our database, just Production/Testing/Development. When we've done a change request, some new fancy feature or something, we simply execute a script or series of scripts to update the procedures in question on the Testing database; if that is all good, then we do the same to Production.
So I'm not really after a full schema script per se, just something that can keep track of the various edits to the stored procedures over time. For example, PROCESS_INVOICE does stuff. It gets updated in some minor way in March. Some time later, in say May, it is discovered that in a rare case customers get double-invoiced (or some other crazy corner case). I'd like to be able to see what has happened to this procedure over time. The way the development environment is currently set up here, I don't have that, which is what I'm trying to change.
I can recommend DBPro which is part of Visual Studio Team Edition. Have been using it for a few months for storing all parts of the database in Team Foundation Server as well as for deployment and database compares, etc.
Of course, as someone else mentioned, it does depend on your environment and price range.
I wrote a utility for dumping all of the relevant parts of my db into a directory structure that I use SVN on. I never got around to trying to incorporate it into the Manager but, if you're interested, it's here: http://www.reluctantdba.com/dbas-and-programmers/sqltools/svnforsql2005.aspx
It's free and, since I regularly run it, you know any bugs get fixed quickly.
You can always try integrating SourceSafe with SQL Server. Here's a quick start: link. To work with it you've got to have Management Studio Developer Edition.
