SSDT implementation: Alter table instead of Create - sql-server

We are just trying to implement SSDT in our project.
We have lots of clients for one of our products, which is built on a single database (DBDB) containing only tables and stored procedures.
We created one SSDT project for database DBDB (using VS 2012 > SQL Server Object Browser > right-click on project > New Project).
Once we build that project, it creates one .sql file.
Problem: if we run that file on a client's DBDB, it creates all the tables again and deletes all the records in them [this gets the schema right but wipes the existing data :-( ].
What we need: only the changes that are not yet present in the client's DBDB should be applied.
Note: we have no direct access to the client's DBDB database for comparing it with our latest DBDB. We can only send them some magic script file which will update their DBDB to the latest state.

The only way to update the client's DB is to compare the DB schemas and then apply the delta. Any way you do it, you will need some way to get hold of the schema that's running at the client:
If you ship a versioned product, it is easiest to deploy version N-1 of it to your development server and compare that to the version N you are going to ship. This way, SSDT can generate the migration script you need to ship to the client to pull that DB up to the current schema.
If you don't have a versioned product, or your client might have altered the schema, you will need to find a way to extract the schema data on site (maybe using SSDT there) and then let SSDT create the delta.
Option: You can skip the compare feature of SSDT altogether. But then you need to write your migration script yourself. For each modification to the schema, you write the DDL statements yourself and wrap them in if-clauses that check for the old state, so the changes are only made once and only if the old state exists (see the sketch at the end of this answer). This way, it doesn't really matter from which state to which state you are going, as the script determines for each step if and what to do.
The last option is the most flexible, but it requires thorough testing of its own, and of course it should have been started well before the situation you are in now, where you no longer know what the changes have been. But it can help for next time.
This only applies to schema changes on the tables, because you can always fall back to simply dropping and recreating ALL stored procedures, since nothing is lost by dropping them.
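A minimal sketch of that if-clause approach, using hypothetical objects (a dbo.Customer table gaining an Email column, and a dbo.usp_GetCustomers procedure that is simply dropped and recreated):
-- Only add the column if it is not there yet, so the script can be run repeatedly.
IF COL_LENGTH('dbo.Customer', 'Email') IS NULL
BEGIN
    ALTER TABLE dbo.Customer ADD Email nvarchar(256) NULL;
END
GO
-- Stored procedures can simply be dropped and recreated every time.
IF OBJECT_ID('dbo.usp_GetCustomers', 'P') IS NOT NULL
    DROP PROCEDURE dbo.usp_GetCustomers;
GO
CREATE PROCEDURE dbo.usp_GetCustomers
AS
BEGIN
    SET NOCOUNT ON;
    SELECT CustomerID, Email FROM dbo.Customer;
END
GO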

It sounds like you may not be pushing the changes correctly. You have a couple of options if you've built a SQL Project.
Give them the dacpac and have them use SQLPackage to update their own database.
Generate an update script against your customer's "current" version and give that to them.
In any case, it sounds like your publish option might be set to drop and recreate the database each time. I've written quite a few articles on SSDT SQL Projects and getting started that might be helpful here: http://schottsql.blogspot.com/2013/10/all-ssdt-articles.html

Related

How to run raw SQL to deploy database changes

We intend to create DACPAC files using SQL database projects and distribute them automatically to several environments (DEV/QA/PROD) using Azure Pipelines. I can make changes to the schema for a table, view, function, or procedure, but I'm not sure how we can update specific data in a table. I am sure this is a very common use case, but unfortunately I am having a hard time implementing it.
Any idea how I can automate creating/updating/deleting a row in a table?
E.g.: update myTable set myColumn = 5 where someColumn = 'condition'
In your database project you can add a Post Deployment Script.
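For the example in the question, a minimal sketch of such a post-deployment script (using the hypothetical myTable/myColumn/someColumn names from the question) could look like this:
-- Post-Deployment Script: it runs after every publish, so every statement
-- in it should be safe to execute repeatedly.
UPDATE dbo.myTable
SET    myColumn = 5
WHERE  someColumn = 'condition'
  AND  myColumn <> 5;          -- guard so repeated publishes are no-ops
GO
-- Inserts and deletes get the same treatment.
IF NOT EXISTS (SELECT 1 FROM dbo.myTable WHERE someColumn = 'new-row')
    INSERT INTO dbo.myTable (someColumn, myColumn)
    VALUES ('new-row', 1);
GO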
Do not. Seriously. I found DACPAC to always be WAY too limiting for serious operations. Look at how the SQL is generated and realize how little control you have.
The standard approach is to have deployment scripts that you generate and that make the changes in the database, plus a table in the db tracking which scripts have executed (possibly with a checksum so you do not need to change the name to update them); a sketch of that table follows below.
You can easily generate them partially by schema compare (and then generate the change script), but deployment scripts also allow you to do things like data scrubbing and multi-step transformations that DACPAC by design cannot do efficiently and easily.
There are plenty of frameworks for this around. They generally belong in the category of developer tools.
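A minimal sketch of such a tracking table and the wrapper each deployment script would use (all names here are placeholders, not part of any particular framework):
-- One-time setup: a record of every deployment script that has already run.
IF OBJECT_ID('dbo.SchemaChangeLog') IS NULL
    CREATE TABLE dbo.SchemaChangeLog
    (
        ScriptName   nvarchar(256) NOT NULL PRIMARY KEY,
        ScriptHash   varbinary(64) NULL,      -- e.g. HASHBYTES('SHA2_256', script body)
        AppliedAtUtc datetime2     NOT NULL DEFAULT SYSUTCDATETIME()
    );
GO
-- Wrapper pattern repeated in every change script:
IF NOT EXISTS (SELECT 1 FROM dbo.SchemaChangeLog WHERE ScriptName = N'0042_AddCustomerEmail.sql')
BEGIN
    ALTER TABLE dbo.Customer ADD Email nvarchar(256) NULL;   -- the actual change

    INSERT INTO dbo.SchemaChangeLog (ScriptName)
    VALUES (N'0042_AddCustomerEmail.sql');
END
GO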

VSTS build - Incremental database deployment in distributed environment

I have a SQL Server database working with a .NET MVC 5 application (VS 2015). My database code is source controlled in an SSDT project. I am using SqlPackage.exe to deploy the database to the staging environment using the .dacpac file created by the SSDT project build process. This is done in a PowerShell task of the VSTS build.
This way I can make db schema changes in a source-controlled way. Now the problem is with the master data insertion for the database.
I use a SQL script file that contains the data insertion statements and is executed as a post-deployment script. This file is also source controlled.
The problem is that initially we prepared the insertion script against one sprint (taking sprint n as a base), which works well for the first release. But if we update some master data in the next sprint, how should the master data inserts be updated?
Add new update/insert queries at the end of the script file? In this case the post-deployment script will be executed by CI and will try to insert the data again and again in subsequent builds, which will eventually fail if we have made schema changes to the master tables of this database.
Update the existing insert queries in the data insertion script? In this case we also have trouble, because at the post-build event the whole data set will be re-inserted.
Maintain separate data insertion scripts for each sprint and update the script reference to the new file for the post-build event of SSDT? This approach takes manual effort and is error-prone, because the developer has to remember the process. The other problem with this approach is if we need to set up one more database server in the distributed server farm: multiple data insertion scripts will throw errors, because SSDT has the latest schema and will create the database from it, but the older data scripts insert data for a previous schema (a sprint-wise db schema that was changed in later sprints).
So can anyone suggest the best approach, one that needs less manual effort but still covers all the above cases?
Thanks
Rupendra
Make sure your pre- and post-deployment scripts are always idempotent. However you want to implement that is up to you. The scripts should be able to be run any number of times and always produce correct results.
So if a schema change would affect the deployment scripts, then updating those scripts is part of that change and accompanies it in source control.
Versioning of your database is already a built in feature of SSDT. In the project file itself, there is a node for the version. And there is a whole slew of versioning build tasks in VSTS you can use for free to version it as well. When SqlPackage.exe publishes your project with the database version already set, a record is updated in msdb.dbo.sysdac_instances. It is so much easier than trying to manage, update, etc. your own home-grown version solution. And you're not cluttering up your application's database with tables and other objects not related to the application itself.
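If you want to check what was recorded, that registration can be inspected directly (just a quick look, not a deployment step):
-- Lists the data-tier application registrations, including the recorded version.
SELECT * FROM msdb.dbo.sysdac_instances;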
I agree with keeping sprint information out of the mix.
In our projects, I label source on successful builds with the build number, which of course creates a point in time marker in source that is linked to a specific build.
I would suggest using MERGE statements instead of INSERT. This way you are protected against duplicate inserts within a sprint scope.
The next thing is how to distinguish inserts for different sprints. I would suggest implementing version numbering to sync the database with the sprints, so create a table DbVersion(version int).
Then in the post-deployment script do something like this:
DECLARE @version int = (SELECT ISNULL(MAX(version), 0) FROM DbVersion);
IF @version < 1
BEGIN
    -- inserts/merges for sprint 1
END
IF @version < 2
BEGIN
    -- inserts/merges for sprint 2
END
-- ...
DECLARE @currentVersion int = 2;  -- the version this deployment brings the data to
INSERT INTO DbVersion (version) VALUES (@currentVersion);
What I have done on most projects is to create MERGE scripts, one per table, that populate "master" or "static" data. There are tools such as https://github.com/readyroll/generate-sql-merge that can be used to help generate these scripts.
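A hand-written example of one such per-table merge script (dbo.OrderStatus and its columns are hypothetical):
-- Static data for dbo.OrderStatus: the VALUES list is the version-controlled
-- "truth"; MERGE makes the table match it exactly on every deployment.
MERGE dbo.OrderStatus AS target
USING (VALUES
    (1, N'New'),
    (2, N'Shipped'),
    (3, N'Cancelled')
) AS source (Id, Name)
ON target.Id = source.Id
WHEN MATCHED AND target.Name <> source.Name THEN
    UPDATE SET Name = source.Name
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Id, Name) VALUES (source.Id, source.Name)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;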
These get called from a post-deployment script, rather than in a post-build action. I normally create a single (you're only allowed one anyway!) post-deployment script for the project, and then include all the individual static data scripts using the :r syntax. A post-deploy script is just a .sql file with a build action of "Post-Deploy"; this can be created "manually" or by using the "Add New Object" dialog in SSDT and selecting Script -> Post-Deployment Script.
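The post-deployment script then becomes little more than a list of includes. A sketch (the file names are hypothetical; :r is SQLCMD include syntax, which SSDT deployment scripts support):
/*
Post-Deployment Script: one :r include per static-data merge script.
*/
:r .\StaticData\dbo.OrderStatus.data.sql
:r .\StaticData\dbo.Country.data.sql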
These files (including the post-deploy script) can then be versioned along with the rest of your source files; if you make a change to the table definition that requires a change in the merge statement that populates the data, then these changes can be committed together.
When you build the dacpac, all the master data will be included, and since you are using merge rather than insert, you are guaranteed that at the end of the deployment the contents of the tables will match the contents of your source control, just as SSDT/sqlpackage guarantees that the structure of your tables matches the structure of their definitions in source control.
I'm not clear on how the notion of a "sprint" comes into this, unless a "sprint" means a "release"; in this case the dacpac that is built and released at the end of the sprint will contain all the changes, both structural and "master data" added during the sprint. I think it's probably wise to keep the notion of a "sprint" well away from your source control!

Maintain SQL Server scripts

Our firm does not have a dedicated DBA employed but does have select developers performing DBA functions. We update our database often during a development cycle and have a release script with the various updates. We keep our db schema and objects in Visual Studio in a Database Project.
However, we often encounter two stumbling-block problems that cause time-intensive manual intervention:
Developers cannot always sync from the Database Project to their local database, because if we have added a NOT NULL field to an existing table that contains data, then the Deploy process from VS to the db isn't smart enough to automagically insert "test" data just to get the field into the table (unless this is a setting someplace?). We would of course follow this up, if possible, with a script to populate the field with real data, but we can't because the deployment fails.
Sometimes a developer will restore a backup from any past random date. There is no way of knowing exactly which db updates were applied to this database, so they don't know which scripts to start applying. What we do in this case is to check each script, chronologically, to see if the changes from that script have been applied to the database. If so, move on to the next script to run. Repeat.
One method we have discussed is potentially creating a "Database Update Level" table in the database with 1 field, 1 row. It would maintain the level that the database has been updated through. For example, when the first script is run, update the level to 2. In each db script, we would wrap the statements in a check such as
IF (SELECT Database_Update_Level FROM dbo.Database_Update_Level) < 2
BEGIN
    -- do some things here
    UPDATE dbo.Database_Update_Level SET Database_Update_Level = 2;
END
The db scripts can then be run on any database, because the individual statements won't execute unless the database is still below that level.
This feels like we're missing something because this must be a common problem that every development shop that allows developers to develop locally encounters.
Any insights would be greatly appreciated.
Thanks.
About the restore problem, I don't see many solutions; you might try to prevent full restores and run scripts to populate the tables instead. As for versioning structures, do you use SSDT (SQL Server Data Tools) in VS? You can generate DACPACs and generate diff scripts.
But are you saying that you also alter structures directly in the database? Is there no way to avoid that? If not, you could for example use DDL triggers (http://www.mssqltips.com/sqlservertip/2085/sql-server-ddl-triggers-to-track-all-database-changes/) to at least get notified that something changed.
One easy way to solve the NOT NULL problem is to establish default constraints (could just be an empty string, max number value for the data type, max date value, etc.). When the publish occurs the new column will be populated with the default value.
For the second issue I'd utilize post-deploy scripts in your SSDT project to keep the data in sync utilizing 'NOT EXISTS' to make incremental changes. That way, you can simply publish the database and allow the data updates to occur one after another.
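A minimal sketch of both suggestions (dbo.Customer, Region, and dbo.Region are hypothetical names):
-- 1) Adding a NOT NULL column to a populated table: a default constraint lets
--    the publish fill the existing rows instead of failing.
ALTER TABLE dbo.Customer
ADD Region varchar(20) NOT NULL
    CONSTRAINT DF_Customer_Region DEFAULT ('');
GO
-- 2) Post-deploy data sync: only insert what is missing, so the same script
--    works against a fresh publish or a database restored from any past date.
IF NOT EXISTS (SELECT 1 FROM dbo.Region WHERE RegionCode = 'EU')
    INSERT INTO dbo.Region (RegionCode, RegionName)
    VALUES ('EU', 'Europe');
GO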

Create copy of a database only schema

I am relatively new to MS SQL Server. I need to create a test database from an existing test database with the same schema, and then get the data from production to fill the newly created empty database. For this I was using Generate Scripts in SSMS. But now I need to do it on a regular basis in a job. Please guide me on how I can create empty databases automatically at a point in time.
You will have a very hard time automating the generate scripts wizard. I would suggest using something like Red-Gate's SQL Compare (or any alternative that supports command-line). You can create a new, empty database, then script a compare/deploy using the command line from SQL Server Agent.
Another, more icky alternative, is to deploy your schema and modules to the model database. You can keep this in sync using SQL Compare (or alternatives), or just be diligent about deployment of schema/module changes, then when you create a new database it will automatically inherit the current state of your schema/modules. The problem with this approach (other than depending on you keeping model in sync) is that all new databases will inherit this schema, since there currently is no way to have multiple models.
Have you considered restoring backups?
To add to Aaron's already good answer, I've been using SQLDelta for years - I think it's excellent.
(I have no connection to SqlDelta, other than being a very satisfied customer)

Stored procedures/DB schema in source control

Do you guys keep track of stored procedures and database schema in your source control system of choice?
When you make a change (add a table, update a stored proc, etc.), how do you get the changes into source control?
We use SQL Server at work, and I've begun using darcs for versioning, but I'd be curious about general strategies as well as any handy tools.
Edit: Wow, thanks for all the great suggestions, guys! I wish I could select more than one "Accepted Answer"!
We choose to script everything, and that includes all stored procedures and schema changes. No wysiwyg tools, and no fancy 'sync' programs are necessary.
Schema changes are easy, all you need to do is create and maintain a single file for that version, including all schema and data changes. This becomes your conversion script from version x to x+1. You can then run it against a production backup and integrate that into your 'daily build' to verify that it works without errors. Note it's important not to change or delete already written schema / data loading sql as you can end up breaking any sql written later.
-- change #1234
ALTER TABLE asdf ADD MyNewID INT
GO
-- change #5678
ALTER TABLE asdf DROP COLUMN SomeOtherID
GO
For stored procedures, we elect for a single file per sproc, and it uses the drop/create form. All stored procedures are recreated at deployment. The downside is that if a change was made outside source control, it is lost. At the same time, that's true for any code, but your DBAs need to be aware of this. This really stops people outside the team from mucking with your stored procedures, as their changes are lost in an upgrade.
Using Sql Server, the syntax looks like this:
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[usp_MyProc]') and OBJECTPROPERTY(id, N'IsProcedure') = 1)
drop procedure [usp_MyProc]
GO
CREATE PROCEDURE [usp_MyProc]
(
@UserID INT
)
AS
SET NOCOUNT ON
-- stored procedure logic.
SET NOCOUNT OFF
GO
The only thing left to do is write a utility program that collates all the individual files and creates a new file with the entire set of updates (as a single script). Do this by first adding the schema changes then recursing the directory structure and including all the stored procedure files.
As an upside to scripting everything, you'll become much better at reading and writing SQL. You can also make this entire process more elaborate, but this is the basic format of how to source-control all sql without any special software.
addendum: Rick is correct that you will lose permissions on stored procedures with DROP/CREATE, so you may need to write another script that will re-enable specific permissions. This permission script would be the last to run. Our experience found more issues with ALTER versus DROP/CREATE semantics. YMMV
create a "Database project" in Visual Studio to write and manage your sQL code and keep the project under version control together with the rest of your solution.
The solution we used at my last job was to number the scripts as they were added to source control:
01.CreateUserTable.sql
02.PopulateUserTable.sql
03.AlterUserTable.sql
04.CreateOrderTable.sql
The idea was that we always knew which order to run the scripts in, and we could avoid having to manage data integrity issues that might arise if you tried modifying script #1 (which would presumably cause the INSERTs in #2 to fail).
One thing to keep in mind with your drop/create scripts in SQL Server is that object-level permissions will be lost. We changed our standard to use ALTER scripts instead, which maintains those permissions.
There are a few other caveats, like the fact that dropping an object drops the dependency records used by sp_depends, and creating the object only creates the dependencies for that object. So if you drop/create a view, sp_depends will no longer know of any objects referencing that view.
Moral of the story, use ALTER scripts.
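One common way to keep a single self-contained file per object while still using ALTER is the stub-then-ALTER pattern (a sketch, not from the answers above; since the object is never dropped, its permissions survive):
-- Create a stub only if the proc does not exist yet, then ALTER it to the real body.
IF OBJECT_ID(N'dbo.usp_MyProc', N'P') IS NULL
    EXEC ('CREATE PROCEDURE dbo.usp_MyProc AS RETURN 0;');
GO
ALTER PROCEDURE dbo.usp_MyProc
(
    @UserID INT
)
AS
SET NOCOUNT ON
-- stored procedure logic.
SET NOCOUNT OFF
GO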
I agree with (and upvote) Robert Paulson's practice. That is assuming you are in control of a development team with the responsibility and discipline to adhere to such a practice.
To "force" that onto my teams, our solutions maintain at least one database project from Visual Studio Team Edition for Database Professionals. As with other projects in the solution, the database project gets versioned control. It makes it a natural development process to break the everything in the database into maintainable chunks, "disciplining" my team along the way.
Of course, being a Visual Studio project, it is no where near perfect. There are many quirks you will run into that may frustrate or confuse you. It takes a fair bit of understanding how the project works before getting it to accomplish your tasks. Examples include
deploying data from CSV files.
selective deployment of test data based on build type.
Visual Studio crashing when comparing against databases with a certain type of CLR assembly embedded within.
no means of differentiating between test/production databases that implement different authentication schemes - SQL users vs Active Directory users.
But for teams who don't have a practice of versioning their database objects, this is a good start. The other famous alternative is of course, Red Gate's suite of SQL Server products, which most people who use them consider superior to Microsoft's offering.
I think you should write a script which automatically sets up your database, including any stored procedures. This script should then be placed in source control.
Couple different perspectives from my experience. In the Oracle world, everything was managed by "create" DDL scripts. As ahockley mentioned, one script for each object. If the object needs to change, its DDL script is modified. There's one wrapper script that invokes all the object scripts so that you can deploy the current DB build to whatever environment you want. This is for the main core create.
Obviously in a live application, whenever you push a new build that requires, say, a new column, you're not going to drop the table and create it new. You're going to do an ALTER script and add the column. So each time this kind of change needs to happen, there are always two things to do: 1) write the alter DDL and 2) update the core create DDL to reflect the change. Both go into source control, but the single alter script is more of a momentary point in time change since it will only be used to apply a delta.
You could also use a tool like ERWin to update the model and forward generate the DDL, but most DBAs I know don't trust a modeling tool to gen the script exactly the way they want. You could also use ERWin to reverse engineer your core DDL script into a model periodically, but that's a lot of fuss to get it to look right (every blasted time you do it).
In the Microsoft world, we employed a similar tactic, but we used the Red Gate product to help manage the scripts and deltas. Still put the scripts in source control. Still one script per object (table, sproc, whatever). In the beginning, some of the DBAs really preferred using the SQL Server GUI to manage the objects rather than use scripts. But that made it very difficult to manage the enterprise consistently as it grew.
If the DDL is in source control, it's trivial to use any build tool (usually ant) to write a deployment script.
I've found that by far, the easiest, fastest and safest way to do this is to just bite the bullet and use SQL Source Control from RedGate. Scripted and stored in the repository in a matter of minutes. I just wish that RedGate would look at the product as a loss leader so that it could get more widespread use.
Similar to Robert Paulson, above, our organization keeps the database under source control. However, our difference is that we try to limit the number of scripts we have.
For any new project, there's a set procedure. We have a schema creation script at version 1, a stored proc creation script and possibly an initial data load creation script. All procs are kept in a single, admittedly massive file. If we're using Enterprise Library, we include a copy of the creation script for logging; if it's an ASP.NET project using the ASP.NET application framework (authentication, personalization, etc.), we include that script as well. (We generated it from Microsoft's tools, then tweaked it until it worked in a replicable fashion across different sites. Not fun, but a valuable time investment.)
We use the magic CTRL+F to find the proc we like. :) (We'd love it if SQL Management Studio had code navigation like VS does. Sigh!)
For subsequent versions, we usually have upgradeSchema, upgradeProc and/or updateData scripts. For schema updates, we ALTER tables as much as possible, creating new ones as needed. For proc updates, we DROP and CREATE.
One wrinkle does pop up with this approach. It's easy to generate a database, and it's easy to get a new one up to speed on the current DB version. However, care has to be taken with DAL generation (which we currently -- usually -- do with SubSonic), to ensure that DB/schema/proc changes are synchronized cleanly with the code used to access them. However, in our build paths is a batch file which generates the SubSonic DAL, so it's our SOP to checkout the DAL code, re-run that batch file, then check it all back in anytime the schema and/or procs change. (This, of course, triggers a source build, updating shared dependencies to the appropriate DLLs ... )
In past experiences, I've kept database changes source controlled in such a way that for each release of the product any database changes were always scripted out and stored in the release that we're working on. The build process in place would automatically bring the database up to the current version based on a table in the database that stored the current version for each "application". A custom .net utility application we wrote would then run and determine the current version of the database, and run any new scripts against it in order of the prefix numbers of the scripts. Then we'd run unit tests to make sure everything was all good.
We'd store the scripts in source control as follows (folder structure below):
I'm a little rusty on current naming conventions for tables and stored procedures, so bear with my example...
[root]
    [application]
        [version]
            [script]

\scripts
    MyApplication\
        1.2.1\
            001.MyTable.Create.sql
            002.MyOtherTable.Create.sql
            100.dbo.usp.MyTable.GetAllNewStuff.sql
Using a Versions table that took the Application and Version into account, the process would restore the weekly production backup and run all the scripts needed against the database since the current version. By using .NET we were easily able to package this into a transaction, and if anything failed we would roll back and send emails out, so we knew that release had bad scripts.
So, all developers would make sure to maintain this in source control so the coordinated release would make sure that all the scripts we plan to run against the database would run successfully.
This is probably more information than you were looking for, but it worked very well for us and given the structure it was easy to get all developers on board.
When release day came around, the operations team would follow the release notes, pick up the scripts from source control, and run the package against the database with the .NET application we used during the nightly build process, which would automatically wrap the scripts in transactions; if something failed it would automatically roll back and no impact was made to the database.
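A minimal sketch of the kind of Versions table such a utility could key off (the table and column names are hypothetical, and the version comparison is simplified):
CREATE TABLE dbo.Versions
(
    ApplicationName nvarchar(128) NOT NULL,
    Version         nvarchar(32)  NOT NULL,   -- e.g. '1.2.1'
    AppliedAtUtc    datetime2     NOT NULL DEFAULT SYSUTCDATETIME(),
    CONSTRAINT PK_Versions PRIMARY KEY (ApplicationName, Version)
);
GO
-- The deployment utility reads the latest applied version for an application
-- (real code would compare the version parts numerically, not as strings)...
SELECT MAX(Version) FROM dbo.Versions WHERE ApplicationName = N'MyApplication';
-- ...runs every higher-numbered script folder inside one transaction,
-- then records the new version.
INSERT INTO dbo.Versions (ApplicationName, Version) VALUES (N'MyApplication', N'1.2.1');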
Stored procedures get 1 file per sp with the standard if-exists drop/create statements at the top. Views and functions also get their own files so they are easier to version and reuse.
Schema is all 1 script to begin with then we'll do version changes.
All of this is stored in a Visual Studio database project connected to TFS (at work, or VisualSVN Server at home for personal stuff) with a folder structure as follows:
- project
-- functions
-- schema
-- stored procedures
-- views
At my company, we tend to store all database items in source control as individual scripts just as you would for individual code files. Any updates are first made in the database and then migrated into the source code repository so a history of changes is maintained.
As a second step, all database changes are migrated to an integration database. This integration database represents exactly what the production database should look like post deployment. We also have a QA database which represents the current state of production (or the last deployment). Once all changes are made in the Integration database, we use a schema diff tool (Red Gate's SQL Diff for SQL Server) to generate a script that will migrate all changes from one database to the other.
We have found this to be fairly effective as it generates a single script that we can integrate with our installers easily. The biggest issue we often have is developers forgetting to migrate their changes into integration.
We keep stored procedures in source control.
Script everything (object creation, etc) and store those scripts in source control. How do the changes get there? It's part of the standard practice of how things are done. Need to add a table? Write a CREATE TABLE script. Update a sproc? Edit the stored procedure script.
I prefer one script per object.
For procs, write the procs with script wrappers into plain files, and apply the changes from those files. If it applied correctly, then you can check in that file, and you'll be able to reproduce it from that file as well.
For schema changes, you may need to check in scripts to incrementally make the changes you've made. Write the script, apply it, and then check it in. You can build a process then, to automatically apply each schema script in series.
We do keep stored procedures in source control. The way we (or at least I) do it is to add a folder to my project, add a file for each SP, and manually copy and paste the code into it. So when I change the SP, I manually need to change the file in source control.
I'd be interested to hear if people can do this automatically.
I highly recommend maintaining schema and stored procedures in source control.
Keeping stored procedures versioned allows them to be rolled back when determined to be problematic.
Schema is a less obvious answer depending on what you mean. It is very useful to maintain the SQL that defines your tables in source control, for duplicating environments (prod/dev/user etc.).
We have been using an alternative approach in my current project - we haven't got the db under source control but instead have been using a database diff tool to script out the changes when we get to each release.
It has been working very well so far.
We store everything related to an application in our SCM. The DB scripts are generally stored in their own project, but are treated just like any other code... design, implement, test, commit.
I run a job to script it out to a formal directory structure.
The following is VS2005 code, command line project, called from a batch file, that does the work. app.config keys at end of code.
It is based on other code I found online. It's slightly a pain to set up, but works well once you get it working.
Imports Microsoft.VisualStudio.SourceSafe.Interop
Imports System
Imports System.Configuration
Module Module1
Dim sourcesafeDataBase As String, sourcesafeUserName As String, sourcesafePassword As String, sourcesafeProjectName As String, fileFolderName As String
Sub Main()
If My.Application.CommandLineArgs.Count > 0 Then
GetSetup()
For Each thisOption As String In My.Application.CommandLineArgs
Select Case thisOption.ToUpper
Case "CHECKIN"
DoCheckIn()
Case "CHECKOUT"
DoCheckOut()
Case Else
DisplayUsage()
End Select
Next
Else
DisplayUsage()
End If
End Sub
Sub DisplayUsage()
Console.Write(System.Environment.NewLine + "Usage: SourceSafeUpdater option" + System.Environment.NewLine + _
"CheckIn - Check in ( and adds any new ) files in the directory specified in .config" + System.Environment.NewLine + _
"CheckOut - Check out all files in the directory specified in .config" + System.Environment.NewLine + System.Environment.NewLine)
End Sub
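' Adds any new files from fileFolderName to the VSS project; "already exists" errors are ignored.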
Sub AddNewItems()
Dim db As New VSSDatabase
db.Open(sourcesafeDataBase, sourcesafeUserName, sourcesafePassword)
Dim Proj As VSSItem
Dim Flags As Integer = VSSFlags.VSSFLAG_DELTAYES + VSSFlags.VSSFLAG_RECURSYES + VSSFlags.VSSFLAG_DELNO
Try
Proj = db.VSSItem(sourcesafeProjectName, False)
Proj.Add(fileFolderName, "", Flags)
Catch ex As Exception
If Not ex.Message.ToString.ToLower.IndexOf("already exists") > 0 Then
Console.Write(ex.Message)
End If
End Try
Proj = Nothing
db = Nothing
End Sub
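' Checks in everything under fileFolderName, adding any new files first via AddNewItems.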
Sub DoCheckIn()
AddNewItems()
Dim db As New VSSDatabase
db.Open(sourcesafeDataBase, sourcesafeUserName, sourcesafePassword)
Dim Proj As VSSItem
Dim Flags As Integer = VSSFlags.VSSFLAG_DELTAYES + VSSFlags.VSSFLAG_UPDUPDATE + VSSFlags.VSSFLAG_FORCEDIRYES + VSSFlags.VSSFLAG_RECURSYES
Proj = db.VSSItem(sourcesafeProjectName, False)
Proj.Checkin("", fileFolderName, Flags)
Dim File As String
For Each File In My.Computer.FileSystem.GetFiles(fileFolderName)
Try
Proj.Add(fileFolderName + File)
Catch ex As Exception
If Not ex.Message.ToString.ToLower.IndexOf("access code") > 0 Then
Console.Write(ex.Message)
End If
End Try
Next
Proj = Nothing
db = Nothing
End Sub
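' Checks out the entire VSS project to fileFolderName.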
Sub DoCheckOut()
Dim db As New VSSDatabase
db.Open(sourcesafeDataBase, sourcesafeUserName, sourcesafePassword)
Dim Proj As VSSItem
Dim Flags As Integer = VSSFlags.VSSFLAG_REPREPLACE + VSSFlags.VSSFLAG_RECURSYES
Proj = db.VSSItem(sourcesafeProjectName, False)
Proj.Checkout("", fileFolderName, Flags)
Proj = Nothing
db = Nothing
End Sub
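' Reads the VSS connection settings from the app.config keys listed below.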
Sub GetSetup()
sourcesafeDataBase = ConfigurationManager.AppSettings("sourcesafeDataBase")
sourcesafeUserName = ConfigurationManager.AppSettings("sourcesafeUserName")
sourcesafePassword = ConfigurationManager.AppSettings("sourcesafePassword")
sourcesafeProjectName = ConfigurationManager.AppSettings("sourcesafeProjectName")
fileFolderName = ConfigurationManager.AppSettings("fileFolderName")
End Sub
End Module
<add key="sourcesafeDataBase" value="C:\wherever\srcsafe.ini"/>
<add key="sourcesafeUserName" value="vssautomateuserid"/>
<add key="sourcesafePassword" value="pw"/>
<add key="sourcesafeProjectName" value="$/where/you/want/it"/>
<add key="fileFolderName" value="d:\yourdirstructure"/>
If you're looking for an easy, ready-made solution, our Sql Historian system uses a background process to automatically synchronize DDL changes to TFS or SVN, transparently to anyone making changes on the database. In my experience, the big problem is keeping the code in source control in sync with what was changed on your server--and that's because usually you have to rely on people (developers, even!) to change their workflow and remember to check in their changes after they've already made them on the server. Putting that burden on a machine makes everyone's life easier.
