Python - extracting a SQL Server database schema to a file

Often I need to extract the complete schema of an existing SQL Server DB to a file. I need to cover every object -- tables, views, functions, SPs, UDDTs, triggers, etc. The purpose is so that I can then use a file-diff utility to compare that schema to a baseline reference.
Normally I use Enterprise Manager or Management Studio to script out the DB objects and then concatenate those files to make one big file in a consistent, predictable order. Is there a way to accomplish this task in Python? It would obviously take an additional package, but having looked at a few (pyodbc, SQLAlchemy, SQLObject), none of them seems well suited to this use case.

If you can connect to SQL Server and run queries from Python, then yes, it's possible, but it will take a lot of effort and testing to get it working correctly.
The idea is to use the system tables to get the details of each object and then generate DDL statements from that information. Some, if not all, of the object definitions already exist in the sys.syscomments table.
Start off by executing and examining these queries in SSMS before you start working in Python.
select * from sys.tables
select * from sys.all_columns
select * from sys.views
select * from sys.syscomments
Documentation for all of the system tables is available on MSDN.
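If you do end up driving this from Python, here is a minimal sketch of the idea, assuming pyodbc, a trusted connection, and the "ODBC Driver 17 for SQL Server" ODBC driver (server, database, and output file names are placeholders). On SQL Server 2005 and later the definitions of views, procedures, functions, and triggers are also exposed through sys.sql_modules:

import pyodbc

# Placeholder connection details; adjust the driver, server, and database to your environment.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes"
)
cursor = conn.cursor()

# Pull every scripted module (views, procedures, functions, triggers) in a stable order
# so the output file diffs predictably against a baseline.
cursor.execute("""
    SELECT s.name AS schema_name, o.name AS object_name, o.type_desc, m.definition
    FROM sys.objects AS o
    JOIN sys.schemas AS s ON s.schema_id = o.schema_id
    JOIN sys.sql_modules AS m ON m.object_id = o.object_id
    ORDER BY o.type_desc, s.name, o.name
""")

with open("schema_dump.sql", "w", encoding="utf-8") as out:
    for schema_name, object_name, type_desc, definition in cursor:
        if definition is None:
            continue  # encrypted modules have a NULL definition and are skipped here
        out.write(f"-- {type_desc}: {schema_name}.{object_name}\n")
        out.write(definition.rstrip() + "\nGO\n\n")

conn.close()

Tables have no stored definition in sys.sql_modules, so their CREATE statements would still need to be assembled from sys.tables, sys.columns, and the related catalog views, which is where most of the effort goes.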

I've used this PowerShell strategy in the past. Obviously that isn't Python, but it is a script you can write and then execute from within Python. Give this article a read, as it may be your easiest (and cheapest) solution: http://blogs.technet.com/b/heyscriptingguy/archive/2010/11/04/use-powershell-to-script-sql-database-objects.aspx
As a disclaimer, I was only exporting stored procedures, not every single object.
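If you take this route, kicking the PowerShell script off from Python is straightforward; here is a rough sketch, where the script name script-db-objects.ps1 and its -ServerInstance/-Database parameters are hypothetical placeholders for whatever you build from the article:

import subprocess

# Run the (hypothetical) PowerShell scripting script and fail loudly if it returns an error.
result = subprocess.run(
    [
        "powershell.exe",
        "-NoProfile",
        "-ExecutionPolicy", "Bypass",
        "-File", "script-db-objects.ps1",
        "-ServerInstance", "myserver",
        "-Database", "mydb",
    ],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)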

Related

How to run raw SQL to deploy database changes

We intend to create DACPAC files using SQL database projects and distribute them automatically to several environments (DEV/QA/PROD) using Azure Pipelines. I can make changes to the schema for a table, view, function, or procedure, but I'm not sure how we can update specific data in a table. I am sure this is a very common use case, but unfortunately I am having a hard time implementing it.
Any idea how I can automate creating, updating, or deleting a row in a table?
E.g.: update myTable set myColumn = 5 where someColumn = 'condition'
In your database project you can add a Post Deployment Script
Do not. Seriously. I have always found DACPAC to be far too limiting for serious operations. Look at how the SQL is generated and realize how little control you have.
The standard approach is to have deployment scripts that you generate and that make the changes in the database, plus a table in the database tracking which scripts have been executed (possibly with a checksum so you do not need to change the name to update them).
You can partially generate them with a schema compare (and then generate the change script), but deployment scripts also allow you to do things like data scrubbing and multi-step transformations that DACPAC, by design, cannot do efficiently and easily.
There are plenty of frameworks for this around. They generally belong in the category of developer tools.

Common function / stored procedures for all databases

We have a database server and it has about 10 databases.
I would like to create some functions / stored procedures which can be used in all databases.
For example, we can use sp_executesql in any database.
We have several requirements like that (getting the current academic year, financial year, etc.).
Is it doable?
As others have suggested, you could put objects into the master database, but Microsoft explicitly recommends that you should not do that. I find that solution to be rather risky anyway, because the master database is 'owned' by the system, not by you, so there are no guarantees that it will continue to behave in the same way in the future.
Instead, I would consider this to be primarily a deployment issue. There are (at least) two strategies you could use:
Deploy the objects to every database
Deploy them to one 'reference' database that is only used for shared objects and create synonyms in the other databases
The second option is perhaps the better one, because if your functions use tables (e.g. you use a calendar table to get the academic year, which is much easier than calculating it) then you would have to create the same tables in every database too. By using synonyms, you only have to maintain one set of tables.
For the actual deployment, it's straightforward to use scripting to manage the objects, because you just need a list of databases to connect to and a DDL script to run against each one. You can do that using batch files and SQLCMD (perhaps with SQLCMD variables in your .sql scripts), or drive it from PowerShell or any other language that you prefer.
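For example, the "run one DDL script against a list of databases" idea can be sketched in Python with pyodbc roughly as follows (the server name, database list, and script path are placeholders, and the script is assumed to be a single batch with no GO separators):

import pyodbc

SERVER = "myserver"                    # placeholder
DATABASES = ["Db1", "Db2", "Db3"]      # the databases that should receive the shared objects
SCRIPT_PATH = "shared_objects.sql"     # the DDL script to deploy (a single batch, no GO separators)

with open(SCRIPT_PATH, "r", encoding="utf-8") as f:
    ddl = f.read()

for database in DATABASES:
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        f"SERVER={SERVER};DATABASE={database};Trusted_Connection=yes",
        autocommit=True,
    )
    try:
        conn.execute(ddl)              # pyodbc lets you execute directly on the connection
        print(f"Deployed shared objects to {database}")
    finally:
        conn.close()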
Depending upon what the SP actually does, you may want to create the procedure in master, name it with the sp_ prefix, and mark it as a system procedure:
http://weblogs.sqlteam.com/mladenp/archive/2007/01/18/58287.aspx
A couple of options:
You can use a system stored procedure as Cade says. I've done this in the past and it works OK. One warning is that the sp_MS_marksystemobject procedure is undocumented, which means it may vanish or change without warning in future SQL Server versions. Thinking back, I believe there were other problems using this approach with functions, though.
Another approach is to use standardized procedures and functions, and roll them out across your databases using sp_MSforeachdb to run code against every database. If you need to run against only your 10 databases, you can copy the code in that procedure and modify it to check that a database matches your schema before running the code (or you can write your own version that does a similar thing).

Is there a good way to verify if a database schema is correct after an upgrade or migration?

We have customers who are upgrading from one database version to another (Oracle 9i to Oracle 10g or 11g to be specific). In one case, a customer exported the old database and imported it into the new one, but for some reason the indexes and constraints didn't get created. They may have done this on purpose to speed up the import process, but we're still looking into the reason why.
The real question is, is there a simple way that we can verify that the structure of the database is complete after the import? Is there some sort of checksum that we can do on the structure? We realize that we could do a bunch of queries to see if all the tables, indexes, aliases, views, sequences, etc. exist, but this would probably be difficult to write and maintain.
Update
Thanks for the answers suggesting commercial and/or GUI tools to use, but we really need something free that we could package with our product. It also has to be command line or script driven so our customers can run it in any environment (unix, linux, windows).
Presuming a single schema, you can do something like this: dump USER_OBJECTS into a table before the migration.
CREATE TABLE SAVED_USER_OBJECTS AS SELECT * FROM USER_OBJECTS
Then to validate after your migration
SELECT object_type, object_name FROM SAVED_USER_OBJECTS
MINUS
SELECT object_type, object_name FROM USER_OBJECTS
One issue is that if you have intentionally dropped objects between versions, you will also need to delete them from SAVED_USER_OBJECTS. Also, this will not detect cases where the wrong version of an object exists.
If you have multiple schemas, then the same thing is required for each schema OR use ALL_OBJECTS and extract/compare for the relevant user schemas.
You could also do a hash/checksum on object_type||object_name for the whole schema (save before, compare after), but the cost of that calculation wouldn't be much different from simply comparing the two tables.
If you are willing to spend some money, DBDiff is an efficient utility that does exactly what you need.
http://www.dkgas.com/oradbdiff.htm
In SQL Developer (the free Oracle utility) there is a Database Schema Differences feature.
It's worth a try.
Hope it helps.
SQL Developer - download
Roni.
I wouldn't write the check script; I'd write a program to generate the check script from a particular version of the database. Just go through the metadata, record what's there, and write it to a file, then compare the values in that file against the values in the customer's database. This won't work so well if you use system-generated names for your constraints, but it is probably enough to just verify that things are there. Dropping indexes and constraints is pretty common when migrating a database, so you might not even need to check too much; if two or three things are missing, then it's not unreasonable to assume they all are. You might also want to write a script that drops all the constraints and indexes and re-creates them, and just have your customers run that as a post-migration step. Just be sure you drop everything by name, so you don't delete any custom indexes your customer might have created.

SQL Command for generating schema text (similar to CreateTo or AlterTo)

SQL Server 2005. Is there a SQL query that will return a text field containing the same type of schema info as you would get by right-clicking a table -> Script Table As -> Create To (or Alter To) in SQL Server Management Studio?
I'm looking for a single/flat format that describes the entire table, including constraints, indices, etc.
I am aware of:
sp_help table_name
but that doesn't provide the single flat format I'm looking for. Ideally it would be in a scriptable format, such as the AlterTo result that could be executed against the server.
This is for a scheduled process that documents table schemas on a nightly basis for checking in to version control (SVN).
Not really. A table definition is a collection of columns, constraints, etc.
There is an SVN plugin called ScriptDB4SVN that may help. I've not used it personally; I'm going on hearsay.
I was searching the net again for an answer to this and came across this SO question. It doesn't capture exactly the same data as SQL Server Management Studio's Create To, but enough for my purposes (scripting the database structure for version control).
There is no such command in SQL Server. This is primarily because the scripting facility actually lives in SMO and not in SQL Server itself. There are a number of free command-line tools that can do it, which you could call via xp_cmdshell.
However, if you really want to do this from T-SQL, then you will need a script or stored procedure that enumerates all of the table's attributes: columns, column datatypes, defaults, nullability, etc., and then reassembles them into a CREATE TABLE script. This is a huge task. That's the bad news. The good news is that someone (Lowell Izaguirre) has already done this and posted it in this article (http://www.sqlservercentral.com/scripts/Miscellaneous/30730/) at SQLServerCentral.com.
Enjoy.
Not really - you can either use C# (or VB.NET) and SMO (SQL Management Objects) to script out your database objects (tables and all), or you can use SQL to get the list of columns for a table:
SELECT * FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'Your Table Name here'
But I don't know of any easy way in SQL itself to create Create/Alter scripts for database objects, sorry.
Marc
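If an approximate script is good enough (columns, types, and nullability, but no constraints, defaults, or indexes), you can get part of the way with a small Python sketch built on the INFORMATION_SCHEMA query above; this is a deliberately simplified, assumption-laden example, not a substitute for SMO or the SQLServerCentral script (connection details and the table name are placeholders):

import pyodbc

# Placeholder connection details.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes"
)

def script_table(table_name: str) -> str:
    """Build a rough CREATE TABLE statement: column names, types, lengths, and NULL-ability only."""
    rows = conn.execute(
        """
        SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH, IS_NULLABLE
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE TABLE_NAME = ?
        ORDER BY ORDINAL_POSITION
        """,
        table_name,
    ).fetchall()

    column_defs = []
    for name, data_type, max_len, is_nullable in rows:
        type_spec = data_type
        if max_len is not None:
            # CHARACTER_MAXIMUM_LENGTH is -1 for varchar(max) / nvarchar(max) / varbinary(max)
            type_spec += "(max)" if max_len == -1 else f"({max_len})"
        null_spec = "NULL" if is_nullable == "YES" else "NOT NULL"
        column_defs.append(f"    [{name}] {type_spec} {null_spec}")

    return f"CREATE TABLE [{table_name}] (\n" + ",\n".join(column_defs) + "\n);"

print(script_table("Your Table Name here"))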

How to do version control for SQL Server database?

I want to get my databases under version control.
I'll always want to have at least some data in there (as alumb mentions: user types and administrators). I'll also often want a large collection of generated test data for performance measurements.
How would I apply version control to my database?
Martin Fowler wrote my favorite article on the subject, http://martinfowler.com/articles/evodb.html. I choose not to put schema dumps under version control, as alumb and others suggest, because I want an easy way to upgrade my production database.
For a web application where I'll have a single production database instance, I use two techniques:
Database Upgrade Scripts
A sequence of database upgrade scripts that contain the DDL necessary to move the schema from version N to N+1. (These go in your version control system.) A version history table, something like
create table VersionHistory (
Version int primary key,
UpgradeStart datetime not null,
UpgradeEnd datetime
);
gets a new entry every time an upgrade script runs which corresponds to the new version.
This ensures that it's easy to see what version of the database schema exists and that database upgrade scripts are run only once. Again, these are not database dumps. Rather, each script represents the changes necessary to move from one version to the next. They're the scripts that you apply to your production database to "upgrade" it.
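A stripped-down sketch of such an upgrade runner, assuming the VersionHistory table above, pyodbc, and upgrade scripts named 1.sql, 2.sql, ... in an upgrades/ folder (the naming convention, connection details, and single-batch scripts are my assumptions, not part of the original answer):

import glob
import os
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes",  # placeholders
    autocommit=True,
)

# Highest version already applied (0 if VersionHistory is empty).
current = conn.execute("SELECT ISNULL(MAX(Version), 0) FROM VersionHistory").fetchone()[0]

# Upgrade scripts are named <version>.sql; apply every one newer than the current version, in order.
script_paths = sorted(
    glob.glob(os.path.join("upgrades", "*.sql")),
    key=lambda p: int(os.path.splitext(os.path.basename(p))[0]),
)

for path in script_paths:
    version = int(os.path.splitext(os.path.basename(path))[0])
    if version <= current:
        continue
    conn.execute(
        "INSERT INTO VersionHistory (Version, UpgradeStart) VALUES (?, GETDATE())", version
    )
    with open(path, "r", encoding="utf-8") as f:
        conn.execute(f.read())  # assumes each script is a single batch (no GO separators)
    conn.execute(
        "UPDATE VersionHistory SET UpgradeEnd = GETDATE() WHERE Version = ?", version
    )
    print(f"Applied upgrade script {version}")

conn.close()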
Developer Sandbox Synchronization
A script to backup, sanitize, and shrink a production database. Run this after each upgrade to the production DB.
A script to restore (and tweak, if necessary) the backup on a developer's workstation. Each developer runs this script after each upgrade to the production DB.
A caveat: My automated tests run against a schema-correct but empty database, so this advice will not perfectly suit your needs.
Red Gate's SQL Compare product not only allows you to do object-level comparisons and generate change scripts from them, it also allows you to export your database objects into a folder hierarchy organized by object type, with one [objectname].sql creation script per object in these directories. The object-type hierarchy looks like this:
\Functions
\Security
\Security\Roles
\Security\Schemas
\Security\Users
\Stored Procedures
\Tables
If you dump your scripts to the same root directory after you make changes, you can use this to update your SVN repo, and keep a running history of each object individually.
This is one of the "hard problems" surrounding development. As far as I know there are no perfect solutions.
If you only need to store the database structure and not the data, you can export the database as SQL scripts (in Enterprise Manager: right-click the database -> Generate SQL Script; I recommend enabling "create one file per object" on the Options tab). You can then commit these text files to SVN and make use of SVN's diff and logging functions.
I have this tied together with a batch script that takes a couple of parameters and sets up the database. I also added some additional queries that enter default data like user types and the admin user. (If you want more info on this, post something and I can put the script somewhere accessible.)
If you need to keep all of the data as well, I recommend keeping a back up of the database and using Redgate (http://www.red-gate.com/) products to do the comparisons. They don't come cheap, but they are worth every penny.
First, you must choose the version control system that is right for you:
Centralized Version Control system - a standard system where users check out/check in before/after they work on files, and the files are being kept in a single central server
Distributed Version Control system - a system where the repository is being cloned, and each clone is actually the full backup of the repository, so if any server crashes, then any cloned repository can be used to restore it
After choosing the right system for your needs, you'll need to setup the repository which is the core of every version control system
All this is explained in the following article: http://solutioncenter.apexsql.com/sql-server-source-control-part-i-understanding-source-control-basics/
After setting up a repository, and in case of a central version control system a working folder, you can read this article. It shows how to setup source control in a development environment using:
SQL Server Management Studio via the MSSCCI provider,
Visual Studio and SQL Server Data Tools
A 3rd party tool ApexSQL Source Control
Here at Red Gate we offer a tool, SQL Source Control, which uses SQL Compare technology to link your database with a TFS or SVN repository. This tool integrates into SSMS and lets you work as you would normally, except it now lets you commit the objects.
For a migrations-based approach (more suited for automated deployments), we offer SQL Change Automation (formerly called ReadyRoll), which creates and manages a set of incremental scripts as a Visual Studio project.
In SQL Source Control it is possible to specify static data tables. These are stored in source control as INSERT statements.
If you're talking about test data, we'd recommend that you either generate test data with a tool or via a post-deployment script you define, or you simply restore a production backup to the dev environment.
You might want to look at Liquibase (http://www.liquibase.org/). Even if you don't use the tool itself it handles the concepts of database change management or refactoring pretty well.
+1 for everyone who's recommended the RedGate tools, with an additional recommendation and a caveat.
SqlCompare also has a decently documented API: so you can, for instance, write a console app which syncs your source controlled scripts folder with a CI integration testing database on checkin, so that when someone checks in a change to the schema from their scripts folder it's automatically deployed along with the matching application code change. This helps close the gap with developers who are forgetful about propagating changes in their local db up to a shared development DB (about half of us, I think :) ).
A caveat is that with a scripted solution or otherwise, the RedGate tools are sufficiently smooth that it's easy to forget about SQL realities underlying the abstraction. If you rename all the columns in a table, SqlCompare has no way to map the old columns to the new columns and will drop all the data in the table. It will generate warnings but I've seen people click past that. There's a general point here worth making, I think, that you can only automate DB versioning and upgrade so far - the abstractions are very leaky.
With VS 2010, use the Database project:
Script out your database
Make changes to the scripts or directly on your DB server
Sync up using Data > Schema Compare
It makes a perfect DB versioning solution, and makes syncing DBs a breeze.
We use DBGhost to manage our SQL database. You put the scripts to build a new database into your version control, and it'll either build a new database or upgrade any existing database to the schema in version control. That way you don't have to worry about creating change scripts (although you can still do that if, for example, you want to change the data type of a column and need to convert data).
It is a good approach to save database creation scripts in version control along with change scripts, so that you can upgrade any database you have. You might also want to save schemas for different versions so that you can create a full database without having to apply all the change scripts. Handling the scripts should be automated so that you don't have to do manual work.
I think it's important to have a separate database for every developer and not use a shared database. That way the developers can create test cases and development phases independently of other developers.
The automating tool should have means for handling database metadata, which tells what databases are in what state of development and which tables contain version controllable data and so on.
You could also look at a migrations solution. These allow you to specify your database schema in C# code, and roll your database version up and down using MSBuild.
I'm currently using DbUp, and it's been working well.
You didn't mention any specifics about your target environment or constraints, so this may not be entirely applicable... but if you're looking for a way to effectively track an evolving DB schema and aren't averse to the idea of using Ruby, ActiveRecord's migrations are right up your alley.
Migrations programmatically define database transformations using a Ruby DSL; each transformation can be applied or (usually) rolled back, allowing you to jump to a different version of your DB schema at any given point in time. The file defining these transformations can be checked into version control like any other piece of source code.
Because migrations are a part of ActiveRecord, they typically find use in full-stack Rails apps; however, you can use ActiveRecord independent of Rails with minimal effort. See here for a more detailed treatment of using AR's migrations outside of Rails.
Every database should be under source-code control. What is lacking is a tool to automatically script all database objects, and "configuration data", to a file, which can then be added to any source control system. If you are using SQL Server, then my solution is here: http://dbsourcetools.codeplex.com/ . Have fun.
- Nathan.
It's simple.
When the base project is ready, create a full database script. Commit this script to SVN; it is the first version.
After that, all developers create change scripts (ALTERs, new tables, sprocs, etc.).
Whenever you need the current version, execute all of the new change scripts.
When the app is released to production, you go back to step 1 (but then it will be the next version, of course).
NAnt will help you execute those change scripts. :)
And remember: everything works fine when there is discipline. Every time a database change is committed, the corresponding code changes are committed too.
Because our app has to work across multiple RDBMSs, we store our schema definition in version control using the database-neutral Torque format (XML). We also version-control the reference data for our database in XML format as follows (where "Relationship" is one of the reference tables):
<Relationship RelationshipID="1" InternalName="Manager"/>
<Relationship RelationshipID="2" InternalName="Delegate"/>
etc.
We then use home-grown tools to generate the schema upgrade and reference data upgrade scripts that are required to go from version X of the database to version X + 1.
If you have a small database and you want to version the entire thing, this batch script might help. It detaches, compresses, and checks an MSSQL database MDF file into Subversion.
If you mostly want to version your schema and just have a small amount of reference data, you can possibly use SubSonic Migrations to handle that. The benefit there is that you can easily migrate up or down to any specific version.
We don't store the database schema; we store the changes to the database. We store the schema changes so that we can build a change script for any version of the database and apply it to our customers' databases. I wrote a database utility app that gets distributed with our main application that can read that script and know which updates need to be applied. It also has enough smarts to refresh views and stored procedures as needed.
To make the dump to a source code control system a little bit faster, you can see which objects have changed since last time by using the version information in sysobjects.
Setup: Create a table in each database you want to check incrementally to hold the version information from the last time you checked it (empty on the first run). Clear this table if you want to re-scan your whole data structure.
IF ISNULL(OBJECT_ID('last_run_sysversions'), 0) <> 0 DROP TABLE last_run_sysversions
CREATE TABLE last_run_sysversions (
name varchar(128),
id int, base_schema_ver int,
schema_ver int,
type char(2)
)
Normal running mode: You can take the results from this SQL, generate SQL scripts for just the objects you're interested in, and put them into a source control system of your choice.
IF ISNULL(OBJECT_ID('tempdb.dbo.#tmp'), 0) <> 0 DROP TABLE #tmp
CREATE TABLE #tmp (
name varchar(128),
id int, base_schema_ver int,
schema_ver int,
type char(2)
)
SET NOCOUNT ON
-- Insert the values from the end of the last run into #tmp
INSERT #tmp (name, id, base_schema_ver, schema_ver, type)
SELECT name, id, base_schema_ver, schema_ver, type FROM last_run_sysversions
DELETE last_run_sysversions
INSERT last_run_sysversions (name, id, base_schema_ver, schema_ver, type)
SELECT name, id, base_schema_ver, schema_ver, type FROM sysobjects
-- This next bit lists all differences to scripts.
SET NOCOUNT OFF
--Renamed.
SELECT 'renamed' AS ChangeType, t.name, o.name AS extra_info, 1 AS Priority
FROM sysobjects o INNER JOIN #tmp t ON o.id = t.id
WHERE o.name <> t.name /*COLLATE*/
AND o.type IN ('TR', 'P' ,'U' ,'V')
UNION
--Changed (using alter)
SELECT 'changed' AS ChangeType, o.name /*COLLATE*/,
'altered' AS extra_info, 2 AS Priority
FROM sysobjects o INNER JOIN #tmp t ON o.id = t.id
WHERE (
o.base_schema_ver <> t.base_schema_ver
OR o.schema_ver <> t.schema_ver
)
AND o.type IN ('TR', 'P' ,'U' ,'V')
AND o.name NOT IN ( SELECT oi.name
FROM sysobjects oi INNER JOIN #tmp ti ON oi.id = ti.id
WHERE oi.name <> ti.name /*COLLATE*/
AND oi.type IN ('TR', 'P' ,'U' ,'V'))
UNION
--Changed (actually dropped and recreated [but not renamed])
SELECT 'changed' AS ChangeType, t.name, 'dropped' AS extra_info, 2 AS Priority
FROM #tmp t
WHERE t.name IN ( SELECT ti.name /*COLLATE*/ FROM #tmp ti
WHERE NOT EXISTS (SELECT * FROM sysobjects oi
WHERE oi.id = ti.id))
AND t.name IN ( SELECT oi.name /*COLLATE*/ FROM sysobjects oi
WHERE NOT EXISTS (SELECT * FROM #tmp ti
WHERE oi.id = ti.id)
AND oi.type IN ('TR', 'P' ,'U' ,'V'))
UNION
--Deleted
SELECT 'deleted' AS ChangeType, t.name, '' AS extra_info, 0 AS Priority
FROM #tmp t
WHERE NOT EXISTS (SELECT * FROM sysobjects o
WHERE o.id = t.id)
AND t.name NOT IN ( SELECT oi.name /*COLLATE*/ FROM sysobjects oi
WHERE NOT EXISTS (SELECT * FROM #tmp ti
WHERE oi.id = ti.id)
AND oi.type IN ('TR', 'P' ,'U' ,'V'))
UNION
--Added
SELECT 'added' AS ChangeType, o.name /*COLLATE*/, '' AS extra_info, 4 AS Priority
FROM sysobjects o
WHERE NOT EXISTS (SELECT * FROM #tmp t
WHERE o.id = t.id)
AND o.type IN ('TR', 'P' ,'U' ,'V')
AND o.name NOT IN ( SELECT ti.name /*COLLATE*/ FROM #tmp ti
WHERE NOT EXISTS (SELECT * FROM sysobjects oi
WHERE oi.id = ti.id))
ORDER BY Priority ASC
Note: If you use a non-standard collation in any of your databases, you will need to replace /*COLLATE*/ with your database collation, e.g. COLLATE Latin1_General_CI_AI.
I wrote this app a while ago, http://sqlschemasourcectrl.codeplex.com/, which will scan your MSFT SQL DBs as often as you want and automatically dump your objects (tables, views, procs, functions, SQL settings) into SVN. It works like a charm. I use it with Unfuddle (which allows me to get alerts on check-ins).
The typical solution is to dump the database as necessary and backup those files.
Depending on your development platform, there may be opensource plugins available. Rolling your own code to do it is usually fairly trivial.
Note: You may want to backup the database dump instead of putting it into version control. The files can get huge fast in version control, and cause your entire source control system to become slow (I'm recalling a CVS horror story at the moment).
It's a very old question; however, many people are still trying to solve it even now. All they have to do is research Visual Studio Database Projects. Without them, any database development looks very feeble. From code organization to deployment to versioning, they simplify everything.
We needed to version our SQL database after we migrated to an x64 platform and our old version broke with the migration. We wrote a C# application which used SQLDMO to map out all of the SQL objects to a folder:
Root
  ServerName
    DatabaseName
      Schema Objects
        Database Triggers*
          .ddltrigger.sql
        Functions
          ..function.sql
        Security
          Roles
            Application Roles
              .approle.sql
            Database Roles
              .role.sql
          Schemas*
            .schema.sql
          Users
            .user.sql
        Storage
          Full Text Catalogs*
            .fulltext.sql
        Stored Procedures
          ..proc.sql
        Synonyms*
          .synonym.sql
        Tables
          ..table.sql
          Constraints
            ...chkconst.sql
            ...defconst.sql
          Indexes
            ...index.sql
          Keys
            ...fkey.sql
            ...pkey.sql
            ...ukey.sql
          Triggers
            ...trigger.sql
        Types
          User-defined Data Types
            ..uddt.sql
          XML Schema Collections*
            ..xmlschema.sql
        Views
          ..view.sql
          Indexes
            ...index.sql
          Triggers
            ...trigger.sql
The application would then compare the newly written version with the version stored in SVN, and if there were differences it would update SVN.
We determined that running the process once a night was sufficient since we did not make that many changes to SQL. It allows us to track changes to all the objects we care about plus it allows us to rebuild our full schema in the event of a serious problem.
I agree with ESV's answer, and for that exact reason I started a little project a while back to help maintain database updates in a very simple file which can then be maintained alongside our source code. It allows easy updates for developers as well as in UAT and production. The tool works on SQL Server and MySQL.
Some project features:
Allows schema changes
Allows value tree population
Allows separate test data inserts for eg. UAT
Allows option for rollback (not automated)
Maintains support for SQL server and MySQL
Has the ability to import your existing database into version control with one simple command (SQL server only ... still working on MySQL)
Please check out the code for some more information.
We just started using Team Foundation Server. If your database is medium-sized, then Visual Studio has some nice project integration with built-in schema compare, data compare, database refactoring tools, a database testing framework, and even data generation tools.
But, that model doesn't fit very large or third party databases (that encrypt objects) very well. So, what we've done is to store only our customized objects. Visual Studio / Team foundation server works very well for that.
TFS Database chief arch. blog
MS TFS site
I'm also storing a version number in the database via the extended-properties family of procedures. My application has scripts for each version step (i.e. move from 1.1 to 1.2). When deployed, it looks at the current version and then runs the scripts one by one until it reaches the last app version. There is no script that creates the 'final' version directly; even a deploy on a clean DB does the deployment via the series of upgrade steps.
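For reference, a hedged sketch of reading and bumping such a version marker with the extended-property procedures from Python/pyodbc (the property name AppSchemaVersion, the connection details, and the target version are placeholders; the per-step upgrade scripts themselves are omitted):

import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes",  # placeholders
    autocommit=True,
)

# Read the current database-level version property (None if it has never been set).
row = conn.execute(
    "SELECT CAST(value AS nvarchar(20)) "
    "FROM sys.fn_listextendedproperty(N'AppSchemaVersion', "
    "default, default, default, default, default, default)"
).fetchone()
current_version = row[0] if row else None
print("Current schema version:", current_version)

# ... run the per-version upgrade scripts here, one step at a time ...

# Record the new version on the database itself.
if current_version is None:
    conn.execute("EXEC sys.sp_addextendedproperty @name = N'AppSchemaVersion', @value = N'1.2'")
else:
    conn.execute("EXEC sys.sp_updateextendedproperty @name = N'AppSchemaVersion', @value = N'1.2'")

conn.close()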
What I'd like to add is that two days ago I saw a presentation on the MS campus about the new and upcoming VS DB edition. The presentation was focused specifically on this topic and I was blown away. You should definitely check it out; the new facilities are focused on keeping the schema definition in T-SQL scripts (CREATEs), a runtime delta engine that compares the deployed schema with the defined schema and applies the delta ALTERs, and integration with source control, up to and including MSBuild continuous integration for automated build drops. The drop will contain a new file type, the .dbschema file, that can be taken to the deployment site, where a command-line tool can compute the actual 'deltas' and run the deployment.
I have a blog entry on this topic with links to the VSDE downloads, you should check them out: http://rusanu.com/2009/05/15/version-control-and-your-database/
A while ago I found a VB .bas module that used DMO and VSS objects to script an entire DB off and into VSS. I turned it into a VBScript and posted it here. You can easily take out the VSS calls and use the DMO stuff to generate all the scripts, and then call SVN from the same batch file that calls the VBScript to check them in.
In my experience the solution is twofold:
You need to handle changes to the development database that are done by multiple developers during development.
You need to handle database upgrades in customers sites.
In order to handle #1 you'll need a strong database diff/merge tool. The best tool should be able to perform automatic merge as much as possible while allowing you to resolve unhandled conflicts manually.
The perfect tool should handle merge operations by using a 3-way merge algorithm that takes into account the changes that were made in the THEIRS database and the MINE database, relative to the BASE database.
I wrote a commercial tool that provides manual merge support for SQLite databases, and I'm currently adding support for a 3-way merge algorithm for SQLite. Check it out at http://www.sqlitecompare.com
In order to handle #2 you will need an upgrade framework in place.
The basic idea is to develop an automatic upgrade framework that knows how to upgrade from an existing SQL schema to the newer SQL schema and can build an upgrade path for every existing DB installation.
Check out my article on the subject in http://www.codeproject.com/KB/database/sqlite_upgrade.aspx to get a general idea of what I'm talking about.
Good Luck
Liron Levi
Check out DBGhost (http://www.innovartis.co.uk/). I have used it in an automated fashion for 2 years now and it works great. It allows our DB builds to happen much like a Java or C build happens, except for the database. You know what I mean.
I would suggest using comparison tools to improvise a version control system for your database. Two good alternatives are xSQL Schema Compare and xSQL Data Compare.
Now, if your goal is to have only the database's schema under version control, you can simply use xSQL Schema Compare to generate xSQL snapshots of the schema and add these files to your version control. Then, to revert or update to a specific version, just compare the current version of the database with the snapshot for the destination version.
Also, if you want to have the data under version control as well, you can use xSQL Data Compare to generate change scripts for your database and add the .sql files to your version control. You could then execute these scripts to revert or update to any version you want. Keep in mind that for the 'revert' functionality you need to generate change scripts that, when executed, will make version 3 the same as version 2, and for the 'update' functionality you need to generate change scripts that do the opposite.
Lastly, with some basic batch programming skills you can automate the whole process using the command-line versions of xSQL Schema Compare and xSQL Data Compare.
Disclaimer: I'm affiliated to xSQL.
An alternative to version controlling your database is to use a version-controlled database, of which there are now several.
https://www.dolthub.com/blog/2021-09-17-database-version-control/
These products don't apply version control on top of another type of database -- they are their own database engines that support version control operations. So you need to migrate to them or start building on them in the first place.
I write one of them, DoltDB, which combines the interfaces of MySQL and Git. Check it out here:
https://github.com/dolthub/dolt
