We use SQL Server 2000/2005 and Vault or SVN on most of our projects. I haven't found a decent solution for capturing database schema/proc changes in either source control system.
Our current solution is quite cumbersome and difficult to enforce (script out the object you change and commit it to the repository).
We have a lot of ideas of how to tackle this problem with some custom development, but I'd rather install an existing tool (paid tools are fine).
So: how do you track your database code changes? Do you have any recommended tools?
Edit:
Thanks for all the suggestions. Due to time constraints, I'd rather not roll my own here. And most of the suggestions have the flaw that they require the dev to follow some procedure.
Instead, an ideal solution would monitor the SQL database for changes and commit any detected changes to SCM. For example, if SQL Server had an add-on that could record any DDL change along with the user that made the change, then commit the script of that object to SCM, I'd be thrilled.
We talked internally about two systems:
1. In SQL 2005, use object permissions to restrict you from altering an object until you did a "checkout". Then, the checkin procedure would script it into the SCM.
2. Run a scheduled job to detect any changes and commit them (anonymously) to SCM.
It'd be nice if I could skip the user-action part and have the system handle all this automatically.
Use Visual Studio Database Edition to script out your database. It works like a charm, and you can use any source control system (best of all one with VS plugins). The tool also has a number of other useful features; check them out in this blog post:
http://www.vitalygorn.com/blog/post/2008/01/Handling-Database-easily-with-Visual-Studio-2008.aspx
or check out MSDN for the official documentation
Tracking database changes directly from SSMS is possible using various 3rd party tools. ApexSQL Source Control automatically scripts any database object that is included in versioning. Commits cannot be automatically performed by the tool. Instead, the user needs to choose which changes will be committed.
When getting changes from a repository, ApexSQL Source Control is aware of SQL database referential integrity. It creates a synchronization script that includes all dependent objects, wrapped in a transaction: either all changes are applied if no error is encountered, or none of the selected changes is applied. In either case, database integrity remains unaffected.
I have to say I think a Visual Studio database project is also a reasonable solution to the source control dilemma. If it's set up correctly you can run the scripts against the database from the IDE. If your script is old, get the latest and run it against the DB. Keep a script that recreates all the objects as well if you need it; new objects must be added to this script by hand, but only once.
I like every table, proc, and function to be in its own file.
One poor man's solution would be to add a pre-commit hook script that dumps out the latest db schema into a file and have that file committed to your SVN repository along with your code. Then, you can diff the db schema files from any revision.
I just commit the SQL ALTER statement in addition to the complete SQL create-database statement.
Rolling your own from scratch would not be very doable, but if you use a sql comparison tool like Redgate SQL Compare SDK to generate your change files for you it would not take very long to half-roll what you want and then just check those files into source control. I rolled something similar for myself to update changes from our development systems to our live systems in just a few hours.
In our environment, we never change the DB manually: all changes are done by scripts at release time, and the scripts are kept in the version control system. One important part of this procedure is to be sure that all scripts can be run again against the same DB (i.e. the scripts are idempotent) without loss of data. For example, if you add a column, make sure that you do nothing if the column is already there.
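For example, a minimal sketch of such an idempotent column-add in T-SQL (the table and column names here are made up for illustration):

    -- Idempotent column-add: do nothing if the column is already there.
    -- "Customers" and "PhoneNumber" are hypothetical names.
    IF NOT EXISTS (
        SELECT 1 FROM INFORMATION_SCHEMA.COLUMNS
        WHERE TABLE_SCHEMA = 'dbo'
          AND TABLE_NAME   = 'Customers'
          AND COLUMN_NAME  = 'PhoneNumber'
    )
    BEGIN
        ALTER TABLE dbo.Customers ADD PhoneNumber varchar(20) NULL;
    END

Running this once or twenty times leaves the database in the same state, which is exactly the property you want for release scripts.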
Your comment about "suggestions have the flaw that they require the dev to follow some procedure" is really a tell-tale. It's not a flaw, it's a feature. Version control helps developers in following procedures and makes the procedures less painful. If you don't want to follow procedures, you don't need version control.
In SQL 2000, generate each object into its own file, then check them all into your source control. Let your source control handle the change history.
In SQL 2005, you'll need to write a bit of code to generate all objects into separate files.
In one project I arranged, by careful attention in the design, that all the important data in the database can be automatically recreated from external places. At startup the application creates the database if it is missing and populates it from external data sources, using a schema in the application source code (and hence versioned with the application). The database store name (a SQLite filename, although most database managers allow multiple databases) includes a schema version, and we increase the schema version whenever we commit a schema change. This means that when we restart the application as a new version with a different schema, a new database store is automatically created and populated. Should we have to revert a deployment to an old schema, the new run of the old version will be using the old database store, so we get fast downgrades in the event of trouble.
Essentially, the database acts like a traditional application heap, with the advantages of persistence, transaction safety, static typing (handy since we use Python) and uniqueness constraints. However, we don't worry at all about deleting the database and starting over, and people know that if they try some manual hack in the database then it will get reverted on the next deployment, much like hacks of a process state will get reverted on the next restart.
We don't need any migration scripts since we just switch database filename and restart the application and it rebuilds itself. It helps that the application instances are sharded to use one database per client. It also reduces the need for database backups.
This approach won't work if your database build from the external sources takes longer than you will allow the application to remain down.
If you are using .Net and like the approach Rails takes with Migrations, then I would recommend Migrator.Net.
I found a nice tutorial that walks through setting it up in Visual Studio. He also provides a sample project to reference.
We developed a custom tool that updates our databases. The database schema is stored in a database-neutral XML file which is then read and processed by the tool. The schema gets stored in SVN, and we add appropriate commentary to show what was changed. It works pretty well for us.
While this kind of solution is definitely overkill for most projects, it certainly makes life easier at times.
Our DBAs periodically check prod against what is in SVN and delete any objects not under source control. It only takes once before the developers never forget to put something in source control again.
We also do not allow anyone to move objects to prod without a script; since our devs do not have prod rights, this is easy to enforce.
Tracking every change (insert, update, and delete) would create a lot of overhead for SVN.
It is better to track only the DDL changes (ALTER, DROP, CREATE) which change the schema.
You can do this schema tracking easily by creating a table and a DDL trigger that inserts data into that table, as sketched below.
Any time you want, you can get the change status by querying that table.
There are lots of examples here and here.
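A minimal sketch of that table-plus-trigger approach, assuming SQL Server 2005 or later (DDL triggers don't exist in SQL 2000); all table and trigger names are hypothetical:

    -- Hypothetical log table for schema changes.
    CREATE TABLE dbo.SchemaChangeLog (
        LogId      int IDENTITY(1,1) PRIMARY KEY,
        EventType  nvarchar(100),
        ObjectName nvarchar(256),
        LoginName  nvarchar(256),
        EventDate  datetime NOT NULL DEFAULT GETDATE(),
        Command    nvarchar(max)
    );
    GO

    -- Database-scoped DDL trigger: fires on schema changes and logs
    -- who did what, when, and the exact statement, via EVENTDATA().
    CREATE TRIGGER trg_LogSchemaChanges
    ON DATABASE
    FOR DDL_DATABASE_LEVEL_EVENTS
    AS
    BEGIN
        DECLARE @e xml;
        SET @e = EVENTDATA();
        INSERT INTO dbo.SchemaChangeLog (EventType, ObjectName, LoginName, Command)
        SELECT
            @e.value('(/EVENT_INSTANCE/EventType)[1]',  'nvarchar(100)'),
            @e.value('(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(256)'),
            @e.value('(/EVENT_INSTANCE/LoginName)[1]',  'nvarchar(256)'),
            @e.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)');
    END

Querying dbo.SchemaChangeLog then shows who changed what and when, and a scheduled job could script and commit the affected objects from there.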
I need to create scripts for creating or updating a database. The scripts are created from my test Database or from my source control.
The script needs to upgrade a database from any version of my application to the current version so it needs to be agnostic to what already exists in the database.
I do not have access to the databases that will be upgraded.
e.g.
If a table does not exist the script should create it.
If the table exists the script should check if all the columns exist (And check their types).
I wrote a lot of this checking code in C#, as in: I have a SQL CREATE TABLE script, and the C# code checks whether the table (and columns) exist before running the script.
My code is not production ready and I wanted to know what ready-made solutions are out there.
I have no experience with frameworks that can do this.
Such an inquiry is off-topic for SO anyway.
But depending on your demands, it may not be too hard to implement something yourself.
One straightforward approach would be to work with incremental schema changes; basically just a chronological list of SQL scripts.
Never change or delete an existing script (unless something really bad is in there).
Instead, just keep adding upgrade scripts for every new version.
Yes, 15 years later you will have accumulated 5,000 scripts.
Trust me, it will be the least of your problems.
To create a new database, just execute the full chain of scripts in chronological order.
For upgrades, there are two possibilities.
Keep a progress list in every database.
That is basically just a table containing the names of all scripts that have already been executed there.
To upgrade, just execute every script that is not in that list already. Add them to the list as you go.
Note: if necessary, this can be done with one or more auto-generated, deployable, static T-SQL scripts.
Make every script itself responsible for recognizing whether or not it needs to do anything.
For example, a 'create table' script checks if the table already exists.
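A minimal sketch of both options in T-SQL (the table, script, and object names are all hypothetical):

    -- Option 1: a progress list of scripts that have already run.
    CREATE TABLE dbo.ScriptHistory (
        ScriptName varchar(255) NOT NULL PRIMARY KEY,
        ExecutedOn datetime NOT NULL DEFAULT GETDATE()
    );
    GO

    -- The (possibly auto-generated) deployment script wraps each
    -- upgrade in a guard and records it once executed:
    IF NOT EXISTS (SELECT 1 FROM dbo.ScriptHistory
                   WHERE ScriptName = '0042_add_orders_table.sql')
    BEGIN
        -- ...body of 0042_add_orders_table.sql goes here...
        INSERT INTO dbo.ScriptHistory (ScriptName)
        VALUES ('0042_add_orders_table.sql');
    END
    GO

    -- Option 2: the script itself checks whether it has work to do.
    IF OBJECT_ID('dbo.Orders', 'U') IS NULL
    BEGIN
        CREATE TABLE dbo.Orders (OrderId int NOT NULL PRIMARY KEY);
    END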
I would recommend a combination of the two:
option #1 for new versions (as it scales a lot better than #2)
option #2 for existing versions (as it may be hard to introduce #1 retroactively on legacy production databases)
Depending on how much effort you will put in your upgrade scripts, the 'option #2' part may be able to fix some schema issues in any given database.
In other words, make sure you start off with scripts that are capable of bringing messy legacy databases back in line with the schema dictated by your application.
Future scripts (the 'option #1' part) have less to worry about; they should trust the work done by those early scripts.
No, this approach is not resistant against outside interference, like a rogue sysadmin.
It will not magically fix a messed-up schema.
It's an illusion to think you can do that automatically, without somebody analyzing the problem.
Even if you have a tool that will recreate every missing column and table, that will not bring back the data that used to be in there.
And if you are not interested in recovering data, then you might as well discard (part of) the database and start from scratch.
On the other hand, I would recommend to make the upgrade scripts 'idempotent'.
Running it once or running it twice should make no difference.
For example, use DROP TABLE IF EXISTS rather than DROP TABLE; the latter will throw an exception when executed again.
That way, in desperate times you may still be able to repair a database semi-automatically, simply by re-running everything.
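Note that DROP TABLE IF EXISTS only exists on SQL Server 2016 and later; on older versions the same idempotency comes from an OBJECT_ID guard (table name hypothetical):

    -- SQL Server 2016+:
    DROP TABLE IF EXISTS dbo.StagingImport;

    -- Equivalent idempotent guard for older SQL Server versions:
    IF OBJECT_ID('dbo.StagingImport', 'U') IS NOT NULL
        DROP TABLE dbo.StagingImport;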
If you are talking about schema state, you can look at state-based deployment tools instead of change-based ones (not the official terminology).
You should look at these two tools
SQL Server Data Tools (dacpac / data-tier applications), which is practically free
Redgate has an entire toolset for this (https://www.red-gate.com/solutions/need/automate), which is licensed
The one thing to keep in mind with state-based deployments is that you don't control how the database gets from one state to another. With SSDT, for example, a column rename = drop and recreate that column; the same goes for a table rename.
In their defence they do have some protections and do tell you what is about to happen.
EDIT (Updating to address comment below)
It should not be a problem that you can't access the TargetDb while in development. You can still use the above tools, provided you can use the (Dacpac/Redgate) tooling when you are deploying to the TargetDb.
If you are hoping to have a dynamic T-SQL script that can update a target database in an unknown state, then that is a recipe for failure/disaster. I do have some suggestions at the end for dealing with this.
The way I see it working is:
Do your development using Dacpac/Redgate
Build your artefacts as a Dacpac / Redgate package
Copy the artefact to the deployment server along with the tools
When doing deployments, use the tools (Dacpac PowerShell) or Redgate manually
If your only choice is a T-SQL script, then the only option is extensive, defensive coding covering all possibilities.
Every object must have an existence check
Every property must have a state check
Every object/property must have a roll forward / roll backward script.
For example, to sync a table (see the sketch after this list):
A script to check the table exists and, if not, create it
A script to check each property of the table is in the correct state
Check all columns and their data types, with a script to update them to match
Check defaults
Check indexes, partitioning, etc.
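A hedged fragment of what that defensive coding can look like for a single table and column (all names and types here are illustrative):

    -- 1. The table exists; if not, create it.
    IF OBJECT_ID('dbo.Customer', 'U') IS NULL
        CREATE TABLE dbo.Customer (CustomerId int NOT NULL PRIMARY KEY);

    -- 2. A column exists with the expected type; add or alter as needed.
    IF NOT EXISTS (SELECT 1 FROM sys.columns
                   WHERE object_id = OBJECT_ID('dbo.Customer')
                     AND name = 'Email')
        ALTER TABLE dbo.Customer ADD Email nvarchar(256) NULL;
    ELSE IF EXISTS (SELECT 1
                    FROM sys.columns c
                    JOIN sys.types t ON t.user_type_id = c.user_type_id
                    WHERE c.object_id = OBJECT_ID('dbo.Customer')
                      AND c.name = 'Email'
                      AND (t.name <> 'nvarchar' OR c.max_length <> 512))
        ALTER TABLE dbo.Customer ALTER COLUMN Email nvarchar(256) NULL;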
Even with this, you might not be able to handle every scenario.
The work you are trying to do requires you start using a standard change control process.
Given the risk of data loss, issues related to creating columns in a specific sequence, and the potential for column definitions to change, I recommend you look at defining a baseline version which you will manually have to upgrade each system to.
You can roll your own code and use a schema version table, or use any one of the tools available such as Redgate SQL Source Control, Visual Studio database projects, DbUp, or others.
I do not believe any tool will bring you from 0-1, however, once you baseline, any one of these tools will greatly facilitate your workflow.
Start with this article Get Your Database Under Version Control
Here are some tools that can help you:
Octopus Schema Migrations
Flyway By Redgate
Idera Database Change Management
SQL Server Data Tools
In the scope of responsible programming and versioning, I would like to start to version my database changes, especially since I am developing on my database instance and then moving it to production. I haven't found anything that truly makes sense to me on how to do this. I am using Visual Studio 2010 Pro as my IDE. Is there a document that makes this process simple and able to detect changes to the database with relative ease? Or what should I change in my workflow to make this easier?
One way that I've successfully done this sort of thing in the past, is via Sql Source Control. Visual Studio does not offer this functionality for you.
Alternatively, you can use SSMS to generate the database scripts for you and save them as a file; then you can check in the script. You would choose whether you generate the whole DB script in one file or whether you do it on an object-by-object basis. The syncing part will have to be done by you by executing your scripts in production. In conclusion, a total nightmare.
Redgate also offers Sql Compare, which is great for syncing databases. Take a look at their products if you or your company can afford them.
We use our own DB solution in-house which brings all the tools required for proper DB versioning. While I realize that it may not be a perfect solution for everyone, I invite you to have a look at it (it is open-source): bsn ModuleStore
The versioning aspect is as follows: the tool can script out the SQL semi-automatically, and it does reformat the source code to be in an uniform format. The files will therefore always be identical for the same source, no matter of when and by whom something has been scripted; this therefore works nicely with non-locking source control systems (especially SVN, Git or Mercurial).
The reformat puts all statements in the same form (e.g. optional keywords such as AS, INNER, OUTER etc. are dealt with), scripts everything to the "dbo" schema (even if it was in a different one), puts all identifiers into square brackets ([something]), uppercases all reserved words, does the indentation, etc.
Besides versioning, the runtime part of the tool can diff the running DB against the CREATE scripts (DB source code) and apply updates automatically for all non-destructive changes (e.g. updating indexes, constraints, views, stored procedures, triggers, custom types, new tables, etc.). Destructive changes have to be scripted manually (table changes, which then usually require data transformations). The runtime will make sure that all updates are performed in a transaction and roll back if the resulting DB doesn't match the CREATE scripts, so you get the safety of knowing that the DB is exactly on the version required by the application, even if it has been tampered with manually.
Also, multiple "modules" can be used in a single database. Each module is stored as a schema and independent of other schemas, thereby making it possible to add or remove modules from one single DB, and avoiding the need to create multiple databases for different parts of the application. Also, the use of schemas to do this makes sure that there are no name collisions.
It may be worth noting that the toolset has no dependency on SMO; it is autonomous.
Save your database scripts in SVN. Here is the reference: How to use SVN Tortoise
OR
Save your database scripts in VSS. Here is the reference: What is VSS? How can we use that?
In both cases you can keep track of the changes done, so that in future you can check the history, which is saved in the form of versions.
You can also use Red Gate products.
EDIT
How do you pull out what has changed?
Use comparison feature to check the changes made in the previous versions.
How do I apply the changes to the live database server?
Download the latest file from server.
I hope you are not using DROP statements for tables in your consolidated script, as that will delete all records from the table.
DROP statements are fine for stored procs, views, functions, etc.
Please note that you have to run the complete latest database script file on the production server with the below-mentioned action plan:
1. Remove DROP statements for schema DDL.
2. Add DROP/CREATE statements for stored procs/views (the usual pattern is sketched below).
3. Include ALTER statements for schema DDL changes.
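For step 2, the drop/create pattern usually looks like this (all object and column names here are hypothetical):

    -- Drop the procedure if it exists, then recreate it.
    IF OBJECT_ID('dbo.usp_GetCustomer', 'P') IS NOT NULL
        DROP PROCEDURE dbo.usp_GetCustomer;
    GO

    CREATE PROCEDURE dbo.usp_GetCustomer
        @CustomerId int
    AS
    BEGIN
        SELECT CustomerId, Name
        FROM dbo.Customer
        WHERE CustomerId = @CustomerId;
    END
    GO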
Hope this will definitely help you.
I am using github for maintaining versions and code synchronization.
We are a team of two and we are located in different places.
How can we make sure that our databases are synchronized?
Update:
I am a Rails developer, but these days I am working on Drupal projects (where the database is the center of variation). So I want to make sure that the team has a synchronized database - including the values in the various tables.
I need something which keeps our data values synchronized.
A centralized database is a good solution, but things get disturbed when someone works offline.
If you use Visual Studio then you can script your database tables, views, stored procedures and functions as .sql files from a database solution and then check those into version control as well - it's what I currently do at my workplace.
If you don't use Visual Studio then you can still script your SQL as .sql files (but with more work) and then version control them as necessary.
Have a look at Red Gate SQL Source Control - http://www.red-gate.com/products/SQL_Source_Control/
To be honest I've never used it, but their other software is fantastic. And if all you want to do is keep the DB schema in sync (rather than full source control) then I have used their SQL Compare product very successfully in the past.
(ps. I don't work for them!)
You can use Sql Source Control together with Sql Data Compare to source control both: schema and data. Here is an article from redgate: Source controlling data.
These are some of the possibilities.
Using the same database. Set up a central database where everybody can connect. This way you are sure everybody uses the same database all the time.
After every change, export the database and commit it to the VCS. This option requires discipline and manual labor.
Use some other kind of definition of the schema. For example, Doctrine for PHP has the ability to build the database from a YAML definition which can be stored in the VCS. This can be automated more easily than point 2.
Use some other software/script which updates the database.
I feel your pain. I had terrible trouble getting SQL Server to play nice with SVN. In the end I opted for a shared database solution. Every day I run an extensive script to backup all our schema definitions (specifically stored procedures) for version control into text files. Due to the limited number of changes this works well.
I now use this technique for our major project and personal projects too. The only negative is that it relies on being connected all the time. The other answers suggest that full database versioning is very time consuming and I tend to agree. For "live" upgrades we use the Red Gate tools, they do both schema and data compare and it works very well.
http://www.red-gate.com/products/SQL_Data_Compare/. We were using this tool for keeping databases in sync in our company. Later we had some specific demands, so we had to write our own code for synchronization. It depends on how complex your database is and how many changes are happening. It is much simpler if you have a time when no one is working and you can lock the database for synchronization.
Check out OffScale DataGrove.
This product tracks changes to the entire DB - schema and data. You can tag versions at any point in time and return to older states of the DB with a simple command. It also allows you to create virtual, separate copies of the same database so each team member can have his own separate DB. All the virtual copies are tracked in the same repository so it's super-easy to revert your DB to someone else's version (you simply check out their version, just like you do with your source control). This means all your DBs can always be synchronized.
Regarding a centralized DB - just like you don't want to work on the same source code, you don't want to be working on the same DB. It means you'll constantly break each other's code and builds each time someone changes something in the DB.
I suggest that you go with a separate DB for each developer, and sync them using DataGrove.
Disclaimer - I work at OffScale :-)
Try Wizardby. This is my personal project, but I've used it in several previous jobs with a great deal of success.
Basically, it's a tool which lets you specify all changes to your database schema in a database-independent manner and then apply these changes to all your databases.
I have read lots of posts about the importance of database version control. However, I could not find a simple solution for how to check whether the database is in the state it should be.
For example, I have a database with a table called "Version" (the version number is stored there). But the database can be accessed and edited by developers without changing the version number. If, for example, a developer updates a stored procedure and does not update Version, the database state is not in sync with the Version value.
How to track those changes? I do not need to track what is changed, but only need to check whether database tables, views, procedures, etc. are in sync with the database version that is saved in the Version table.
Why do I need this? When doing a deployment I need to check that the database is "correct". Also, not all tables or other database objects should be tracked. Is it possible to check without using triggers? Is it possible to be done without 3rd party tools? Do databases have checksums?
Let's say that we use SQL Server 2005.
Edited:
I think I should provide a bit more information about our current environment - we have a "baseline" with all the scripts needed to create the base version (including data objects and "metadata" for our app). However, there are many installations of this "base" version with some additional database objects (additional tables, views, procedures, etc.). When we make some change in the "base" version we also have to update some installations (not all) - at that time we have to check that the "base" is in the correct state.
Thanks
You seem to be breaking the first and second rule of "Three rules for database work". Using one database per developer and a single authoritative source for your schema would already help a lot. Then, I'm not sure that you have a Baseline for your database and, even more important, that you are using change scripts. Finally, you might find some other answers in Views, Stored Procedures and the Like and in Branching and Merging.
Actually, all these links are mentioned in this great article from Jeff Atwood: Get Your Database Under Version Control. A must read IMHO.
We use DBGhost to version control the database. The scripts to create the current database are stored in TFS (along with the source code) and then DBGhost is used to generate a delta script to upgrade an environment to the current version. DBGhost can also create delta scripts for any static/reference/code data.
It requires a mind shift from the traditional method but is a fantastic solution which I cannot recommend enough. Whilst it is a 3rd party product it fits seamlessly into our automated build and deployment process.
I'm using a simple VBScript file based on this codeproject article to generate drop/create scripts for all database objects. I then put these scripts under version control.
So to check whether a database is up-to-date or has changes which were not yet put into version control, I do this:
get the latest version of the drop/create scripts from version control (subversion in our case)
execute the SqlExtract script for the database to be checked, overwriting the scripts from version control
now I can check with my subversion client (TortoiseSVN) which files don't match with the version under version control
now either update the database or put the modified scripts under version control
You have to restrict access to all databases and only give developers access to a local database (where they develop) and to the dev server where they do integration. The best thing would be for them to only have access to their dev area locally and perform integration tasks with an automated build. You can use tools like Redgate's SQL Compare to do diffs on databases. I suggest that you keep all of your changes under source control (.sql files) so that you will have a running history of who did what when, and so that you can revert DB changes when needed.
I also like the devs to be able to run a local build script to reinitialize their local dev box. This way they can always roll back. More importantly, they can create integration tests that test the plumbing of their app (repository and data access) and the logic stashed away in stored procedures in an automated way. Initialization is run (resetting the DB), integration tests are run (creating fluff in the DB), then reinitialization puts the DB back in a clean state, etc.
If you are an SVN/nant style user (or similar) with a single branch concept in your repository then you can read my articles on this topic over at DotNetSlackers: http://dotnetslackers.com/articles/aspnet/Building-a-StackOverflow-inspired-Knowledge-Exchange-Build-automation-with-NAnt.aspx and http://dotnetslackers.com/articles/aspnet/Building-a-StackOverflow-inspired-Knowledge-Exchange-Continuous-integration-with-CruiseControl-NET.aspx.
If you are a perforce multi branch sort of build master then you will have to wait till I write something about that sort of automation and configuration management.
UPDATE
#Sazug: "Yep, we use some sort of multi branch builds when we use base script + additional scripts :) Any basic tips for that sort of automation without full article?" There are most commonly two forms of databases:
you control the db in a new non-production type environment (active dev only)
a production environment where you have live data accumulating as you develop
The first setup is much easier and can be fully automated from dev to prod, including rolling back prod if need be. For this you simply need a scripts folder where every modification to your database can be maintained in a .sql file. I don't suggest keeping a tablename.sql file and versioning it like you would a .cs file, where updates to that SQL artifact are made in the same file over time, given that SQL objects are so heavily dependent on each other: when you build up your database from scratch your scripts may encounter a breaking change. For this reason I suggest that you keep a separate, new file for each modification, with a sequence number at the front of the file name, for example something like 000024-ModifiedAccountsTable.sql. Then you can use a custom task, something out of NAntContrib, or a direct execution of one of the many ??SQL.exe command line tools to run all of your scripts against an empty database, from 000001-fileName.sql through to the last file in the updateScripts folder. All of these scripts are then checked in to your version control. And since you always start from a clean DB, you can always roll back if someone's new SQL breaks the build.
In the second environment, automation is not always the best route, given that you might impact production. If you are actively developing against/for a production environment then you really need a multi-branch/environment setup, so that you can test your automation well before you actually push against a prod environment. You can use the same concepts as stated above. However, you can't really start from scratch on a prod DB and rolling back is more difficult. For this reason I suggest using RedGate SQL Compare or similar in your build process. The .sql scripts are checked in for updating purposes, but you need to automate a diff between your staging DB and prod DB prior to running the updates. You can then attempt to sync changes and roll back prod if problems occur. Also, some form of backup should be taken prior to an automated push of SQL changes. Be careful when doing anything without a watchful human eye in production! If you do true continuous integration in all of your dev/qual/staging/performance environments and then have a few manual steps when pushing to production... that really isn't that bad!
First point: it's hard to keep things in order without "regulations".
Or for your example - developers changing anything without a notice will bring you to serious problems.
Anyhow - you say "without using triggers".
Any specific reason for this?
If not - check out DDL Triggers. Such triggers are the easiest way to check if something happened.
And you can even log WHAT was going on.
Hopefully someone has a better solution than this, but I do this using a couple methods:
Have a "trunk" database, which is the current development version. All work is done here as it is being prepared to be included in a release.
Every time a release is done:
The last release's "clean" database is copied to the new one, eg, "DB_1.0.4_clean"
SQL-Compare is used to copy the changes from trunk to the 1.0.4_clean - this also allows checking exactly what gets included.
SQL Compare is used again to find the differences between the previous and new releases (changes from DB_1.0.3_clean to DB_1.0.4_clean), which creates a change script "1.0.3 to 1.0.4.sql".
We are still building the tool to automate this part, but the goal is that there is a table to track every version the database has been at, and if the change script was applied. The upgrade tool looks for the latest entry, then applies each upgrade script one-by-one and finally the DB is at the latest version.
I don't have this problem, but it would be trivial to protect the _clean databases from modification by other team members. Additionally, because I use SQL Compare after the fact to generate the change scripts, there is no need for developers to keep track of them as they go.
We actually did this for a while, and it was a HUGE pain. It was easy to forget, and at the same time, there were changes being done that didn't necessarily make it - so the full upgrade script created using the individually-created change scripts would sometimes add a field, then remove it, all in one release. This can obviously be pretty painful if there are index changes, etc.
The nice thing about SQL Compare is that the script it generates is in a transaction - and if it fails, it rolls the whole thing back. So if the production DB has been modified in some way, the upgrade will fail, and then the deployment team can actually run SQL Compare on the production DB against the _clean DB and manually fix the changes. We've only had to do this once or twice (damn customers).
The .SQL change scripts (generated by SQL Compare) get stored in our version control system (subversion).
If you have Visual Studio (specifically the Database edition), there is a Database Project that you can create and point at a SQL Server database. The project will load the schema and basically offer you a lot of other features. It behaves just like a code project. It also offers you the ability to script the entire table and contents so you can keep it under Subversion.
When you build the project, it validates that the database has integrity. It's quite smart.
On one of our projects we stored the database version inside the database.
Each change to the database structure was scripted into a separate SQL file which incremented the database version besides all its other changes. This was done by the developer who changed the DB structure.
The deployment script checked the current DB version against the latest change script and applied the SQL scripts if necessary.
Firstly, your production database should either not be accessible to developers, or the developers (and everyone else) should be under strict instructions that no changes of any kind are made to production systems outside of a change-control system.
Change-control is vital in any system that you expect to work (Where there is >1 engineer involved in the entire system).
Each developer should have their own test system; if they want to make changes to that, they can, but system testing should be done on a more controlled system test system which has the same changes applied as production - if you don't do this, you can't rely on releases working, because they're being tested in an incompatible environment.
When a change is made, the appropriate scripts should be created and tested to ensure that they apply cleanly on top of the current version, and that the rollback works*
*you are writing rollback scripts, right?
I agree with other posts that developers should not have permissions to change the production database. Either the developers should be sharing a common development database (and risk treading on each others' toes) or they should have their own individual databases. In the former case you can use a tool like SQL Compare to deploy to production. In the latter case, you need to periodically sync up the developer databases during the development lifecycle before promoting to production.
Here at Red Gate we are shortly going to release a new tool, SQL Source Control, designed to make this process a lot easier. We will integrate into SSMS and enable the adding and retrieving objects to and from source control at the click of a button. If you're interested in finding out more or signing up to our Early Access Program, please visit this page:
http://www.red-gate.com/Products/SQL_Source_Control/index.htm
I have to agree with the rest of the posts. Database access restrictions would solve the issue on production. Then using a versioning tool like DBGhost or DVC would help you and the rest of the team to maintain the database versioning.
As you develop an application, database changes inevitably pop up. The trick, I find, is keeping your database build in step with your code. In the past I have added a build step that executed SQL scripts against the target database, but that is dangerous in so much as you could inadvertently add bogus data or worse.
My question is what are the tips and tricks to keep the database in step with the code? What about when you roll back the code? Branching?
Version numbers embedded in the database are helpful. You have two choices: embedding values into a table (allows versioning multiple items) that can be queried, or having an explicitly named object (such as a table or some such) you can test for.
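Both checks are cheap in T-SQL; a sketch, with all table and object names hypothetical:

    -- Choice 1: version values in a table (supports versioning multiple items).
    CREATE TABLE dbo.SchemaVersion (
        Item      varchar(50) NOT NULL,
        Version   varchar(20) NOT NULL,
        AppliedOn datetime    NOT NULL DEFAULT GETDATE()
    );

    SELECT TOP 1 Version
    FROM dbo.SchemaVersion
    WHERE Item = 'core'
    ORDER BY AppliedOn DESC;

    -- Choice 2: test for an explicitly named marker object.
    IF OBJECT_ID('dbo.Marker_v1_0_4', 'U') IS NOT NULL
        PRINT 'Schema is at least at version 1.0.4';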
When you release to production, do you have a rollback plan in the event of unexpected catastrophe? If you do, is it the application of a schema rollback script? Use your rollback script to rollback the database to a previous code version.
You should be able to create your database from scratch into a known state.
While being able to do so is helpful (especially in the early stages of a new project), many (most?) databases will quickly become far too large for that to be possible. Also, if you have any BLOBs then you're going to have problems generating SQL scripts for your entire database.
I've definitely been interested in some sort of DB versioning system, but I haven't found anything yet. So, instead of a solution, you'll get my vote. :-P
You really do want to be able to take a clean machine, get the latest version from source control, build in one step, and run all tests in one step. Making this fast makes you produce good software faster.
Just like external libraries, database configuration must also be in source control.
Note that I'm not saying that all your live database content should be in the same source control, just enough to get to a clean state. (Do back up your database content, though!)
Define your schema objects and your reference data in version-controlled text files. For example, you can define the schema in Torque format, and the data in DBUnit format (both use XML). You can then use tools (we wrote our own) to generate the DDL and DML that take you from one version of your app to another. Our tool can take as input either (a) the previous version's schema & data XML files or (b) an existing database, so you are always able to get a database of any state into the correct state.
I like the way that Django does it. You build models, and when you run syncdb it applies the models that you have created. If you add a model you just need to run syncdb again. This would be easy to have your build script do every time you make a push.
The problem comes when you need to alter a table that is already made. I do not think that syncdb handles that. That would require you to go in and manually add the table and also add a property to the model. You would probably want to version that alter statement. The models would always be under version control though, so if you needed to you could get a db schema up and running on a new box without running the sql scripts. Another problem with this is keeping track of static data that you always want in the db.
Rails migration scripts are pretty nice too.
A DB versioning system would be great, but I don't really know of such a thing.
While being able to do so is helpful (especially in the early stages of a new project), many (most?) databases will quickly become far too large for that to be possible. Also, if you have any BLOBs then you're going to have problems generating SQL scripts for your entire database.
Backups and compression can help you there. Sorry - there's no excuse not to be able to get a good set of data to develop against. Even if it's just a subset.
Put your database development under version control. I recommend having a look at neXtep Designer:
http://www.nextep-softwares.com/wiki
It is a free GPL product which offers a brand new approach to database development and deployment by connecting version information with a SQL generation engine that can automatically compute any upgrade script you need to upgrade any version of your database into another. Any existing database can be version controlled by a reverse synchronization.
It currently supports Oracle, MySQL and PostgreSQL. DB2 support is under development. It is a full-featured database development environment where you always work on version-controlled elements from a repository. You can publish your updates by simple synchronization during development, and you can generate exportable database deliveries which you will be able to execute on any targeted database through a standalone installer which validates the versions, performs structural checks and applies the upgrade scripts.
The IDE also offers you SQL editors, dependency management, support for modular database model components, data model diagrams, SQL clients and much more.
All the documentation and concepts can be found in the wiki.