How to keep history of SQL Server stored procedure revisions - sql-server

Note: I am not asking about full version control.
Is there any way to automatically keep a history of stored procedure revisions in SQL Server?
Similar to how Google Docs automatically keeps a history of versions of documents and Wikipedia automatically keeps a history of versions of articles.
I don't want users who update stored procedures to also have to maintain a repository of them; that's too much work, and people won't do it.
Hopefully this is something I can turn on in SQL Server...
(And by stored procedures really I mean functions, triggers, etc. Basically everything under Programmability.)

You could run RedGate SQL Compare every hour to write all definitions to disk. If the same job commits that directory to source control, you get an hourly history of the database.
You can also use RedGate SQL Source Control, but that requires everyone to commit manually.
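The hourly scripting step itself doesn't strictly need a third-party tool: the definitions of procedures, functions, triggers, and views are all exposed through the catalog views. A minimal sketch of the query such a job could be built around (the file-writing and commit steps are left to the job itself):

```sql
-- Pull every module definition in one query; a scheduled job could
-- write each row out to its own .sql file before committing the folder.
SELECT
    s.name AS schema_name,
    o.name AS object_name,
    o.type_desc,
    m.definition
FROM sys.sql_modules AS m
JOIN sys.objects AS o ON o.object_id = m.object_id
JOIN sys.schemas AS s ON s.schema_id = o.schema_id
ORDER BY s.name, o.name;
```

Note that tables don't appear in sys.sql_modules; they have to be scripted separately.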

It looks like this might help: SQL Server stored procedure restores to previous one
But surely this is built in or can be implemented via some plugin that already exists...

Related

How to synchronize stored procedure files & codes within a team

I have been researching this matter for a few hours and am still at it. I keep finding lots of information and sources that aren't relevant to what I need, as I'm not sure what to search for on Google.
In my company, we use SVN to update and commit our source code, so each developer on the team working on the same project can get the latest code from the others. This practice has worked fine without any issues so far.
The only problem that concerns me is how to synchronize stored procedure files in a similar way.
In the past, we faced issues like forgetting to get the latest stored procedures from others; we didn't even know who changed which stored procedure, and we deployed files to the client without other people's latest changes. So our only workable manual method was to send our latest stored procedure files to everyone and make sure they remembered to update them, which is impractical and unsafe, because people do forget sometimes.
We thought about SVN, but it doesn't really work for us because we don't store our stored procedures in the IDE, and keeping them there isn't a good fit, since a stored procedure isn't really a source code file.
I would highly appreciate it if someone could suggest a good practice for sharing these types of files across people working on the same project.
For reference, our IDE is Visual Studio and we code in C#/.NET.
My team is similar to yours. We manage all stored procedure and table schema SQL files in a folder that is also kept in TFS/SVN. Whenever a developer wants to change a stored procedure, he gets the latest source file and edits it in SQL Server Management Studio, then commits the stored procedure file along with his code changes. The same applies when he wants to add or delete stored procedure files.
After each deployment, if a developer finds a database error blocking his ongoing work, he finds the stored procedure files in source control and runs the most recently modified SQL files, which should resolve the blocking issue.
We're storing all procedures as .sql files in the same folder as the sources that use them. The procedures are installed automatically into the correct database with each build, and a version/build number is added to the end of each procedure's name so that different versions of the same application can be used with the same database (for testing environments).
Having your procedures in version control helps a lot when you have to track down what has been changed or who changed it and of course to get the correct versions installed at the correct time.
In addition to procedures, we also store other objects like views, functions, triggers, constraints, etc. in version control. You could store tables too, but they need special handling because tables can't simply be re-created; ALTER TABLE statements have to be executed instead.
We're not using SVN, but I would assume the same basic idea would work with it too.

Version Control / Code repository for SqlServer stored procedures and views

Any recommendations for a database equivalent of SVN or Git, for checking stored procedures and views out and in and providing version control?
I am interested in open source / free solutions, but if you have a commercial solution, preferably low cost, please let me know also.
I have looked at answers here that talk about adding an entire database backup to a code repository, or comparing records, but that is not what I am talking about here.
I would like to check out a stored procedure, and check it back in knowing that no one else has touched it in the meanwhile.
I would like to see what changes have been made to the stored procedures and views since the last time I worked on the data access layer (even if there is no impact on my code).
Our company uses Visual Studio Database Edition to manage our database schema (schema, not data). At this point, we would be lost without it. Our entire database schema is managed by Microsoft TFS, and is our "source of truth" with regards to what our schema looks like. It does much more than source control as well, including database validation, test data generation, refactoring etc.
Great tool.
We use liquibase which you can find here:
http://www.liquibase.org/
It is open and extensible. Published under the Apache 2.0 license.
and here is a tutorial on managing database schema changes with liquibase:
http://bytefilia.com/managing-database-schema-changes-liquibase-existing-schema/
You should make the repository in source control the only source of truth, with a periodic refresh of the database procedures from source control (continuous integration). The only user with write/create permissions for the stored procedures should be the one the CI service runs under. This is the only way you can ensure no one is adding or changing objects in the database rather than in the source of truth (source control).
You can set up liquibase with SVN or GIT, and also any JDBC db, so SQL Server or most others as well.
Maybe not entirely what you had in mind - but it's worth reading this article:
http://www.codeproject.com/KB/architecture/Database_CI.aspx
or googling "continuous database integration".
Broadly speaking, as a schema can be defined in text, any source code repository can be used to store it. What you probably really need is to know which version you're currently looking at, whether it's older or newer than the one in production, and what changes have happened when. The TFS solution gives you a lot of that, but you can also roll your own - though you'll probably need to script the way database changes are managed in your various environments.

Database source control vs. schema change scripts

Building and maintaining a database that is then deployed/developed further by many devs is something that goes on in software development all the time. We create a build script, and maintain further update scripts that get applied as the database grows over time. There are many ways to manage this, from manual updates to console apps/build scripts that help automate these processes.
Has anyone who has built/managed these processes moved over to a Source Control solution for database schema management? If so, what have they found the best solution to be? Are there any pitfalls that should be avoided?
Red Gate seems to be a big player in the MSSQL world and their DB source control looks very interesting:
http://www.red-gate.com/products/solutions_for_sql/database_version_control.htm
Although it does not look like it replaces the (default) data management process, so it only covers half the change management process from my point of view.
(when I'm talking about data, I mean lookup values and that sort of thing, data that needs to be deployed by default or in a DR scenario)
We work in a .Net/MSSQL environment, but I'm sure the premise is the same across all languages.
Similar Questions
One or more of these existing questions might be helpful:
The best way to manage database changes
MySQL database change tracking
SQL Server database change workflow best practices
Verify database changes (version-control)
Transferring changes from a dev DB to a production DB
tracking changes made in database structure
Or a search for Database Change
I look after a data warehouse developed in-house by the bank where I work. This requires constant updating, and we have a team of 2-4 devs working on it.
We are fortunate because there is only the one instance of our "product", so we do not have to cater for deploying to multiple instances which may be at different versions.
We keep a creation script file for each object (table, view, index, stored procedure, trigger) in the database.
We avoid the use of ALTER TABLE whenever possible, preferring to rename a table, create the new one and migrate the data over. This means that we don't have to look through a history of ALTER scripts - we can always see the up to date version of every table by looking at its create script. The migration is performed by a separate migration script - this can be partly auto-generated.
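The rename/recreate/migrate pattern can be sketched roughly like this (table and column names are purely illustrative):

```sql
-- 1. Move the old table out of the way.
EXEC sp_rename 'dbo.Customer', 'Customer_old';

-- 2. Create the new version from its (up-to-date) create script.
CREATE TABLE dbo.Customer
(
    CustomerId int           NOT NULL PRIMARY KEY,
    Name       nvarchar(100) NOT NULL,
    CreatedAt  datetime      NOT NULL DEFAULT GETDATE()  -- new column
);

-- 3. Migrate the data, then drop the old table.
INSERT INTO dbo.Customer (CustomerId, Name, CreatedAt)
SELECT CustomerId, Name, GETDATE()
FROM dbo.Customer_old;

DROP TABLE dbo.Customer_old;
```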
Each time we do a release, we have a script which runs the create scripts / migration scripts in the appropriate order.
FYI: We use Visual SourceSafe (yuck!) for source code control.
I've been looking for a SQL Server source control tool - and came across a lot of premium versions that do the job - using SQL Server Management Studio as a plugin.
LiquiBase is a free one, but I never quite got it working for my needs.
There is another free product out there, though, that works standalone from SSMS and scripts out objects and data to flat files.
These objects can then be pumped into a new SQL Server instance which will then re-create the database objects.
See gitSQL
Maybe you're asking for LiquiBase?

How do you track database changes in source control?

We use SQL Server 2000/2005 and Vault or SVN on most of our projects. I haven't found a decent solution for capturing database schema/proc changes in either source control system.
Our current solution is quite cumbersome and difficult to enforce (script out each object you change and commit it to source control).
We have a lot of ideas of how to tackle this problem with some custom development, but I'd rather install an existing tool (paid tools are fine).
So: how do you track your database code changes? Do you have any recommended tools?
Edit:
Thanks for all the suggestions. Due to time constraints, I'd rather not roll my own here. And most of the suggestions have the flaw that they require the dev to follow some procedure.
Instead, an ideal solution would monitor the SQL database for changes and commit any detected changes to SCM. For example, if SQL Server had an add-on that could record any DDL change along with the user that made it, then commit the script of that object to SCM, I'd be thrilled.
We talked internally about two systems:
1. In SQL 2005, use object permissions to restrict anyone from altering an object until they do a "checkout". Then the check-in procedure would script it into the SCM.
2. Run a scheduled job to detect any changes and commit them (anonymously) to SCM.
It'd be nice if I could skip the user-action part and have the system handle all this automatically.
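For what it's worth, the first idea can be sketched with ordinary object permissions; the role, user, and procedure names below are hypothetical:

```sql
-- Lock objects by default: the developer role cannot alter them.
DENY ALTER ON OBJECT::dbo.usp_GetOrders TO [Developers];

-- "Checkout": lift the deny and grant ALTER to one developer.
-- (An explicit DENY on the role would trump a GRANT to a member,
-- so the deny itself has to be removed first.)
REVOKE ALTER ON OBJECT::dbo.usp_GetOrders FROM [Developers];
GRANT ALTER ON OBJECT::dbo.usp_GetOrders TO [jsmith];
```

The check-in step would then script the object into the SCM and re-apply the DENY.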
Use Visual Studio Database Edition to script out your database. It works like a charm, and you can use any source control system (best if it has VS plugins, of course). The tool also has a number of other useful features; check them out in this great blog post:
http://www.vitalygorn.com/blog/post/2008/01/Handling-Database-easily-with-Visual-Studio-2008.aspx
or check out MSDN for the official documentation
Tracking database changes directly from SSMS is possible using various 3rd party tools. ApexSQL Source Control automatically scripts any database object that is included in versioning. Commits cannot be automatically performed by the tool. Instead, the user needs to choose which changes will be committed.
When getting changes from a repository, ApexSQL Source Control is aware of SQL database referential integrity. It will create a synchronization script that includes all dependent objects, wrapped in a transaction, so that either all changes are applied if no error is encountered, or none of the selected changes is applied. In either case, database integrity remains unaffected.
I have to say I think a Visual Studio database project is also a reasonable solution to the source control dilemma. If it's set up correctly, you can run the scripts against the database from the IDE: if your script is old, get the latest and run it against the DB. If you need it, also keep a script that recreates all the objects; new objects must be added to this script by hand, but only once.
I like every table, proc, and function to be in its own file.
One poor man's solution would be to add a pre-commit hook script that dumps out the latest db schema into a file and have that file committed to your SVN repository along with your code. Then, you can diff the db schema files from any revision.
I just commit the SQL ALTER statements in addition to the complete SQL create-database statement.
Rolling your own from scratch would not be very doable, but if you use a SQL comparison tool like the Redgate SQL Compare SDK to generate your change files for you, it would not take very long to half-roll what you want and then just check those files into source control. I rolled something similar for myself to push changes from our development systems to our live systems in just a few hours.
In our environment, we never change the DB manually: all changes are made by scripts at release time, and the scripts are kept in the version control system. One important part of this procedure is to make sure that all scripts can be run again against the same DB without loss of data (i.e., the scripts are idempotent). For example, if you add a column, make sure the script does nothing if the column is already there.
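As an illustration of that kind of idempotent guard (table and column names are made up):

```sql
-- Safe to run any number of times: the column is only added once.
IF COL_LENGTH('dbo.Customer', 'Email') IS NULL
BEGIN
    ALTER TABLE dbo.Customer ADD Email nvarchar(256) NULL;
END
```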
Your comment about "suggestions have the flaw that they require the dev to follow some procedure" is really a tell-tale. It's not a flaw, it's a feature. Version control helps developers in following procedures and makes the procedures less painful. If you don't want to follow procedures, you don't need version control.
In SQL 2000, generate each object into its own file, then check them all into your source control and let source control handle the change history.
In SQL 2005, you'll need to write a bit of code to generate all objects into separate files.
In one project, I arranged by careful attention in the design that all the important data in the database can be automatically recreated from external sources. At startup, the application creates the database if it is missing and populates it from external data sources, using a schema kept in the application source code (and hence versioned with the application). The database store name (a SQLite filename, although most database managers allow multiple databases) includes a schema version, and we increase the schema version whenever we commit a schema change. This means that when we restart the application as a new version with a different schema, a new database store is automatically created and populated. Should we have to revert a deployment to an old schema, the new run of the old version will use the old database store, so we get fast downgrades in the event of trouble.
Essentially, the database acts like a traditional application heap, with the advantages of persistence, transaction safety, static typing (handy since we use Python) and uniqueness constraints. However, we don't worry at all about deleting the database and starting over, and people know that if they try some manual hack in the database then it will get reverted on the next deployment, much like hacks of a process state will get reverted on the next restart.
We don't need any migration scripts since we just switch database filename and restart the application and it rebuilds itself. It helps that the application instances are sharded to use one database per client. It also reduces the need for database backups.
This approach won't work if rebuilding your database from the external sources takes longer than you are willing to allow the application to remain down.
If you are using .Net and like the approach Rails takes with Migrations, then I would recommend Migrator.Net.
I found a nice tutorial that walks through setting it up in Visual Studio. He also provides a sample project to reference.
We developed a custom tool that updates our databases. The database schema is stored in a database-neutral XML file which is then read and processed by the tool. The schema gets stored in SVN, and we add appropriate commentary to show what was changed. It works pretty well for us.
While this kind of solution is definitely overkill for most projects, it certainly makes life easier at times.
Our DBAs periodically check prod against what is in SVN and delete any objects not under source control. It only takes once before the developers never forget to put something in source control again.
We also do not allow anyone to move objects to prod without a script; since our devs do not have prod rights, this is easy to enforce.
Tracking every change (inserts, updates, and deletes) would add a lot of overhead for SVN.
It is better to track only the DDL changes (ALTER, DROP, CREATE) that modify the schema.
You can do this schema tracking easily by creating a table and a trigger that inserts data into that table.
Any time you want, you can get the change status by querying that table.
There are lots of examples here and here.
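A minimal version of that table-plus-trigger approach might look like the following (SQL Server 2005 and later, since it relies on DDL triggers; all names are illustrative, and the event list can be extended to whatever DDL you want to capture):

```sql
CREATE TABLE dbo.SchemaChangeLog
(
    LogId      int IDENTITY(1,1) PRIMARY KEY,
    EventTime  datetime      NOT NULL DEFAULT GETDATE(),
    LoginName  nvarchar(128) NOT NULL,
    EventType  nvarchar(100) NOT NULL,
    ObjectName nvarchar(256) NULL,
    TsqlText   nvarchar(max) NULL
);
GO

CREATE TRIGGER trg_LogSchemaChanges
ON DATABASE
FOR CREATE_PROCEDURE, ALTER_PROCEDURE, DROP_PROCEDURE,
    CREATE_FUNCTION,  ALTER_FUNCTION,  DROP_FUNCTION,
    CREATE_TRIGGER,   ALTER_TRIGGER,   DROP_TRIGGER,
    CREATE_TABLE,     ALTER_TABLE,     DROP_TABLE
AS
BEGIN
    DECLARE @e xml;
    SET @e = EVENTDATA();  -- XML description of the DDL event

    INSERT INTO dbo.SchemaChangeLog (LoginName, EventType, ObjectName, TsqlText)
    VALUES
    (
        @e.value('(/EVENT_INSTANCE/LoginName)[1]',  'nvarchar(128)'),
        @e.value('(/EVENT_INSTANCE/EventType)[1]',  'nvarchar(100)'),
        @e.value('(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(256)'),
        @e.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)')
    );
END;
```

Querying dbo.SchemaChangeLog then gives you the who/what/when of every captured schema change.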

Automating DB Object Migrations from Source Control

I'm looking for some "Best Practices" for automating the deployment of Stored Procedures/Views/Functions/Table changes from source control. I'm using StarTeam & ANT so the labeling is taken care of; what I am looking for is how some of you have approached automating the pull of these objects from source - not necessarily StarTeam.
I'd like to end up with one script that can then be executed, checked in, and labeled.
I'm NOT asking for anyone to write that - just some ideas or approaches that have (or haven't) worked in the past.
I'm trying to clean up a mess and want to make sure I get this as close to "right" as I can.
We are storing the tables/views/functions etc. in individual files in StarTeam and our DB is SQL 2K5.
We use SQL Compare from redgate (http://www.red-gate.com/).
We have a production database, a development database and each developer has their own database.
The development database is synchronised with the changes a developer has made to their database when they check in their changes.
The developer also checks in a synchronisation script and a comparison report generated by SQL Compare.
When we deploy our application we simply synchronise the development database with the production database using SQL Compare.
This works for us because our application is for in-house use only. If this isn't your scenario then I would look at SQL Packager (also from redgate).
I prefer to separate views, procedures, and triggers (objects that can be re-created at will) from tables. For views, procedures, and triggers, just write a job that will check them out and re-create the latest.
For tables, I prefer to have a database version table with one row. Use that table to determine which new updates have not been applied. Then each update is applied and the version number is updated. If an update fails, you have only that update to check, and you can re-run knowing that the earlier updates will not happen again.
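That single-row version table can be sketched like this (the table names and the sample change are illustrative):

```sql
-- One-time setup: a single-row table holding the current schema version.
CREATE TABLE dbo.DatabaseVersion (Version int NOT NULL);
INSERT INTO dbo.DatabaseVersion (Version) VALUES (0);
GO

-- Each update script guards on the version it expects, then bumps it.
IF (SELECT Version FROM dbo.DatabaseVersion) = 0
BEGIN
    ALTER TABLE dbo.Customer ADD Phone nvarchar(32) NULL;  -- update #1
    UPDATE dbo.DatabaseVersion SET Version = 1;
END
```

Because each script is a no-op once its version check fails, the whole sequence can be re-run safely from any starting point.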
