We're developing an ASP.NET (.aspx) project with Visual Studio 2010 Professional, SQL Server 2008 R2 and Team Foundation Server 2010. Since development is carried out in multiple offices, each developer has their own local instances of the databases.
I want to bring these multiple databases under source control (or at least the database schemas - structure and stored procedures; the data doesn't matter to me). My preferred approach is to add database projects to the VS solution, which is already source-controlled in TFS. Any changes will then be distributed by TFS and can be deployed locally.
The problem I'm having is that the database projects contain a reference to a local database instance (server & name). When someone gets the latest version of my changes, they will have a reference to my local DB instance (which is different to their local DB instance). They would need to change the DB details (thus checking the dbproj out) in order to get my updates.
So, is there any way that the database server & name can be left out of source control while the schemas remain under source control? Any help would be much appreciated!
I'm not sure you can. However, you could use an alias, so that all of the developers use a database on their own local machine but reference it by the same alias.
Take a look at: http://www.mssqltips.com/sqlservertip/1620/how-to-setup-and-use-a-sql-server-alias/ for how to set an alias up.
That way you can separate the database from the connection details.
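For illustration, roughly what that looks like (the alias name DevSQL and database name AppDb are placeholders I've made up): each developer creates the same alias on their own machine, pointing at their own instance, and the shared connection details then reference only the alias.

    Alias (per machine, set in SQL Server Configuration Manager under Aliases):
        DevSQL  ->  MYMACHINE\SQLEXPRESS

    Connection details (identical for every developer):
        Data Source=DevSQL;Initial Catalog=AppDb;Integrated Security=SSPI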
I'm involved in developing a unique enforced database source control solution called DBmaestro TeamWork.
It has a plugin for SSMS which allows developers to work directly on the database objects (in their own working environment), run their tests, and then perform a check-in, which reads the metadata (table structures, procedures, functions, views, etc.) into the version control repository.
With the Impact Analysis it is easy to merge changes from different databases to a single database.
The impact analysis algorithm performs a 3-way analysis (not just a simple compare & sync) to identify changes originating from developer A which should not be reverted when another developer merges their changes, and it ignores the database name when running the impact analysis or generating the delta script.
For example, table T has 3 columns: id, name, city. Now I add a new column, country, in the testing environment. I want to sync this change (the database structure or schema) instead of executing the script again in the staging and production environments.
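For concreteness, the change being described is something like the following T-SQL (the NVARCHAR(100) type is my assumption; the question doesn't specify one):

    -- Change made in the testing environment that now needs to reach
    -- staging and production without being hand-run in each one.
    ALTER TABLE dbo.T ADD country NVARCHAR(100) NULL;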
This probably isn't the pattern I would go with. Linking your environments together like that is bound to cause issues.
In the past I have used SQL Server database projects in Visual Studio. These projects allow you to define the structure of your database in SQL syntax, with a nice GUI on top.
Microsoft article showing how to create one
This means you can put the project into a git repo and then create a deployment pipeline to promote the changes through the environment stages.
The compiled output of the project is called a dacpac; the tooling compares your target DB with the structure in the dacpac and makes it match. You can control how it does that through configuration, e.g. don't drop tables, etc.
Microsoft article showing deployment steps
Overall this will give you a more refined control over the database schema.
I work in a team of developers and we currently manage our SQL Server database schema (tables, stored procedures, user defined types, etc) through TFS and Visual Studio using a database project. We keep our local development copies of the database in sync using the Schema Compare tools in Visual Studio.
I'm currently setting up partitioning on a couple of huge tables of data, which has resulted in 500+ filegroups and files, based partly on which data we want on SSDs vs. spinning HDDs.
My question is: does anyone have a suggestion or experience on how to manage the database schema in TFS such that each developer doesn't have to set up the 500+ filegroups/files on each development machine?
The reasons I want to avoid this are:
On our development machines we will only have a small amount of data loaded, based on available disk space.
We plan to have a maintenance job on our production server to move data from SSD partitions to HDD partitions based on age. This means our production partitioning function won't match our development machines for very long anyway.
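For context, here is a heavily scaled-down sketch of the kind of objects involved; the names, boundary values and filegroups are invented, and a real setup would spread data across hundreds of filegroups:

    -- Partition function: decides which partition a row belongs to, by date.
    CREATE PARTITION FUNCTION pfByMonth (datetime)
        AS RANGE RIGHT FOR VALUES ('2012-01-01', '2012-02-01');

    -- Partition scheme: maps each partition to a filegroup. Newer data goes
    -- to SSD-backed filegroups, older data to HDD-backed ones.
    CREATE PARTITION SCHEME psByMonth
        AS PARTITION pfByMonth TO (FG_HDD_Archive, FG_HDD_2012_01, FG_SSD_2012_02);

The pain point is that CREATE PARTITION SCHEME hard-codes the filegroup names, which is exactly the part that differs between production and a developer machine.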
First, for your situation you can try to use a server workspace. When you need to modify files or projects, you just check them out.
When you use a server workspace, Visual Studio keeps only one copy of each file. This can significantly reduce disk space usage and improve performance when you have a lot of items. We recommend that you use a server workspace if:
Your workspace contains more than 100,000 items.
You want to use Visual Studio 2010 or earlier versions to work with the workspace.
You need to use the Enable get latest on check-out option.
I'm not sure how you handled the database project. To put an existing database under source control, the process consists of the following steps:
You create a database project.
You connect to an existing database.
You import the database schema from the existing database into the database project.
You review the results that are shown in the database project.
You put the database project and its contents under version control.
You can also use an SSDT database project for your data warehouse. The blog post linked below shows an example of how you could structure the project (it only shows a few tables and views in its screenshots, for brevity). You don't have to structure it this way, but in that project it's sorted first by schema, then by object type (table, view, etc.), then by object (table name and its DDL, etc.).
For more detail, please refer to this blog post: Why You Should Use a SSDT Database Project For Your Data Warehouse
We originally dismissed using database projects in conjunction with TFS as the solution for our deployment and source control needs. However, in the interest of thoroughness, I'm exploring and prototyping it.
I've set up my database project (with add to source control checked). I've checked in the changes. Now, where do you develop from?
I've tried ...
connecting to the remote development server to make changes
syncing schema to (localdb)\Projects and making changes there
directly in the Source Control Explorer
With options 1 and 2 I don't see an automated way to add code to source control. Am I supposed to be working in the Source Control Explorer? (This seems a little silly.) Is there a way to commit the entire solution to source control? My apologies in advance; I'm a database developer and this concept of a "solution" is very foreign to me.
Also, there was a lot of chatter about Visual Studio doing ugly things in the background that turned a lot of development shops off database projects. Can someone share their experiences with me? Some of the pitfalls and gotchas?
And yes, we have looked at Redgate SourceControl (very nice tool).
Generally people do one of two things:
Develop in Visual Studio, via the Solution Explorer. Just open the project like you would any other project, add tables, indexes, etc. You even get the same GUI for editing DB objects as you get in SSMS. All changes will automatically be added to TFS Pending changes (just like any other code change), and can be checked in when you're ready.
Deploy the latest DB (using Publish in VS) to any SQL Server, make your changes in SSMS, then do a Schema Compare in Visual Studio to bring your changes back into your DB project so they can be checked into TFS.
I've been using DB projects for many years and I LOVE them! Every developer I've introduced them to refuses to develop without them from that point on.
I'm going to explain briefly how we use DB projects with TFS.
We basically have one DB already built, and if we require any changes or new tables we create or alter them directly in SQL Server (each developer has their own dev SQL Server).
Then, in VS, we drag the tables we want from the SQL Server Object Explorer into the DB project, so that when we check in the changes, every user in TFS can get them and then publish the project, which generates and executes a script against the DB.
This is the way we develop when we need to add specific tables or records to the DB, so we don't have to send emails with scripts or keep them stored in a specific location (even with source control). This way we can get the latest version of the project and publish it to ensure we have the latest DB version, although it does require the user who made the changes to add them to the DB project.
The other way would be to make all the changes directly in the DB project (which works without any problem) and then publish it. That is the more correct way, since you make all the changes directly in a source-controlled project, but as you know, it's always more comfortable to work directly in SSMS.
Hope this helps somehow.
We use the SSDT tools and have implemented the SQL Server Database Project Type to develop our databases:
http://www.techrepublic.com/blog/data-center/auto-deploy-and-version-your-sql-server-database-with-ssdt/
The definitions of database objects and peripheral SQL code (e.g. functions, sprocs, triggers, etc.) sit within the Visual Studio project, and all changes are managed through VS. The interface is very similar to SSMS and, at this point, doesn't cause any issues.
The benefits of this approach for us are as follows:
An existing SQL database can be imported into the SQL Server Project and managed through Visual Studio.
SQL object definitions & code can be managed through the same version control system as the rest of the application code.
SQL Code can be checked for errors within Visual Studio in much the same way as you'd check your C# / VB for compilation / reference errors.
You can compare database schemas (within Visual Studio) between environments and easily identify key changes that you need to be aware of.
The SQL project can be compiled into a DACPAC file for automating deployment to different servers using a CI / Build Server (using the sqlpackage.exe utility without any custom scripts or code).
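As a rough illustration of that last point (the server, database and file names are placeholders), a build step can publish the compiled dacpac with a single command:

    SqlPackage.exe /Action:Publish ^
        /SourceFile:"MyDatabase.dacpac" ^
        /TargetServerName:"BUILD-SQL01" ^
        /TargetDatabaseName:"MyDatabase" ^
        /p:BlockOnPossibleDataLoss=True

BlockOnPossibleDataLoss is one example of the deployment options available: with it set, the deployment fails rather than, say, dropping a column that still contains data.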
In essence developers can have a local version of the database to work on but would manage any changes through VS, then publish the changes to their local database. Once the changes are complete, the changes are committed to your version control system and then built centrally & automatically through a CI / Build server to ensure that all changes integrate and play nicely in much the same way that your other code is.
Hope that helps :)
We're trying out VS2010 database projects for a new development, using the following dev cycle:
Use Management Studio to develop changes on a local DB instance (using the designers etc)
Use VS2010 schema compare to sync / import these changes to the VSDB project
Check in the VSDB project and run automated build / test etc
When I want to 'get latest' from source control, I then:
Update the VSDB project files from source control
Use Schema Compare to push the changes from the project to my local database instance
This is where it starts to break down... Because schema compare is trying to synchronise the two versions, it attempts to undo any changes I've made to my local database as part of my own feature development.
Obviously, you can tell schema compare to skip changes to the objects I've modified, but sadly this doesn't always work correctly: http://connect.microsoft.com/VisualStudio/feedback/details/564026/strange-schema-compare-behavior-sql-2008-database-projects.
Fundamentally, the problem exists because the definitions in the VSDB project are not automatically synchronised with my local database; thus I need to use Schema Compare to do a 'poor man's merge' every time I get a change.
One possible solution could be to:
Use Schema Compare to sync any changes from my local DB to the VSDB project first
Update the VSDB project from source control (therefore using the source control tooling to do the merge, rather than Schema Compare)
Schema Compare the changes from source control into my local DB instance
...which is far from ideal.
Is RedGate SQL Source Control better in this regard?
What about the new 'Juneau' SQL toolset?
You use 'Deploy' to push source changes to the database: either Deploy Solution from the top-level Build menu, or right-click the project in the Solution Explorer and select Deploy.
Deploy is configurable in the Project properties.
HTH
Your process is backwards which is why this is difficult. Changes should flow from VSDB to your database, not the other way around. Try this:
Use the designers in Management Studio if you like them, but script out any changes you make and add them into your VSDB project.
Instead of using Schema Compare, use the built-in Deployment functionality. This will automatically script and deploy the incremental changes to your local database in a single click.
Since you mention other possible solutions, I'll elaborate on how our shop manages data-structure changes and their propagation to dev DBs.
For tracking and applying differences, we've written a C# app that effectively abstracts database actions out to classes that we append to an Action list. The engine dynamically loads modules that represent database versions, and adds each item in the module to a list of actions to be performed for that version upgrade, then processes the list. Actions include DataRowInsertAction, TableCreateAction, ColumnModifyAction, etc.
One benefit of this approach was that we were able to commit standard .cs files to Subversion, and users can bring their own dev databases up to date simply by checking out the latest and running it. Another huge advantage is that we can target multiple database engines, since the Actions themselves know what SQL to render based on which database engine is being targeted.
As a side note, we use AdeptSQL to compare databases, and love it. It'll create a complete list of differences, and you can generate a script to go either direction (given Database 'A' and Database 'B', upgrade A to B, or downgrade B to A.)
For a small additional charge, they offer extended functionality to perform a data diff as well.
http://www.adeptsql.com/
The idea behind SQL Source Control is basically to turn the development process on its head: instead of working with database scripts and pushing the changes to a database, you make the changes to the database, and SQL Source Control calculates the deltas, updates the local scripts, and allows you to commit the changes to your source control system.
SQL Source Control currently only integrates with SQL Server Management Studio, but there is now a VS package called SQL Connect that you can use in VS 2010 to work in much the same way as in SQL Source Control. http://www.red-gate.com/products/sql-development/sql-connect/index-2
My main problem is: where does the database go?
The project will be on SVN and is developed using the ASP.NET MVC repository pattern. Where do I put the SQL Server database (MDF file)? If I put it in App_Data, then my other team mates can check out the source and database together and run it, with the database deployed in the VS instance.
The problems with this method are:
I cannot use SQL Management Studio with this database.
Most web hosts require me to deploy the database using their UI or SQL Management Studio. Putting it in App_Data makes no sense there.
Connection String has to be edited each time I'm moving from testing locally to testing on the web host.
If I create the database using SQL Management studio, my problems are:
How do I keep this consistent with source control? (Team mates would have to re-script the DB if the schema changes.)
Connection string again. (I'd like the correct string to be used automatically when on the production server.)
Is there a solution to all my problems above? Maybe some pattern or tool that I am missing?
Basically your two points are correct - unless you're working off a central database, everyone will have to update their own database when changes are made by someone else. If you're working off a central database, you can also get into issues where a database change is made (i.e. a column dropped) and the corresponding source code isn't checked in. Then you're all dead in the water until the source code is checked in, or the database is rolled back. Using a central database also means developers have no control over when database schema changes are pushed to them.
We have the database installed on each developer's machine (especially good since we target different DBs, each developer has one of the supported databases giving us really good cross platform testing as we go).
Then there is the central 'development' database, which the 'development' environment points to. It is built by continuous integration on each check-in, and upon a successful build/test it is published to development.
Changes that developers make to the database schema on their local machine need to be checked into source control. They are database upgrade scripts that make the required changes to the database from version X to version Y. The database is versioned. When a customer upgrades, these database scripts are run on their database to bring it up from their current version to the required version they're installing.
These dbpatch files are stored in the following structure:
./dbpatches
    ./23
        ./common
            ./CONV-2345.dbpatch
        ./pgsql
            ./CONV-2323.dbpatch
        ./oracle
            ./CONV-2323.dbpatch
        ./mssql
            ./CONV-2323.dbpatch
In the above tree, version 23 has one common dbpatch that is run on any database (is ANSI SQL), and a specific dbpatch for the three databases that require vendor specific SQL.
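To make that concrete, a common dbpatch might contain nothing but portable ANSI SQL; the table and column below are invented for illustration:

    -- ./23/common/CONV-2345.dbpatch (hypothetical contents)
    -- Plain ANSI SQL, so the same patch runs unchanged on PostgreSQL,
    -- Oracle and MSSQL.
    ALTER TABLE customer ADD middle_name VARCHAR(100);

A vendor-specific patch such as ./pgsql/CONV-2323.dbpatch would hold the PostgreSQL-only variant of whatever change couldn't be expressed portably.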
We have a database update script that developers can run which runs any dbpatch that hasn't been run on their development machine yet (irrespective of version - since multiple dbpatches may be committed to source control during a single version's development).
Connection strings are maintained in NHibernate.config; if an NHibernate.User.config is present, it is used instead, and NHibernate.User.config is excluded from source control. Each developer has their own NHibernate.User.config, which points to their local database and sets the appropriate dialects etc.
When being pushed to development we have a NAnt script which does variable substitution in the config templates for us. This same script is used when going to staging as well as when doing packages for release. The NAnt script populates a templates config file with variable values from the environment's settings file.
Use Management Studio or Visual Studio's Server Explorer. App_Data isn't used much "in the real world".
This is always a problem. Use a tool like SQL Compare from Redgate or the built-in Database Compare tools of Visual Studio 2010.
Use Web.Config transformations to automatically update the connection string.
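As a sketch of that last point (the connection string name "Default" and the server names are made up), a Web.Release.config transform swaps in the production connection string at publish time:

    <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
      <connectionStrings>
        <add name="Default"
             connectionString="Data Source=ProdServer;Initial Catalog=AppDb;Integrated Security=True"
             xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
      </connectionStrings>
    </configuration>

Web.config keeps the local development connection string; the transform is applied only when publishing with the Release configuration.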
I'm not an expert by any means but here's what my partner and I did for our most recent ASP.NET MVC project:
Connection strings were always the same, since we were both running SQL Server Express on our development machines, as were our staging and production servers. You can just use a dot instead of the computer name (e.g. ".\SQLEXPRESS" or ".\SQL_Named_Instance").
Alternatively you could also use web.config transformations for deploying to different machines.
As far as the database itself, we just created a "Database Updates" folder in the SVN repository and added new SQL scripts when updates needed to be made. I always thought it was a good idea to have an organized collection of database change scripts anyway.
A common solution to this type of problem is to handle database versioning in code rather than storing the database itself in version control. The code is typically executed on app_start, but could be triggered in other ways (build/deploy process). Then developers can run their own local databases or use a shared development database. The common term for this is database migrations (migrating from one version to the next). Here is a Stack Overflow question on .NET tools/libraries to make this easier: https://stackoverflow.com/questions/8033/database-migration-library-for-net
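Under the hood, most of those migration libraries boil down to a version table in the database plus code that applies any scripts newer than the recorded version. A minimal sketch of the SQL side (the table and column names here are mine, not taken from any particular library):

    -- Records which migrations have already been applied.
    CREATE TABLE SchemaVersion (
        Version   INT      NOT NULL PRIMARY KEY,
        AppliedOn DATETIME NOT NULL
    );

    -- On app_start the migration runner compares this against its list of
    -- scripts and executes, in order, the ones that haven't yet run.
    SELECT MAX(Version) FROM SchemaVersion;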
This is the only way I would handle this on projects with multiple developers. I've used this successfully with teams of over 50 developers and it's worked great.
The Red Gate solution would be to use SQL Source Control, which integrates into SSMS. It maintains a SQL scripts folder structure in source control, which you can keep in the same folder/repository as your app code.
http://www.red-gate.com/products/SQL_Source_Control/