Proper structure of an ASP.NET website and database in Visual Studio - sql-server

My main problem is where does database go?
The project will be on SVN and is developed using ASP.NET MVC with the repository pattern. Where do I put the SQL Server database (MDF file)? If I put it in App_Data, then my other team mates can check out the source and database together and run it with the database deployed in the Visual Studio instance.
The problems with this method are:
I cannot use SQL Management Studio with this database.
Most web hosts require me to deploy the database using their UI or SQL Management Studio, so putting it in App_Data makes no sense.
The connection string has to be edited every time I move from testing locally to testing on the web host.
If I create the database using SQL Management Studio instead, my problems are:
How do I keep it consistent with source control (team mates would have to re-script the DB whenever the schema changes)?
The connection string again (I'd like the right string to be used automatically when on the production server).
Is there a solution to all my problems above? Maybe some pattern or tool that I am missing?

Basically your two points are correct - unless you're working off a central database, everyone will have to update their database when changes are made by someone else. If you're working off a central database you can also get into issues where a database change is made (e.g. a column dropped) and the corresponding source code isn't checked in. Then you're all dead in the water until the source code is checked in, or the database is rolled back. Using a central database also means developers have no control over when database schema changes are pushed to them.
We have the database installed on each developer's machine (especially good since we target different DBs, each developer has one of the supported databases giving us really good cross platform testing as we go).
Then there is the central 'development' database which the 'development' environment points to. It is built by continuous integration on each check-in, and upon a successful build/test it publishes to development.
Changes that developers make to the database schema on their local machine need to be checked into source control. They are database upgrade scripts that make the required changes to the database from version X to version Y. The database is versioned. When a customer upgrades, these database scripts are run on their database to bring it up from their current version to the required version they're installing.
These dbpatch files are stored in the following structure:
./dbpatches
    ./23
        ./common
            ./CONV-2345.dbpatch
        ./pgsql
            ./CONV-2323.dbpatch
        ./oracle
            ./CONV-2323.dbpatch
        ./mssql
            ./CONV-2323.dbpatch
In the above tree, version 23 has one common dbpatch that is run on any database (it is ANSI SQL), and a specific dbpatch for each of the three databases that require vendor-specific SQL.
We have a database update script that developers can run which runs any dbpatch that hasn't been run on their development machine yet (irrespective of version - since multiple dbpatches may be committed to source control during a single version's development).
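As a sketch of one way that tracking could be implemented (the PatchLog table and the ALTER statement here are hypothetical, not from our actual tool):

    -- bookkeeping table recording which dbpatches have already been applied
    IF OBJECT_ID('dbo.PatchLog') IS NULL
        CREATE TABLE dbo.PatchLog (
            PatchName NVARCHAR(255) NOT NULL PRIMARY KEY,
            AppliedOn DATETIME NOT NULL DEFAULT GETDATE()
        );
    GO

    -- each dbpatch checks the log first, so the update script can safely re-run everything
    IF NOT EXISTS (SELECT 1 FROM dbo.PatchLog WHERE PatchName = 'CONV-2345')
    BEGIN
        ALTER TABLE dbo.Customers ADD MiddleName NVARCHAR(50) NULL; -- the actual change
        INSERT INTO dbo.PatchLog (PatchName) VALUES ('CONV-2345');
    END
    GO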
Connection strings are maintained in NHibernate.config; if NHibernate.User.config is present it is used instead, and NHibernate.User.config is excluded from source control. Each developer has their own NHibernate.User.config, which points to their local database and sets the appropriate dialect, etc.
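For illustration, a developer's NHibernate.User.config might look something like this (the server name, database name, and dialect are examples, not our actual settings):

    <?xml version="1.0" encoding="utf-8"?>
    <hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
      <session-factory>
        <!-- points at this developer's local database -->
        <property name="connection.connection_string">
          Server=.\SQLEXPRESS;Database=MyAppDev;Integrated Security=True;
        </property>
        <!-- dialect for whichever database this developer tests against -->
        <property name="dialect">NHibernate.Dialect.MsSql2008Dialect</property>
      </session-factory>
    </hibernate-configuration>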
When being pushed to development we have a NAnt script which does variable substitution in the config templates for us. The same script is used when going to staging, as well as when building packages for release. The NAnt script populates a template config file with variable values from the environment's settings file.

Use Management Studio or Visual Studio's Server Explorer. App_Data isn't used much "in the real world".
This is always a problem. Use a tool like SQL Compare from Red Gate or the built-in database comparison tools of Visual Studio 2010.
Use Web.Config transformations to automatically update the connection string.
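For example, a Web.Release.config transform that swaps in the production connection string might look like this (the name and connection string values are illustrative):

    <?xml version="1.0"?>
    <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
      <connectionStrings>
        <!-- replaces the attributes of the matching entry when publishing the Release configuration -->
        <add name="DefaultConnection"
             connectionString="Data Source=ProdServer;Initial Catalog=MyAppDb;Integrated Security=True"
             xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
      </connectionStrings>
    </configuration>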

I'm not an expert by any means but here's what my partner and I did for our most recent ASP.NET MVC project:
Connection strings were always the same since we were both running SQL Server Express on our development machines, as were our staging and production servers. You can just use a dot instead of the computer name (e.g. ".\SQLEXPRESS" or ".\SQL_Named_Instance").
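So a web.config entry along these lines works unchanged on every machine (the database name is illustrative):

    <connectionStrings>
      <!-- "." resolves to the local machine, so this works on any developer box -->
      <add name="DefaultConnection"
           connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=MyAppDb;Integrated Security=True"
           providerName="System.Data.SqlClient" />
    </connectionStrings>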
Alternatively you could also use web.config transformations for deploying to different machines.
As far as the database itself, we just created a "Database Updates" folder in the SVN repository and added new SQL scripts when updates needed to be made. I always thought it was a good idea to have an organized collection of database change scripts anyway.

A common solution to this type of problem is to have the database versioning handled in code rather than storing the database itself in version control. The code is typically executed on app_start, but could be triggered in other ways (build/deploy process). Then developers can run their own local databases or use a shared development database. The common term for this is database migrations (migrating from one version to the next). Here is a stackoverflow question for .net tools/libraries to make this easier: https://stackoverflow.com/questions/8033/database-migration-library-for-net
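As a rough illustration of the app_start variant (a hypothetical sketch, not any particular library: it assumes numbered scripts like 001_create_users.sql in App_Data/Migrations and an existing SchemaVersion table with a single Version column):

    // In Global.asax.cs; needs references to System.Configuration,
    // System.Data.SqlClient, System.IO and System.Linq.
    // Scripts must not contain GO batch separators.
    protected void Application_Start()
    {
        var connStr = ConfigurationManager.ConnectionStrings["Default"].ConnectionString;
        var scriptDir = Server.MapPath("~/App_Data/Migrations");
        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();
            // version already applied to this database (0 if the table is empty)
            var current = (int)new SqlCommand(
                "SELECT ISNULL(MAX(Version), 0) FROM SchemaVersion", conn).ExecuteScalar();
            foreach (var file in Directory.GetFiles(scriptDir, "*.sql").OrderBy(f => f))
            {
                var version = int.Parse(Path.GetFileNameWithoutExtension(file).Split('_')[0]);
                if (version <= current) continue; // already applied
                new SqlCommand(File.ReadAllText(file), conn).ExecuteNonQuery();
                new SqlCommand("INSERT INTO SchemaVersion (Version) VALUES (" + version + ")",
                    conn).ExecuteNonQuery();
            }
        }
    }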
This is the only way I would handle this on projects with multiple developers. I've used this successfully with teams of over 50 developers and it's worked great.

The Red Gate solution would be to use SQL Source Control, which integrates into SSMS. It maintains a SQL scripts folder structure in source control, which you can keep in the same folder/repository as your app code.
http://www.red-gate.com/products/SQL_Source_Control/

Related

Sharing database project in TFS

I created a database project as part of my solution with scripts for my tables. I'm using database first, so all I do is run the project to build/deploy my tables to the database.
I'm working with a few others so I checked the SQL project into TFS.
So the other people can get the solution, run the SQL project and generate the local database for themselves.
The problem is, it might be generated under a different localdb instance. For instance, on my home computer it was generated under (localdb)\Projects, but on my laptop under (localdb)\ProjectsV12.
This breaks the connection strings (which of course can be fixed). But this leaves me wondering, is there a better way to develop the SQL project collaboratively?
If you have the SQL code under source control then everyone can open that solution to edit/create copies of the database. Ideally you have an automated process but that does not work for local dev.
Since sharing code is a bad idea, and you have said that the database is used by more than one solution, I would consider packaging and distributing it.
If you create an SSDT database project you can compile your database into a package that can upgrade any instance. You can then share that .dacpac output easily.
You might even want to share it with NuGet so that each dependency is automatically updated.
You can set up a SQL alias to standardize the connection string across developer machines.
Alias .\SQLEXPRESS to (LocalDB)\MSSQLLocalDB

TFS and DATABASE PROJECTS (SQL Server)

We originally dismissed using database projects in conjunction with TFS as our solution for our deployment and source control needs. However, in the interest of thoroughness, I'm exploring and prototyping it.
I've set up my database project (with add to source control checked). I've checked in the changes. Now, where do you develop from?
I've tried ...
connecting to the remote development server to make changes
syncing schema to (localdb)\Projects and making changes there
directly in the Source Control Explorer
With options 1 and 2 I don't see an automated way to add code to source control. Am I supposed to be working in the Source Control Explorer? (This seems a little silly.) Is there a way to commit the entire solution to source control? My apologies in advance; I'm a database developer and this concept of a "solution" is very foreign to me.
Also, there was a lot of chatter about Visual Studio doing a lot of ugly things in the background that turned a lot of development shops off database projects. Can someone share your experiences with me? Some of the pitfalls and gotchas?
And yes, we have looked at Red Gate SQL Source Control (very nice tool).
Generally people do one of two things:
Develop in Visual Studio, via the Solution Explorer. Just open the project like you would any other project, add tables, indexes, etc. You even get the same GUI for editing DB objects as you get in SSMS. All changes will automatically be added to TFS Pending changes (just like any other code change), and can be checked in when you're ready.
Deploy the latest DB (using Publish in VS) to any SQL Server, make your changes in SSMS, then do a Schema Compare in Visual Studio to bring your changes back into your DB project so they can be checked into TFS.
I've been using DB projects for many years and I LOVE them! Every developer I've introduced them to, refuses to develop without them from that point on.
I'm going to explain briefly how we use DB projects with TFS.
We basically have one DB already done, and if we require any changes or new tables we create or alter them directly in SQL Server (each developer has their own dev SQL Server).
Then in VS, from the SQL Server Object Explorer, we drag the tables we want into the DB project, so when we check in the changes every user in TFS is able to get them and then publish the project, which will generate and execute a script against the DB.
This is the way we develop when we need to add specific tables or records to the DB, so we don't have to send emails with scripts or have them stored in a specific location (even with source control). This way we can get the latest version of the project and publish it to ensure we have the latest DB version, although it requires the user who made the changes to add them to the DB project.
The other way would be to make all the changes (which can be done without any problem) directly in the DB project and then publish it. That is the more correct way to do it, since you make all the changes directly in a source-controlled project, but as you know, it is always more comfortable to work directly through SSMS.
Hope this helps somehow.
We use the SSDT tools and have implemented the SQL Server Database Project Type to develop our databases:
http://www.techrepublic.com/blog/data-center/auto-deploy-and-version-your-sql-server-database-with-ssdt/
The definitions of database objects and peripheral SQL code (e.g. functions, sprocs, triggers, etc.) sit within the Visual Studio project, and all changes are managed through VS. The interface is very similar to SSMS and, at this point, doesn't cause any issues.
The benefits of this approach for us are as follows:
An existing SQL database can be imported into the SQL Server Project and managed through Visual Studio.
SQL object definitions & code can be managed through the same version control system as the rest of the application code.
SQL Code can be checked for errors within Visual Studio in much the same way as you'd check your C# / VB for compilation / reference errors.
You can compare database schemas (within Visual Studio) between environments and easily identify key changes that you need to be aware of.
The SQL project can be compiled into a DACPAC file for automating deployment to different servers using a CI / Build Server (using the sqlpackage.exe utility without any custom scripts or code).
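A publish step from a build script might look like this (the file, server, and database names are illustrative):

    sqlpackage.exe /Action:Publish ^
        /SourceFile:MyDatabase.dacpac ^
        /TargetServerName:BuildSqlServer ^
        /TargetDatabaseName:MyDatabase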
In essence developers can have a local version of the database to work on, but manage any changes through VS and publish them to their local database. Once the changes are complete, they are committed to your version control system and then built centrally & automatically through a CI / build server to ensure that all changes integrate and play nicely, in much the same way as your other code.
Hope that helps :)

Database development and deployment methods and tools

My team develops a web application using ASP.NET. The application is heavily database-based (we use SQL Server). Most features require database development, in addition to server- and client-side code. We use Git as our source code management system.
In order to check in (and later deploy) the database changes, we create SQL scripts and check them in. Our installer knows how to run them, and this is how we deploy these changes. Merging changes in raw scripts is very uncomfortable (for example, if two developers added a column to the same table).
So my question is what other method or tool can you suggest? I know Visual Studio has a database project which may be useful, I still haven't learned about it, I wonder if there are other options out there before I start learning about it.
Thanks!
I think you have to add Liquibase to your workflow and use it from the first steps of database development (check the Liquibase Quick Start, where the changelog starts from creating the initial structures).
From the developer's point of view, adding Liquibase means additional XML file(s) appearing in the source tree whenever the database schema has to be changed in some changeset.
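A minimal changelog sketch (the table, columns, and author here are only examples):

    <databaseChangeLog
        xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
            http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.1.xsd">
      <!-- each changeSet is applied once per database and tracked by Liquibase -->
      <changeSet id="1" author="jsmith">
        <createTable tableName="Customers">
          <column name="Id" type="int" autoIncrement="true">
            <constraints primaryKey="true" nullable="false"/>
          </column>
          <column name="Name" type="nvarchar(100)"/>
        </createTable>
      </changeSet>
    </databaseChangeLog>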
First, full disclosure: I work for Red Gate, who make this product...
You might be interested in taking a look at SQL Source Control. It's a plugin for SSMS that connects your development database to your existing version control system, git in your case.
When a developer makes a change to a dev database (either in a dedicated local copy, or in a shared dev database) then this change is detected and can then be committed to the repository. Other developers can pick up this change, and you can propagate it up to other environments.
When it comes to deployment you can then use the SQL Compare tool to deploy from a specific revision in your repository that you check out.
It's really handy in cases like your example with two developers making a change to the same table. Either the second developer can pick up the change from the version control system before they commit their own, or you can use the branching/merging features of git to track these in separate branches and deploy them as separate changes. There's scope to couple this into CI systems too.
Some specifics on running SQL Source Control with git:
http://datachomp.com/archives/git-and-red-gate-sql-source-control/
And a link to a more general set-up guide
http://www.troyhunt.com/2010/07/rocking-your-sql-source-control-world.html

Database name in Source control

We're developing an aspx project with Visual Studio 2010 Professional, SQL Server 2008 R2 and Team Foundation Server 2010. Since the development is being carried out in multiple offices, each developer has their own local instances of the databases.
I want to bring these multiple databases under source control (or at least the schemas of the DB, structure and stored procedures - data doesn't matter to me). My preferred approach is to add database projects to the VS solution, which is already source controlled in TFS. Any changes will be distributed by TFS, and can be deployed locally.
The problem I'm having is that the database projects contain a reference to a local database instance (server & name). When someone gets the latest version of my changes, they will have a reference to my local DB instance (which is different to their local DB instance). They would need to change the DB details (thus checking the dbproj out) in order to get my updates.
So, is there any way that the database server & name can be left out of source control while the schemas remain under source control? Any help would be much appreciated!
I'm not sure if you can. However, you could use an alias, so all of the developers use a database on their local machine, but referenced by the same alias.
Take a look at: http://www.mssqltips.com/sqlservertip/1620/how-to-setup-and-use-a-sql-server-alias/ for how to set an alias up.
That way you can separate the database from the connection details.
I'm involved in developing a unique enforced database source control solution called DBmaestro TeamWork.
It has a plugin for SSMS which allows the developer to work directly on the database objects (change their working environment) and run their tests, then perform a check-in which reads the metadata (tables' structure, procedures, functions, views, etc.) into the version control repository.
With the Impact Analysis it is easy to merge changes from different databases to a single database.
The impact analysis algorithm performs a 3-way analysis (not just a simple compare & sync) to identify changes originating from developer A which should not be reverted when developer B merges their changes, and it ignores the database name when running the impact analysis or generating the delta script.

Managing SQL Server database version control in large teams

For the last few years I was the only developer that handled the databases we created for our web projects. That meant that I got full control of version management. I can't keep up with doing all the database work anymore and I want to bring some other developers into the cycle.
We use Tortoise SVN and store all repositories on a dedicated server in-house. Some clients require us not to have their real data on our office servers so we only keep scripts that can generate the structure of their database along with scripts to create useful fake data. Other times our clients want us to have their most up to date information on our development machines.
So what workflow do larger development teams use to handle version management and sharing of databases? Most developers prefer to deploy the database to an instance of SQL Server on their development machine. Should we:
Keep the scripts for each database in SVN and make developers export new scripts if they make even minor changes
Detach databases after changes have been made and commit the MDF file to SVN
Put all development copies on a server on the in-house network and force developers to connect via remote desktop to make modifications
Some other option I haven't thought of
Never have an MDF file in the development source tree. MDFs are a result of deploying an application, not part of the application's sources. Thinking of the database file as development source is a shortcut to hell.
All the development deliverables should be scripts that deploy or upgrade the database. Any change, no matter how small, takes the form of a script. Some recommend using diff tools, but I think they are a rat hole. I champion versioning the database metadata and having scripts to upgrade from version N to version N+1. At deployment the application can check the currently deployed version, and then run all the upgrade scripts that bring the version up to current. There is no script that deploys the current version directly; a new deployment first deploys v0 of the database, then goes through all the version upgrades, including dropping objects that are no longer used. While this may sound a bit extreme, it is exactly how SQL Server itself keeps track of the various changes occurring in the database between releases.
As simple text scripts, all the database upgrade scripts are stored in version control just like any other sources, with tracking of changes, diff-ing and check-in reviews.
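A bare-bones illustration of the version-gating idea (the DatabaseVersion table and the ALTER statement are made up for the example):

    -- created by the v0 deployment; one row holds the current schema version
    IF OBJECT_ID('dbo.DatabaseVersion') IS NULL
    BEGIN
        CREATE TABLE dbo.DatabaseVersion (Version INT NOT NULL);
        INSERT INTO dbo.DatabaseVersion (Version) VALUES (0);
    END
    GO

    -- upgrade script from v1 to v2: runs only when the database is exactly at v1
    IF (SELECT Version FROM dbo.DatabaseVersion) = 1
    BEGIN
        ALTER TABLE dbo.Orders ADD ShippedOn DATETIME NULL; -- the actual schema change
        UPDATE dbo.DatabaseVersion SET Version = 2;
    END
    GO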
For a more detailed discussion and some examples, see Version Control and your Database.
Option (1). Each developer can have their own up-to-date local copy of the DB ('up to date' meaning recreated from the latest version-controlled scripts: base + incremental changes + base data + run data). In order to make this work you should have the ability to 'one-click' deploy any database locally.
You really cannot go wrong with a tool like Visual Studio Database Edition. This is a version of VS that manages database schemas and much more, including deployments (updates) to target server(s).
VSDE integrates with TFS so all your database schema is under TFS version control. This becomes the "source of truth" for your schema management.
Typically developers will work against a local development database, and keep its schema up to date by synchronizing it with the schema in the VSDE project. Then, when the developer is satisfied with his/her changes, they are checked into TFS, and a build and then deployment can be done.
VSDE also supports refactoring, schema compares, data compares, test data generation and more. It's a great tool, and we use it to manage our schemas.
In a previous company (which used Agile in monthly iterations), .sql files were checked into version control, and an (optional) part of the full build process was to rebuild the database from production and then apply each .sql file in order.
At the end of the iteration, the .sql instructions were merged into the script that creates the production build of the database, and the script files were moved out. So you're only applying updates from the current iteration, not going back to the beginning of the project.
Have you looked at a product called DB Ghost? I have not personally used it, but it looks comprehensive and may offer an alternative for point 4 in your question.
