Sharing database project in TFS - sql-server

I created a database project as part of my solution with scripts for my tables. I'm using database first, so all I do is run the project to build/deploy my tables to the database.
I'm working with a few others, so I checked the SQL project into TFS.
That way the other people can get the solution, run the SQL project, and generate the local database for themselves.
The problem is that it might generate the database under a different local instance. For example, on my home computer it was generated under (localdb)\Projects, but on my laptop under (localdb)\ProjectsV12.
This breaks the connection strings (which can of course be fixed), but it leaves me wondering: is there a better way to develop the SQL project collaboratively?

If you have the SQL code under source control, then everyone can open that solution to edit or create copies of the database. Ideally you would have an automated deployment process, but that doesn't work well for local dev.
Since sharing code is a bad idea, and you have said that the database is used by more than one solution, I would consider packaging and distributing it.
If you create an SSDT database project, you can compile your database into a package that can upgrade any instance. You can then share that .dacpac output easily.
You might even want to share it with NuGet so that each dependency is automatically updated.
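For example, once the project is built, deploying the resulting .dacpac to any instance is a single SqlPackage call. A minimal sketch (the project, file, and database names here are illustrative, not from the question):

    :: Build the SSDT project, then deploy the resulting .dacpac.
    :: All names and paths below are placeholders.
    msbuild MyDatabase.sqlproj /p:Configuration=Release
    SqlPackage.exe /Action:Publish /SourceFile:bin\Release\MyDatabase.dacpac /TargetServerName:"(localdb)\MSSQLLocalDB" /TargetDatabaseName:MyDatabase

The same .dacpac file can then be handed to teammates (or pushed as a NuGet package) and published against whatever local instance they happen to have.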

You can set up a SQL alias name to standardize the connection string across developer machines.
Alias .\SQLEXPRESS to (LocalDB)\MSSQLLocalDB
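If the alias route proves fiddly with LocalDB, an alternative (not from the answer above) that avoids aliases altogether is to have every developer create a LocalDB instance with an agreed-on name, so connection strings are identical on every machine. A sketch, assuming the instance name MyProjectDB:

    :: Create and start a shared-name LocalDB instance on each dev machine.
    sqllocaldb create MyProjectDB
    sqllocaldb start MyProjectDB
    :: All connection strings can then use Server=(localdb)\MyProjectDB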

Related

Deploy multiple databases with SSDT

We have an existing system with multiple databases on one SQL Server instance, and we want to deploy database changes using SQL Server Data Tools. Thus I've created a solution with one database project per database.
When I run a build, it creates a .dacpac file for each project. Ideally we want to bundle the deployment of database changes, such that all databases are deployed in one shot. I've seen that database projects can reference other projects and suppose that you can use this mechanism for bundling as well - but I am reluctant to add references just for the sake of deployment.
What is the recommended way to deploy multiple databases in one package?
I don't think you can do this. By default, each database gets its own dacpac. You can set up a script that builds and publishes all of the databases in one shot, but it will do them one at a time. I created a basic batch file some time ago that builds all of the dacpacs and then publishes each of them in order.
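A stripped-down version of that kind of batch file might look like the following; the project names, paths, and target server are placeholders, with the databases listed in dependency order:

    :: Build the whole solution, then publish each dacpac one at a time.
    msbuild MyDatabases.sln /p:Configuration=Release
    for %%D in (CoreDb LookupDb ReportingDb) do (
        SqlPackage.exe /Action:Publish /SourceFile:%%D\bin\Release\%%D.dacpac /TargetServerName:MyDevServer /TargetDatabaseName:%%D
    )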
It's surprising that there isn't a solid answer to this. I know Red Gate has a SQL automation tool, but your company will have to pay for it. I'd be interested to hear if you got a solid answer.

Copy local SqlServer database with visual studio solution

I've created a simple sports store (from Pro ASP.NET MVC 5 by Freeman) that I keep on a flash drive, since I work on my desktop computer, my laptop, or a lab computer on campus. Is there an easy way to bring the database tables along with it?
First, I'd recommend looking into a version control system (SVN, Git, etc.). Even if you don't use it for versioning, it will, at a minimum, help you sync your code between different computers.
The simplest (yet most basic) way of bringing your database along with the code is to add a SQL schema script alongside your code and run it on the local SQL Server on each machine. However, this approach doesn't handle schema updates very well... which brings me to my recommended solution: use an ORM (e.g. Entity Framework) and let it handle the creation of the database. If your schema needs to change, it can generate migration scripts that update the database on your other machines.
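For the schema-script route, "run it on the local SQL server on each machine" can be a single sqlcmd call; the script name and instance below are assumptions:

    :: Apply the checked-in schema script to the local Express instance.
    sqlcmd -S .\SQLEXPRESS -i SportsStore-schema.sql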

TFS and DATABASE PROJECTS (SQL Server)

We originally dismissed using database projects in conjunction with TFS as the solution for our deployment and source control needs. However, in the interest of thoroughness, I'm exploring and prototyping it.
I've set up my database project (with add to source control checked). I've checked in the changes. Now, where do you develop from?
I've tried ...
connecting to the remote development server to make changes
syncing schema to (localdb)\Projects and making changes there
making changes directly in the Source Control Explorer
With options 1 and 2 I don't see an automated way to add code to source control. Am I supposed to be working in the Source Control Explorer? (This seems a little silly.) Is there a way to commit the entire solution to source control? My apologies in advance; I'm a database developer, and this concept of a "solution" is very foreign to me.
Also, there was a lot of chatter about Visual Studio doing a lot of ugly things in the background that turned a lot of development shops off of database projects. Can someone share their experiences with me? What are some of the pitfalls and gotchas?
And yes, we have looked at Red Gate SQL Source Control (very nice tool).
Generally people do one of two things:
Develop in Visual Studio, via the Solution Explorer. Just open the project like you would any other project, add tables, indexes, etc. You even get the same GUI for editing DB objects as you get in SSMS. All changes will automatically be added to TFS Pending changes (just like any other code change), and can be checked in when you're ready.
Deploy the latest DB (using Publish in VS) to any SQL Server, make your changes in SSMS, then do a Schema Compare in Visual Studio to bring your changes back into your DB project so they can be checked into TFS.
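For option 1, the Publish step can also be scripted once you've saved a publish profile from the VS dialog. Roughly (the project and profile names are assumptions, and this requires the SSDT build targets on the machine):

    :: Build and publish the database project using a saved publish profile.
    msbuild MyDb.sqlproj /t:Build /t:Publish /p:SqlPublishProfilePath=MyDb.publish.xml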
I've been using DB projects for many years and I LOVE them! Every developer I've introduced them to, refuses to develop without them from that point on.
I'm going to explain briefly how we use DB projects with TFS.
We basically have one DB already done, and if we require any changes or new tables we create or alter them directly in SQL Server (each developer has their own dev SQL Server).
Then, in VS, from the SQL Server Object Explorer we drag the tables we want into the DB project, so that when we check in the changes, everyone on TFS can get them and then publish the project, which generates and executes an upgrade script against the DB.
This is how we develop when we need to add specific tables or records to the DB, so we don't have to send emails with scripts or store them in a specific location (even with source control). This way we can get the latest version of the project and publish it to ensure we have the latest DB version, although it requires the user who made the changes to add them to the DB project.
Another way would be to make all the changes directly in the DB project (which works without any problem) and then publish it. That is arguably the more correct way, since you make all the changes directly in a source-controlled project, but as you know, it's always more comfortable to work directly in SSMS.
Hope this helps somehow.
We use the SSDT tools and have implemented the SQL Server Database Project Type to develop our databases:
http://www.techrepublic.com/blog/data-center/auto-deploy-and-version-your-sql-server-database-with-ssdt/
The definitions of database objects and peripheral SQL code (e.g. functions, sprocs, triggers, etc.) sit within the Visual Studio project, and all changes are managed through VS. The interface is very similar to SSMS and, at this point, hasn't caused any issues.
The benefits of this approach for us are as follows:
An existing SQL database can be imported into the SQL Server Project and managed through Visual Studio.
SQL object definitions & code can be managed through the same version control system as the rest of the application code.
SQL Code can be checked for errors within Visual Studio in much the same way as you'd check your C# / VB for compilation / reference errors.
You can compare database schemas (within Visual Studio) between environments and easily identify key changes that you need to be aware of.
The SQL project can be compiled into a DACPAC file for automating deployment to different servers using a CI / build server (using the sqlpackage.exe utility, without any custom scripts or code); a sketch follows below.
In essence, developers can have a local version of the database to work on but manage any changes through VS, then publish the changes to their local database. Once the changes are complete, they are committed to your version control system and then built centrally and automatically through a CI / build server, to ensure that all changes integrate and play nicely in much the same way as your other code.
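On the CI point: besides publishing, sqlpackage.exe can also produce a report of what a deployment would change, which makes a useful build artifact before anything touches a real server. A sketch with assumed names:

    :: Generate an XML report of the changes a deployment would make,
    :: without modifying the target database.
    SqlPackage.exe /Action:DeployReport /SourceFile:bin\Release\MyDb.dacpac /TargetServerName:BuildSqlServer /TargetDatabaseName:MyDb /OutputPath:DeployReport.xml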
Hope that helps :)

Deploying MSSQL change scripts

So I am in the throes of developing our Continuous Integration practices. We are a .Net/MSSQL shop. We will all soon be on VS2012. We have settled on CruiseControl.Net as our CI server, using msbuild to compile our projects. We use SVN (possibly switching to Git later, but that's another discussion) for source control. I'm leaning towards using InstallShield to deploy code packages (usually web apps and/or batch executables) to our QA and production servers. (CCNet would build these MSIs as part of our CI.) We are also starting to include unit testing in our projects, and will use NUnit integrated with CCNet to run the tests automatically upon check-in.
So far this works for our standard web app/exe development. Where it does not fit in (yet) is with our MSSQL change management, or lack thereof. It's been pretty cowboy how we've done this. Some folks have used Migrator.Net. Others just do a SQL Compare with Redgate and generate a script. Still others have hand-written sql scripts. It may or may not be in SVN. "Source control" at the db level is basically "we have backups of our databases." Boo, hiss. Needless to say that if we want some consistency with our CI and with our deployments, we need to settle on something. So far I am leaning towards using VS SQL projects to handle the change management and deployment.
Note: we (developers) are not supposed to push changes. Sys admins do that. So we can't run anything to deploy code or sql.
So, 2 problems to solve (I think):
What "technique" to use so that our CI server blows away a CI version of the database so that unit tests can be tested against it. I've settled that VS2012 SQL projects can do that. CCNet can run msbuild against the db project, which recreates the database. This is fairly easy.
How to generate change scripts for our QA and prod environments? This one I'm stuck on.
VS can do a schema compare and then generate the sql script -- but it is dependent on sqlcmd. So our sys admins would have to run sqlcmd from the command prompt to deploy it... probably not ideal. Right?
I could run msbuild again to deploy... but I don't want the database re-created, I just want changes deployed.
So what are the options here? I need something self-contained for the admins to run, and to check in to SVN. Should I make another MSI for database deployments? Can CCNet/msbuild make some other kind of "deployment package" for database changes (not re-creation) that the sys admins can double-click and go?
How do you all handle this?
Thanks
Tom
Check out the SQL Server Data Tools package from the Microsoft site.
This will register a new SQL Server 2012 database project type to contain the definitions for all of your database structures. Upon build, this will generate a create script that you can use to deploy your database.
Then, for upgrading your database, use the SQLPACKAGE.EXE tool with the compiled output and the target database/server name to generate an Update.sql script.
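Since your sys admins do the deployments, the Script action is a good fit: the build produces a plain Update.sql that they can review and run themselves. Roughly (all names here are placeholders):

    :: Produce an upgrade script from the built dacpac against the target
    :: database, without executing anything.
    SqlPackage.exe /Action:Script /SourceFile:bin\Release\MyDb.dacpac /TargetServerName:QASqlServer /TargetDatabaseName:MyDb /OutputPath:Update.sql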
Update: Also, on the issue of how you're running unit tests, you could create supplemental methods that invoke the create script by launching a process and passing the path to the output create.sql script, then have your tests 'tear down' the database using the same method but with a DROP DATABASE statement.

Proper structure of asp.net website and database in visual studio

My main problem is: where does the database go?
The project will be on SVN and is developed using the ASP.NET MVC repository pattern. Where do I put the SQL Server database (.mdf file)? If I put it in App_Data, then my other team mates can check out the source and database and run it, with the database deployed in the VS instance.
The problems with this method are:
I cannot use SQL Server Management Studio with this database.
Most web hosts require me to deploy the database using their UI or SQL Server Management Studio; putting it in App_Data makes no sense there.
The connection string has to be edited each time I move from testing locally to testing on the web host.
If I create the database using SQL Server Management Studio, my problems are:
How do I keep it consistent with source control (team mates have to re-script the DB if the schema changes)?
The connection string again. (I'd like the production string to be used automatically on the production server.)
Is there a solution to all of the problems above? Maybe some patterns or tools that I am missing?
Basically your two points are correct: unless you're working off a central database, everyone will have to update their database when changes are made by someone else. If you're working off a central database, you can also run into situations where a database change is made (e.g. a column dropped) and the corresponding source code isn't checked in. Then you're all dead in the water until the source code is checked in, or the database is rolled back. Using a central database also means developers have no control over when database schema changes are pushed to them.
We have the database installed on each developer's machine (especially good since we target different DBs; each developer has one of the supported databases, giving us really good cross-platform testing as we go).
Then there is the central 'development' database, which the 'development' environment points to. It is built by continuous integration on each check-in, and upon a successful build/test it is published to development.
Changes that developers make to the database schema on their local machine need to be checked into source control. They are database upgrade scripts that make the required changes to the database from version X to version Y. The database is versioned. When a customer upgrades, these database scripts are run on their database to bring it up from their current version to the required version they're installing.
These dbpatch files are stored in the following structure:
./dbpatches
    ./23
        ./common
            ./CONV-2345.dbpatch
        ./pgsql
            ./CONV-2323.dbpatch
        ./oracle
            ./CONV-2323.dbpatch
        ./mssql
            ./CONV-2323.dbpatch
In the above tree, version 23 has one common dbpatch that is run on any database (it is ANSI SQL), and a specific dbpatch for each of the three databases that require vendor-specific SQL.
We have a database update script that developers can run, which applies any dbpatch that hasn't yet been run on their development machine (irrespective of version, since multiple dbpatches may be committed to source control during a single version's development).
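A bare-bones sketch of such an update script, for the mssql case only (everything here, including the patch-log caveat, is illustrative rather than the actual script):

    :: Apply every common and mssql dbpatch for version 23 via sqlcmd.
    :: A real runner would first consult a patch-log table and skip
    :: patches that have already been applied.
    for %%F in (dbpatches\23\common\*.dbpatch dbpatches\23\mssql\*.dbpatch) do (
        sqlcmd -S .\SQLEXPRESS -d MyAppDb -i "%%F"
    )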
Connection strings are maintained in NHibernate.config; if present, NHibernate.User.config is used instead, and NHibernate.User.config is excluded from source control. Each developer has their own NHibernate.User.config, which points to their local database and sets the appropriate dialects, etc.
When pushing to development, we have a NAnt script that does variable substitution in the config templates for us. This same script is used when going to staging, as well as when building packages for release. The NAnt script populates a template config file with variable values from the environment's settings file.
Use Management Studio or Visual Studio's Server Explorer. App_Data isn't used much "in the real world".
This is always a problem. Use a tool like SQL Compare from Red Gate or the built-in database compare tools of Visual Studio 2010.
Use Web.Config transformations to automatically update the connection string.
I'm not an expert by any means but here's what my partner and I did for our most recent ASP.NET MVC project:
Connection strings were always the same, since we were both running SQL Server Express on our development machines, as were our staging and production servers. You can just use a dot instead of the computer name (e.g. ".\SQLEXPRESS" or ".\SQL_Named_Instance").
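A quick way to sanity-check that the dot-style name resolves on a given machine (the instance name is an assumption):

    :: Verify the local named instance answers to the relative server name.
    sqlcmd -S .\SQLEXPRESS -Q "SELECT @@SERVERNAME"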
Alternatively you could also use web.config transformations for deploying to different machines.
As far as the database itself, we just created a "Database Updates" folder in the SVN repository and added new SQL scripts when updates needed to be made. I always thought it was a good idea to have an organized collection of database change scripts anyway.
A common solution to this type of problem is to have the database versioning handled in code rather than storing the database itself in version control. The code is typically executed on app_start, but it could be triggered in other ways (build/deploy process). Then developers can run their own local databases or use a shared development database. The common term for this is database migrations (migrating from one version to the next). Here is a Stack Overflow question about .NET tools/libraries to make this easier: https://stackoverflow.com/questions/8033/database-migration-library-for-net
This is the only way I would handle this on projects with multiple developers. I've used this successfully with teams of over 50 developers and it's worked great.
The Red Gate solution would be to use SQL Source Control, which integrates into SSMS. It maintains a SQL scripts folder structure in source control, which you can keep in the same folder/repository as your app code.
http://www.red-gate.com/products/SQL_Source_Control/
