We have an existing system with multiple databases on one SQL Server instance, and we want to deploy database changes using SQL Server Data Tools. Thus I've created a solution with one database project per database.
When I run a build, it creates a .dacpac file for each project. Ideally we want to bundle the deployment of database changes, such that all databases are deployed in one shot. I've seen that database projects can reference other projects and suppose that you can use this mechanism for bundling as well - but I am reluctant to add references just for the sake of deployment.
What is the recommended way to deploy multiple databases in one package?
I don't think you can do this. By default, each database gets its own dacpac. You can set up a script that can build/publish all databases in one shot, but it will do them one at a time. I created a basic batch file some time ago that would build all of the dacpacs and publish each of them in order.
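For illustration, a minimal PowerShell sketch of that kind of wrapper, assuming SqlPackage.exe is on the PATH and the solution has already been built; the paths, server name, and database-name convention are hypothetical:

$server = 'localhost'   # hypothetical target instance
Get-ChildItem -Path . -Recurse -Filter *.dacpac | ForEach-Object {
    $dbName = $_.BaseName   # assumes each project is named after its database
    & SqlPackage.exe /Action:Publish `
        /SourceFile:"$($_.FullName)" `
        /TargetServerName:$server `
        /TargetDatabaseName:$dbName
    if ($LASTEXITCODE -ne 0) { throw "Publish failed for $dbName" }
}

It still deploys the databases one at a time, but from the caller's point of view it is a single command.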
Surprising that there isn't a solid answer to this. I know Red Gate has a SQL automation tool, but your company will have to pay for it. Interested if you got a solid answer.
After an initial publish of my DB project (data-tier app) to a SQL Server (2019) instance, publishing again fails with a drift report, even though no changes were made to the database externally or within the VS project.
Drift report:
<Modifications>
<Object Name="[DummySqlLogin]" Parent="" Type="SqlUser" />
</Modifications>
To try to circumvent this, I followed a suggestion from an answer to a question from 2016:
"From your publish config file, use the following:"
<ExcludeUsers>True</ExcludeUsers>
<ExcludeLogins>True</ExcludeLogins>
Unlike the OP, this does allow me to publish to my database, which now leaves me with the question of how to deal with logins/passwords, especially in a scenario where we will be publishing to different environments.
I was planning on using SQLCMD variables: maintain separate publish profiles for the different types of environments, specify the passwords for each in there, and then place a pre-deployment script in the VS database project that sets up the logins/passwords for the SQL accounts using those SQLCMD variables.
Is there no better way of doing this? This probably works fine when you only have 5-10 environments, but what if you have 100?
Note: I want to avoid using commercial tools such as Redgate.
Here is one common approach:
Environments and deploy targets are not project sources.
Same for specific logins, users, and passwords. Those are security/administration items and are server-specific. They depend on the environment, not on product features. Why would the product's project sources depend on them? Imagine you have to change a password on one of the servers for security reasons. Can you logically connect that to recompiling all the sources?
I'd suggest considering removing all security items from the SSDT project except maybe roles (and configuring publish.xml to ignore all the removed kinds of objects). Maintaining 100+ servers surely requires different tools and approaches; SSDT and dacpacs have nothing to do with it. The solution could be based on Octopus Deploy, Ansible, or something else.
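For illustration, a minimal sketch of what that can look like at publish time, skipping security objects entirely while passing an environment-specific value as a SQLCMD variable; the dacpac, profile, and variable names below are made up (ExcludeObjectTypes and /v: are standard SqlPackage options):

& SqlPackage.exe /Action:Publish `
    /SourceFile:"Product.dacpac" `
    /Profile:"Prod.publish.xml" `
    /p:ExcludeObjectTypes="Logins;Users;Permissions;RoleMembership" `
    /v:EnvironmentName="Prod"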
Can you create a generalized deployment script from a Sql Server Db Project in VS 2015 that doesn't require a schema compare / publish against a specific target database?
Some background:
We are using Sql Server Database projects to manage our database schema. Primarily we are using the projects to generate dacpacs that get pushed out to our development environments. They also get used for brand new installations of our product. Recently we have developed an add-on to our product and have created a new db project for it, referencing our core project. For new installations of our product where clients want the add-on, our new project will be deployed.
The problem we are having is that we need to be able to generate a "generic" upgrade script. Most of our existing installations were not generated via these projects and all contain many "custom" stored procedures/etc specific to that client's installation. I am looking for a way to generate a script that would do an "If Not Exists/Create + Alter" without needing to specify the target database.
Our add-on project only contains stored procedures and a couple tables, all of which will be new to any client opting for this add-on. I need to avoid dropping items not in the project while being able to deploy all of our new "stuff". I've found the option to Include Composite Objects which I can uncheck so that the deployment is specific to our add-on, but publishing still requires me to specify a target database so that a schema compare can be performed and I get scripts that are specific to that particular database. I've played with pretty much every option and cannot find a solution.
Bottom Line: Is there a way for me to generate a generic script that I can give to my deployment team whenever the add-on is requested on an existing install without needing to do a schema compare or publish for each database directly from the project?
Right now I am maintaining a separate set of .sql files in our (non db) project following the if not exists/create+alter paradigm that match the items in the db project. These get concatenated during build of our add on so that we can give our deployment team a script to run. This is proving to be cumbersome and we'd like to be able to make use of the database projects for this, if at all possible.
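For reference, a rough PowerShell sketch of the pattern those hand-maintained files follow, plus the concatenation step; the procedure and folder names are invented:

New-Item -ItemType Directory -Force .\AddOnScripts | Out-Null
# One hand-maintained file per object: create a stub if missing, then alter it.
Set-Content .\AddOnScripts\usp_AddOnFeature.sql @'
IF OBJECT_ID(N'dbo.usp_AddOnFeature', N'P') IS NULL
    EXEC (N'CREATE PROCEDURE dbo.usp_AddOnFeature AS RETURN 0;');
GO
ALTER PROCEDURE dbo.usp_AddOnFeature
AS
BEGIN
    SET NOCOUNT ON;
    -- real procedure body goes here
END;
'@
# The add-on build then concatenates all such files into one deployment script:
Get-ChildItem .\AddOnScripts -Filter *.sql | Get-Content | Set-Content .\AddOnUpgrade.sql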
Best solution is to give the dacpacs to your installers. They run SQLPackage (maybe through a batch file or PowerShell) to point it at the server/DB to update. It would then generate the script or update directly. Sounds like they already have access to the servers so should be able to do this. SQLPackage should also be included on the servers or it can be run locally for the installer as long as they can see the target DB. This might help: schottsql.wordpress.com/2012/11/08/ssdt-publishing-your-project
There are a couple of examples of using PowerShell to do this, but it depends on how much you need to control DB names or Server names. A simple batch file where you edit/replace the Server/DB Names might suffice. I definitely recommend a publish profile and if this is hitting customer databases they could have modified, setting the "do not drop if not in project" options that show up is almost essential. As long as your customers haven't made wholesale changes to core objects, you should be good to go.
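A rough sketch of what the installer-side command might look like, generating a reviewable script instead of updating the database directly; the server and database names are placeholders:

& SqlPackage.exe /Action:Script `
    /SourceFile:"AddOn.dacpac" `
    /TargetServerName:"CLIENTSQL01" `
    /TargetDatabaseName:"ClientDb" `
    /OutputPath:"AddOnUpgrade.sql" `
    /p:DropObjectsNotInSource=False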
I created a database project as part of my solution with scripts for my tables. I'm using database first, so all I do is run the project to build/deploy my tables to the database.
I'm working with a few others so I checked the SQL project into TFS.
So the other people can get the solution, run the SQL project and generate the local database for themselves.
The problem is, it might generate them under another local instance. For instance, on my home computer, it generated it under (localdb)\Projects, but on my laptop, under (localdb)\ProjectsV12.
This breaks the connection strings (which of course can be fixed). But this leaves me wondering, is there a better way to develop the SQL project collaboratively?
If you have the SQL code under source control then everyone can open that solution to edit/create copies of the database. Ideally you have an automated process but that does not work for local dev.
Since sharing code is a bad idea, and you have expressed that the database is used by more than one solution, I would consider packaging and distributing it.
If you create a SSDT database project you can compile your database into a package that can upgrade any instance. You can then share that .dacpac output easily.
You might even want to share it via NuGet so that each dependency is automatically updated.
You can set up a SQL alias to standardize the connection string across developer machines, for example:
Alias .\SQLEXPRESS to (LocalDB)\MSSQLLocalDB
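As a sketch of one way to script an alias (client aliases live under the SQL Native Client registry keys; run elevated, and treat the alias name and target below as examples - aliasing to a LocalDB instance in particular may not work reliably):

$keys = 'HKLM:\SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo',
        'HKLM:\SOFTWARE\WOW6432Node\Microsoft\MSSQLServer\Client\ConnectTo'
foreach ($key in $keys) {
    if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
    # 'DBMSSOCN' selects TCP/IP; map the shared alias 'DevSql' to the local instance
    New-ItemProperty -Path $key -Name 'DevSql' -Value 'DBMSSOCN,localhost\SQLEXPRESS' `
        -PropertyType String -Force | Out-Null
}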
My main problem is: where does the database go?
The project will be on SVN and is developed using ASP.NET MVC with the repository pattern. Where do I put the SQL Server database (MDF file)? If I put it in App_Data, then my teammates can check out the source and database and run it, with the database being deployed in the VS instance.
The problems with this method are:
I cannot use SQL Management Studio with this database.
Most web hosts require me to deploy the database using their UI or SQL Management Studio; putting it in App_Data makes no sense.
The connection string has to be edited each time I move from testing locally to testing on the web host.
If I create the database using SQL Management Studio, my problems are:
How do I keep it consistent with source control (teammates have to re-script the DB if the schema changes)?
The connection string again (I'd like the production string to be used automatically on the production server).
Is there a solution to all my problems above? Maybe some pattern or tool that I am missing?
Basically your two points are correct - unless you're working off a central database, everyone will have to update their database when changes are made by someone else. If you're working off a central database you can also get into issues where a database change is made (i.e. a column dropped) and the corresponding source code isn't checked in. Then you're all dead in the water until the source code is checked in, or the database is rolled back. Using a central database also means developers have no control over when database schema changes are pushed to them.
We have the database installed on each developer's machine (especially good since we target different DBs, each developer has one of the supported databases giving us really good cross platform testing as we go).
Then there is the central 'development' database which the 'development' environment points to. It is built by continuous integration on each check-in, and upon a successful build/test it is published to development.
Changes that developers make to the database schema on their local machine need to be checked into source control. They are database upgrade scripts that make the required changes to the database from version X to version Y. The database is versioned. When a customer upgrades, these database scripts are run on their database to bring it up from their current version to the required version they're installing.
These dbpatch files are stored in the following structure:
./dbpatches
    ./23
        ./common
            ./CONV-2345.dbpatch
        ./pgsql
            ./CONV-2323.dbpatch
        ./oracle
            ./CONV-2323.dbpatch
        ./mssql
            ./CONV-2323.dbpatch
In the above tree, version 23 has one common dbpatch that is run on any database (it is ANSI SQL), and a specific dbpatch for each of the three databases that require vendor-specific SQL.
We have a database update script that developers can run which runs any dbpatch that hasn't been run on their development machine yet (irrespective of version - since multiple dbpatches may be committed to source control during a single version's development).
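As a rough sketch of such an update runner, assuming the SqlServer PowerShell module (Invoke-Sqlcmd) and a tracking table of applied patches; all names here are invented, and a real version would also filter to ./common plus the target vendor's folder:

Import-Module SqlServer
$db = @{ ServerInstance = 'localhost'; Database = 'DevDb' }
$applied = (Invoke-Sqlcmd @db -Query 'SELECT PatchName FROM dbo.AppliedPatches').PatchName
Get-ChildItem .\dbpatches -Recurse -Filter *.dbpatch | Sort-Object FullName |
    Where-Object { $applied -notcontains $_.Name } |
    ForEach-Object {
        Invoke-Sqlcmd @db -InputFile $_.FullName          # apply the patch
        Invoke-Sqlcmd @db -Query "INSERT dbo.AppliedPatches (PatchName) VALUES (N'$($_.Name)')"
    }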
Connection strings are maintained in NHibernate.config; if an NHibernate.User.config is present it is used instead, and NHibernate.User.config is excluded from source control. Each developer has their own NHibernate.User.config, which points to their local database and sets the appropriate dialects etc.
When pushing to development we have a NAnt script which does variable substitution in the config templates for us. The same script is used when going to staging as well as when building packages for release. The NAnt script populates a template config file with variable values from the environment's settings file.
Use Management Studio or Visual Studio's Server Explorer. App_Data isn't used much "in the real world".
This is always a problem. Use a tool like SQL Compare from Redgate or the built-in database compare tools of Visual Studio 2010.
Use Web.Config transformations to automatically update the connection string.
I'm not an expert by any means but here's what my partner and I did for our most recent ASP.NET MVC project:
Connection strings were always the same since we were both running SQL Server Express on our development machines, as were our staging and production servers. You can just use a dot instead of the computer name (e.g. ".\SQLEXPRESS" or ".\SQL_Named_Instance").
Alternatively you could also use web.config transformations for deploying to different machines.
As far as the database itself, we just created a "Database Updates" folder in the SVN repository and added new SQL scripts when updates needed to be made. I always thought it was a good idea to have an organized collection of database change scripts anyway.
A common solution to this type of problem is to have the database versioning handled in code rather than storing the database itself in version control. The code is typically executed on app_start but could be triggered in other ways (build/deploy process). Then developers can run their own local databases or use a shared development database. The common term for this is database migrations (migrating from one version to the next). Here is a Stack Overflow question on .NET tools/libraries to make this easier: https://stackoverflow.com/questions/8033/database-migration-library-for-net
This is the only way I would handle this on projects with multiple developers. I've used this successfully with teams of over 50 developers and it's worked great.
The Red Gate solution would be to use SQL Source Control, which integrates into SSMS. It maintains a SQL scripts folder structure in source control, which you can keep in the same folder/repository that you keep your app code in.
http://www.red-gate.com/products/SQL_Source_Control/
For the last few years I was the only developer that handled the databases we created for our web projects. That meant that I got full control of version management. I can't keep up with doing all the database work anymore and I want to bring some other developers into the cycle.
We use Tortoise SVN and store all repositories on a dedicated server in-house. Some clients require us not to have their real data on our office servers so we only keep scripts that can generate the structure of their database along with scripts to create useful fake data. Other times our clients want us to have their most up to date information on our development machines.
So what workflow do larger development teams use to handle version management and sharing of databases? Most developers prefer to deploy the database to an instance of SQL Server on their development machine. Should we:
Keep the scripts for each database in SVN and make developers export new scripts if they make even minor changes
Detach databases after changes have been made and commit MDF file to SVN
Put all development copies on a server on the in-house network and force developers to connect via remote desktop to make modifications
Some other option I haven't thought of
Never have an MDF file in the development source tree. MDFs are a result of deploying an application, not part of the application's sources. Thinking of the database in terms of development sources is a shortcut to hell.
All the development deliverables should be scripts that deploy or upgrade the database. Any change, no matter how small, takes the form of a script. Some recommend using diff tools, but I think they are a rat hole. I champion versioning the database metadata and having scripts that upgrade from version N to version N+1. At deployment the application checks the currently deployed version, then runs all the upgrade scripts that bring it up to the current version. There is no script that deploys the current version directly: a new deployment first deploys v0 of the database, then goes through all the version upgrades, including dropping objects that are no longer used. While this may sound a bit extreme, it is exactly how SQL Server itself keeps track of the various changes occurring in a database between releases.
As simple text scripts, all the database upgrade scripts are stored in version control just like any other sources, with tracking of changes, diff-ing and check-in reviews.
For a more detailed discussion and some examples, see Version Control and your Database.
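A bare-bones PowerShell sketch of that version-stepping idea, assuming Invoke-Sqlcmd from the SqlServer module; the table name, folder layout, and current-version constant are all hypothetical:

Import-Module SqlServer
$target = 7   # schema version shipped with this release (example)
$deployed = (Invoke-Sqlcmd -ServerInstance 'localhost' -Database 'AppDb' `
    -Query 'SELECT ISNULL(MAX(Version), 0) AS V FROM dbo.SchemaVersion').V
for ($v = $deployed + 1; $v -le $target; $v++) {
    # each script makes the v(N-1) -> vN changes and records its own row in dbo.SchemaVersion
    Invoke-Sqlcmd -ServerInstance 'localhost' -Database 'AppDb' -InputFile ".\upgrade\v$v.sql"
}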
Option (1). Each developer can have their own up-to-date local copy of the DB ("up to date" meaning recreated from the latest version-controlled scripts: base + incremental changes + base data + run data). In order to make this work you should have the ability to 'one-click' deploy any database locally.
You really cannot go wrong with a tool like Visual Studio Database Edition. This is a version of VS that manages database schemas and much more, including deployments (updates) to target server(s).
VSDE integrates with TFS so all your database schema is under TFS version control. This becomes the "source of truth" for your schema management.
Typically developers will work against a local development database, and keep its schema up to date by synchronizing it with the schema in the VSDE project. Then, when the developer is satisfied with his/her changes, they are checked into TFS, and a build and then deployment can be done.
VSDE also supports refactoring, schema compares, data compares, test data generation and more. It's a great tool, and we use it to manage our schemas.
In a previous company (which used Agile in monthly iterations), .sql files were checked into version control, and an (optional) part of the full build process was to rebuild the database from production and then apply each .sql file in order.
At the end of the iteration, the .sql instructions were merged into the script that creates the production build of the database, and the script files were moved out. So you're only applying updates from the current iteration, not going back to the beginning of the project.
Have you looked at a product called DB Ghost? I have not personally used it, but it looks comprehensive and may offer an alternative, as per point 4 in your question.