Manual Database Deployments to SSDT on an Existing Database in VSTS

We have a Live database project which we currently update via manual deployments only. As part of our process improvement initiatives, however, we're proposing to switch to SSDT to enable us to improve our deployment process and introduce some automation into it.
To facilitate a Proof of Concept, we've provisioned a test environment where we can implement and test the SSDT deployments to a clone or like-for-like representation of the existing Live database which is currently maintained in a Git repo on VSTS.
I do, however, have a few concerns and queries relating to this.
What would be the recommended way to clone or re-create the Live database for our SSDT deployments to the test environment, without impacting the Live environment?
Will importing the Live database into our SSDT project and then publishing to the test environment have any adverse effect on the Live environment and can we expect any inconsistencies between the two databases following the import?
Should our Proof of Concept prove successful, will we need to migrate all our database assets to a new SSDT project, or can the SSDT project run in parallel with our existing Live database project and how?
Will any switch or migration to SSDT result in a downtime or outage for our project team?
If a migration of our existing database assets to a new SSDT project is not required, could we then integrate the SSDT functionality into the existing project?

Let's go through all your questions:
What do you mean by clone: the structure only, or the data as well? In either case you'll need to import the schema: create a new project and import the schema by right-clicking on the project. If you need the data as well, the best way is backup/restore (a sketch follows after these answers).
SSDT doesn't naturally work with data; it deals only in DDL. So when you sync the structure, the performance impact depends on the changes being made: re-creating an index can take a lot of time, whereas creating a table will not use many resources. Once again, if you are talking about backups, you need to measure how much they affect you.
SSDT is simply the way you store and develop your code; it's up to you whether to go with it or not. In my opinion SSDT is the best tool for SQL Server development, though it can be a challenge to set it up correctly.
As I said already, it all depends on the changes. Some changes will have no downtime; others can be very complex. SSDT just generates the SQL script, and there is no difference whether that script is run manually or automatically by SSDT.
I'm not sure what you mean by migration. SSDT is the way you store and develop your code. Any SQL Server database can be brought into SSDT (for some existing databases you'll need to put in some effort to get them into SSDT, but it's entirely possible).
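For the backup/restore route mentioned in the first point, a minimal sketch; the database name, logical file names, share, and paths here are all assumptions for illustration:

    -- On the Live server: take a copy-only backup so the existing
    -- backup chain on Live is not disturbed.
    BACKUP DATABASE [LiveDb]
    TO DISK = N'\\backups\LiveDb_clone.bak'
    WITH COPY_ONLY, COMPRESSION, CHECKSUM;

    -- On the test server: restore under a new name, relocating the
    -- data and log files (logical names are assumptions; check them
    -- first with RESTORE FILELISTONLY).
    RESTORE DATABASE [LiveDb_Test]
    FROM DISK = N'\\backups\LiveDb_clone.bak'
    WITH MOVE N'LiveDb' TO N'D:\Data\LiveDb_Test.mdf',
         MOVE N'LiveDb_log' TO N'D:\Logs\LiveDb_Test_log.ldf',
         RECOVERY;

Because the backup is copy-only and the restore happens on a separate instance, the Live environment is not touched beyond the read cost of the backup itself.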

Related

Keeping a SQL Server Database Project and the Actual Database (in SSMS) in sync during development

I am trying to figure out what the normal workflow should be between a SQL Server Database Project and the actual database in SQL Server (SSMS) once they are in sync. The main reason I am using one of these projects is to keep everything in Source Control.
Here are the steps I took to set things up:
Create the actual database and entire schema in SQL Server Management Studio.
Create a new SQL Server Database Project in VS2015.
Right click the project and import the actual database.
Now that the project is in sync with the actual database, how should I go about making changes going forward? Do I make them in the database project and then republish it, or do I make them directly in the database schema? Basically, I am trying to avoid making schema changes in both places during development.
Generally the best practice is to make changes to the project, check those into source control, and publish the changes to your database as needed. Your project then becomes your source. You can branch it, merge it, and do whatever else your development requires. You won't be republishing your database, though: you'll update your existing one when you publish your project. Publishing does a diff between your project and the db, then makes the appropriate updates/alters to bring your database in line with the project.
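For instance, after adding a column in the project, the generated publish script is typically an incremental ALTER rather than a rebuild. A hypothetical example of its shape (table, column, and constraint names invented):

    -- Shape of the incremental script a publish typically generates:
    -- an ALTER, not a DROP/CREATE, so existing data is preserved.
    ALTER TABLE [dbo].[Customer]
        ADD [LoyaltyTier] TINYINT NOT NULL
            CONSTRAINT [DF_Customer_LoyaltyTier] DEFAULT (0);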
I blogged about my experiences with SSDT here: http://schottsql.blogspot.com/search/label/SSDT, but there are some other great resources available if you look. There are definitely different ways to do things and what I chose may not fit your environment exactly.

Deploying latest changes to production DB - without loss of data

I have been doing some research on the correct procedure to follow when working with both a development and a live production database. The best article I have found is this one: Strategies for Database Development and Deployment, but I cannot accept the idea that I have to manually maintain a Word document of every change I make to the DB. That seems ridiculous to me...
I use SQL Server Management Studio to manage my SQL databases both in dev and in prod. Is there a way to deploy the latest changes to production WITHOUT destroying tables and data? Can someone please point me to a good procedural article on how this is done in SSMS?
Thanks
It is irresponsible to make changes to a database design without creating change scripts that are put into source control.
However, if you are already in this circumstance, I suggest buying Red Gate's SQL Compare. It will look at the two databases and script the differences. You still can't run this willy-nilly, though: sometimes you have made changes to the dev database that are not yet part of the current version being pushed to prod, and SQL Compare has no way to know this. It is far better to create the scripts as you go (using ALTER TABLE when the table already exists so as not to disrupt existing data) and keep them in source control with the rest of the code that you will be pushing at the same time.
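As an example of the kind of script meant here, a non-destructive, re-runnable change might look like this (table and column names are invented):

    -- Adds the column only if it doesn't already exist, so the
    -- script can be re-run safely and no data is disrupted.
    IF NOT EXISTS (SELECT 1 FROM sys.columns
                   WHERE object_id = OBJECT_ID(N'dbo.Orders')
                     AND name = N'ShippedDate')
    BEGIN
        ALTER TABLE dbo.Orders ADD ShippedDate DATETIME NULL;
    END;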
The only right way to do it in production - with or without Management Studio - is to prepare, check, test, and run the scripts manually.
WITH A FRESH BACKUP!
A common strategy is to keep an ordered set of change scripts, e.g. prefixed with date or database version, which can easily be tested on the development database by starting with a fresh backup from production. The change scripts can often be generated from SQL Server Management Studio when making changes, or could be crafted manually in case of more complex changes.
Another suggestion would be to version control the database definitions (tables, procedures etc). This can be easily achieved by using SQL Server Management Studio to generate create scripts for all objects after each update. This way you can easily compare changes made over time, or between different environments.

How to Develop TSQL in Visual Studio 2010 Database Projects

Silly sounding question, I know... Let me lay some groundwork first.
I have successfully created a database project comprising the hundreds of tables, stored procedures, indexes, et al. that make up our production database.
I have successfully added the solution to source control (TFS).
I have made a change (as a test) to some of the objects and generated a deployment script, and the whole system is very impressive, I must say. But it seems the strength of VS 2010, from a DB perspective, is deployment and not necessarily development.
I am totally baffled by the day-to-day workflow involved in database/TSQL development using Visual Studio. Let's suppose I need to add a few columns to a table and modify related stored procedures to return/update the data in these columns.
While it's easy enough to modify all the scripts in my database model, I'd like to be able to isolate them against a dev database where I can do some testing... But it's as simple as not being able to update a proc if it already exists without manually changing the script to an ALTER (or adding DROP code prior to the CREATE). Having to do this once or twice is a non-issue, but in a real dev environment we do this all day long.
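For reference, the re-runnable pattern in question looks something like this (procedure, table, and column names invented):

    -- Drop-and-recreate so the same script works whether or not
    -- the procedure already exists (CREATE OR ALTER did not exist
    -- in the SQL Server versions of the VS 2010 era).
    IF OBJECT_ID(N'dbo.usp_GetCustomer', N'P') IS NOT NULL
        DROP PROCEDURE dbo.usp_GetCustomer;
    GO
    CREATE PROCEDURE dbo.usp_GetCustomer
        @CustomerId INT
    AS
    BEGIN
        SET NOCOUNT ON;
        SELECT CustomerId, Name
        FROM dbo.Customer
        WHERE CustomerId = @CustomerId;
    END;
    GO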
Perhaps the answer is to perform frequent deployments to the dev server as I debug and make changes to procs, for instance? That's quite a bit of overhead: I could execute the necessary scripts manually in a few seconds, whereas building and deploying takes a few minutes. Plus, if three of us are deploying different changes to a dev DB, wouldn't we overwrite each other's modifications?
Sorry to be so longwinded, but I can't help but think I am missing something simple here.
Are there any books/tutorials/webinars that showcase this type of approach to actual development?
I think you've hit the nail on the head. In order to test your modified stored procedures, you have to go through the deployment step to update your database. That's the drawback of the offline development model.
Here at Red Gate we've had numerous requests to make SQL Source Control support the Database Project, which would allow developers to benefit from the 'online' development model whilst still benefiting from the Database Project features.
[EDIT] We've added 'Beta' support for the database project in SQL Source Control, which allows connected SSMS development against the database project format. Simply link to the folder with the .sqlproj file from SQL Source Control and start developing! [/EDIT]
In the meantime, you'll have to keep deploying to dev on a regular basis!
An alternative is to develop on a real database, and use the Schema Compare feature to synchronize back to your Database Project. Schema Compare is available in the Premium and Ultimate editions of Visual Studio.
David Atkinson
Product Manager
Red Gate Software

How do you Move Dev Database Changes to Production Database?

I have been working on a project and gotten it through the first stage. However, the requirements ended up changing, and I have to add new tables and redo some of the foreign key references in the DB.
The problem I have is my lack of knowledge about how to apply this kind of change to a staging and then a production database once I've finished development on the dev database.
What are some strategies for migrating database schema changes and maintaining data in the database?
About as far as my knowledge goes is to open up SQL Server Management Studio and start adding tables manually. I know this is probably a bad way to do it, so I'm looking for how to do it properly while accepting that I probably started out wrong.
For maintaining schema changes you can use ApexSQL Diff, a SQL Server and SQL Azure schema comparison and synchronization tool, and for maintaining data in the database you can use ApexSQL Data Diff, a SQL Server and SQL Azure data comparison and synchronization tool.
Hope this helps
Disclaimer: I work for ApexSQL as a Support Engineer
You need to have something called a "KIT". Obviously, if you are maintaining some kind of source control, all the scripts for the changes you make in the development environments should be maintained in the source control tool.
Once you are done with all the scripts/changes you deem certified to move to the next higher environment, prepare the kit with all these scripts in folders (ideally categorized as Procedures, Tables, Functions, Bootstraps), and then have batch files that execute the scripts in the kit in a particular order using the OSQL command-line utility.
Have separate batch files for UAT/staging/production so that you can just double-click the batch file to execute the kit against the appropriate server. Check the OSQL options.
This way all your environments are in sync!
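As a sketch of how such a kit's ordering might be kept in one place, here is a master script run in SQLCMD mode; the folder layout and file names are invented, and the :r directive includes each script in order:

    -- master.sql: run with  sqlcmd -S <server> -d <database> -i master.sql
    -- Executes the kit's scripts in a fixed, explicit order.
    :r .\Tables\001_create_orders.sql
    :r .\Tables\002_alter_customer.sql
    :r .\Functions\010_fn_order_total.sql
    :r .\Procedures\020_usp_get_orders.sql
    :r .\Bootstraps\090_seed_lookups.sql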
I typically use something like the SQL Server Publishing Wizard to produce SQL scripts of the changes. That is a rather simple and easy approach. The major downside with that tool is that the produced script will drop and recreate tables that are not changed but are used by procedures that have changed (and I can't understand why), so there is some manual labour involved in going through the script and removing things that don't need to be there.
Note that you don't need to download and install this tool; you can launch it from within Visual Studio. Right-click on a connection in the Server Explorer and select "Publish to Provider" in the context menu.
Red Gate SQL Compare and SQL Data Compare all the way. Since my company bought it, it saved me tons of time staging our databases from DEV to TEST to ACCEPTANCE to PRODUCTION.
And you can have it synchronize with a scripts folder too for easy integration in a source control system.
http://www.red-gate.com
You might want to check out a tool like Liquibase: http://liquibase.org/
You can use Visual Studio 2015. Go to Tools => SQL Server => New Schema Comparison.
Step 1: Select the source and target database, then click the Compare option.
Step 2: Once the comparison has completed, click the Generate Script icon (Shift+Alt+G). This generates the commit script.
Step 3: To generate a rollback script for the database changes, just swap the databases from step 1.
There are some tools available to help you with that.
If you have the Visual Studio Team edition, check out database projects (aka DataDude, aka Visual Studio Team Edition for Database Professionals). See here and here.
It allows you to generate a model from the dev/integration database and then (for many, but not all cases) automatically create scripts which update your prod database with the changes you made to dev/integration.
For VS 2008, make sure you get the GDR2 patches.
We have found the best way to push changes is to treat database changes like code. All changes are in scripts, they are in source control, and they are part of a version. Nothing is ever, under any circumstances, pushed to prod that is not scripted and in source control. That way you don't accidentally push changes that are in dev but not yet ready to be pushed to prod. Further, you can restore prod data to the dev box and rerun all the scripts not yet pushed, and you have fresh data with all the dev work preserved. This also works great when you have lookup values in tables that are changing which you don't want pushed to prod until other things move as well. Script the insert and put it with the rest of the code for the version.
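For the lookup-value case, a scripted insert of the sort described is typically written to be re-runnable; a sketch (table and values invented):

    -- Idempotent seed: only inserts rows that are missing, so the
    -- same script can ship with every version safely.
    INSERT INTO dbo.OrderStatus (StatusCode, StatusName)
    SELECT v.StatusCode, v.StatusName
    FROM (VALUES (1, N'Open'),
                 (2, N'Shipped'),
                 (3, N'Closed')) AS v (StatusCode, StatusName)
    WHERE NOT EXISTS (SELECT 1
                      FROM dbo.OrderStatus t
                      WHERE t.StatusCode = v.StatusCode);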
It's nice to use those tools to do a compare to see if something is missed in the scripts, but I would NEVER rely on them alone. Far too much risk of pushing something "not yet ready for prime time" to prod.
A good database design tool (such as Sybase Powerdesigner) will allow you to create the design changes to the data model, then generate the code to implement those changes. You can then store and run the code as you choose. This tool should also be able to do reverse engineering when you inherit a database you didn't build.
Finding all the changes between development and production is often difficult even in an organized, well-documented environment. Idera has a tool for SQL Server which will detect structural differences between your development and production database and another tool which detects changes in the data. In fact, I often use these to go the other direction and sync development with production to start a new project.

Managing SQL Server database version control in large teams

For the last few years I was the only developer that handled the databases we created for our web projects. That meant that I got full control of version management. I can't keep up with doing all the database work anymore and I want to bring some other developers into the cycle.
We use Tortoise SVN and store all repositories on a dedicated server in-house. Some clients require us not to have their real data on our office servers so we only keep scripts that can generate the structure of their database along with scripts to create useful fake data. Other times our clients want us to have their most up to date information on our development machines.
So what workflow do larger development teams use to handle version management and sharing of databases? Most developers prefer to deploy the database to an instance of SQL Server on their development machine. Should we:
1. Keep the scripts for each database in SVN and make developers export new scripts if they make even minor changes
2. Detach databases after changes have been made and commit the MDF file to SVN
3. Put all development copies on a server on the in-house network and force developers to connect via remote desktop to make modifications
4. Some other option I haven't thought of
Never have an MDF file in the development source tree. MDFs are a result of deploying an application, not part of the application sources. Thinking of the database file as a development source is a shortcut to hell.
All the development deliverables should be scripts that deploy or upgrade the database. Any change, no matter how small, takes the form of a script. Some recommend using diff tools, but I think they are a rat hole. I champion versioning the database metadata and having scripts that upgrade from version N to version N+1. At deployment the application checks the currently deployed version and then runs all the upgrade scripts that bring it up to current. There is no script that deploys the current version directly: a new deployment first deploys v0 of the database and then goes through all the version upgrades, including dropping objects that are no longer used. While this may sound a bit extreme, it is exactly how SQL Server itself keeps track of the various changes occurring in a database between releases.
As simple text scripts, all the database upgrade scripts are stored in version control just like any other sources, with tracking of changes, diff-ing and check-in reviews.
For a more detailed discussion and some examples, see Version Control and your Database.
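A minimal sketch of that version-stamped mechanism; the SchemaVersion table, its columns, and the v42 change are assumptions for illustration:

    -- Upgrade script 42: runs only if the database is still below
    -- version 42, so replaying the whole ordered set is always safe.
    IF ISNULL((SELECT MAX(Version) FROM dbo.SchemaVersion), 0) < 42
    BEGIN
        ALTER TABLE dbo.Customer ADD Email NVARCHAR(256) NULL;
        INSERT INTO dbo.SchemaVersion (Version, AppliedAt)
        VALUES (42, SYSUTCDATETIME());
    END;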
Option (1). Each developer can have their own up-to-date local copy of the DB (up to date meaning recreated from the latest version-controlled scripts: base + incremental changes + base data + run data). In order to make this work you should have the ability to 'one-click' deploy any database locally.
You really cannot go wrong with a tool like Visual Studio Database Edition. This is a version of VS that manages database schemas and much more, including deployments (updates) to target server(s).
VSDE integrates with TFS so all your database schema is under TFS version control. This becomes the "source of truth" for your schema management.
Typically developers will work against a local development database, and keep its schema up to date by synchronizing it with the schema in the VSDE project. Then, when the developer is satisfied with his/her changes, they are checked into TFS, and a build and then deployment can be done.
VSDE also supports refactoring, schema compares, data compares, test data generation and more. It's a great tool, and we use it to manage our schemas.
In a previous company (which used Agile in monthly iterations), .sql files were checked into version control, and an (optional) part of the full build process was to rebuild the database from production and then apply each .sql file in order.
At the end of the iteration, the .sql instructions were merged into the script that creates the production build of the database, and the script files were moved out. So you're only applying updates from the current iteration, not going back to the beginning of the project.
Have you looked at a product called DB Ghost? I have not personally used it, but it looks comprehensive and may offer an alternative, as per point 4 in your question.