Script for pushing database changes to servers - sql-server

This is probably the most classic database problem.
I have an E-commerce software solution hosted on a SQL Server for data and a web server for the frontend. Every instance/customer has its own database on SQL Server 2008.
During development of the next version, I might change or add tables, views, stored procedures etc.
How do I publish these changes to all databases without losing data? It should be done via a script or something similar. Centralized management is the key...

Perhaps it's something you've already considered, but my company uses software from Red Gate (http://www.red-gate.com/) which compares our development version of the DB and the production one, generates and executes scripts to bring production on par with development.
(I'm not a sales person from Red Gate, but I think this might be what you're looking for)

I use SQL Compare for schema changes and SQL Data Compare for data changes. Works like a charm!

This problem is essentially one of automating the manual process of logging on to a SQL Server and running a script against one or more databases to make the modifications you need.
It's made worse, of course, if the instances of SQL Server that you need to update are remote from you, and therefore not directly accessible.
It's also vital to ensure that the scripts are applied in sequence - there would be no point in running the "add index" script before the "create table" script.
The way we've solved this is with a web service that packages script files as datasets, and delivers them in the correct sequence to the remote systems when they call home.
On the remote SQL Server, we have a .NET application which calls the web service, downloads the script files, unpacks them and applies them to the database.
When the remote system calls in, it supplies the ID of the most recent upgrade it has. When the web service completes, it knows the last one it delivered. It's therefore trivial to know what level the remote systems are at.
The only manual intervention required is to create the scripts in the first place, and upload them to the central server.
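A minimal sketch of the version bookkeeping such a scheme relies on; the table and column names here are illustrative, not the poster's actual schema:

    -- Each applied upgrade is recorded with a monotonically increasing ID.
    CREATE TABLE dbo.SchemaUpgrade (
        UpgradeId   INT           NOT NULL PRIMARY KEY,
        Description NVARCHAR(200) NOT NULL,
        AppliedAt   DATETIME      NOT NULL DEFAULT GETDATE()
    );
    GO
    -- When the remote system calls home, it reports its highest applied ID,
    -- and the web service returns every newer script in sequence.
    SELECT MAX(UpgradeId) AS CurrentUpgradeId FROM dbo.SchemaUpgrade;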

A script should be executed on the SQL Server machine by the DB admin.
The basic algorithm of such a script is: create a backup, then for each table in a loop, lock the table, alter it, and release the lock.
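As a minimal sketch of that pattern (database, table, and column names are placeholders):

    -- Take a safety backup first.
    BACKUP DATABASE [ShopDb]
    TO DISK = N'D:\Backups\ShopDb_PreUpgrade.bak'
    WITH INIT;
    GO
    -- Each ALTER holds a schema-modification lock on its table for the
    -- duration of the transaction, then releases it on COMMIT.
    BEGIN TRANSACTION;
        ALTER TABLE dbo.Orders ADD TrackingNumber NVARCHAR(50) NULL;
    COMMIT TRANSACTION;
    GO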

Another poster mentioned the Red Gate products, and I'll throw another commercial product out there - Quest Change Director:
http://www.quest.com/change-director-for-sql-server/
Disclaimer: I work for Quest, although I'm not in sales. Change Director does comparisons, syncing, links to a change management system, can use your dev/qa server as a source or use T-SQL scripts, has an audit trail and rollback capabilities, etc.
Like you said, central management is key, and this product focuses on that.

Related

Automating SQL Server Dev database refresh from Prod

Seeking some advice before I dig in. Let me explain the ask and my current process.
We currently have Dev teams who require refreshed data from the Prod databases placed into their DEV databases. Their requirements change: sometimes they just need tables, and other times they need different subsets of the items below
Tables
Views
Stored Procedures
Users
Schemas
Currently the process is completely manual and is as outlined below
Disable Job responsible for replication of Prod DB (actually standby)
Highlight Prod DB and "Generate Scripts"
Select the options that are required (see above, e.g. tables, views, etc.)
Back up Dev DB (just in case)
Use sp_msforeachtable to drop each table from the dev DB (see the sketch after this list)
Execute the script that was generated in step 2 on the dev DB
Then use the import wizard to pull the data from the prod source
An additional sql script is often required to run post import on new Dev DB (scrubber)
Enable Job on prod for repl
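A minimal sketch of the drop-all-tables step, assuming the undocumented sp_MSforeachtable procedure; the foreign keys are dropped first because DROP TABLE fails while other tables still reference the target:

    USE DevDb;  -- placeholder name
    GO
    -- Drop every foreign key so the table drops don't fail on references.
    DECLARE @sql NVARCHAR(MAX) = N'';
    SELECT @sql += N'ALTER TABLE '
        + QUOTENAME(OBJECT_SCHEMA_NAME(parent_object_id)) + N'.'
        + QUOTENAME(OBJECT_NAME(parent_object_id))
        + N' DROP CONSTRAINT ' + QUOTENAME(name) + N';'
    FROM sys.foreign_keys;
    EXEC sp_executesql @sql;
    GO
    -- Then drop each table.
    EXEC sp_MSforeachtable 'DROP TABLE ?';
    GO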
The SQL Server instance hosts can change, as can the databases, so variables will need to be passed. The SQL Servers are 2008 on Windows 2008. The box that I host the script/instance on can be any version of Windows and any version of SQL Server.
I'm hoping to automate this process, at first just for the SA teams (so it could be PS or CLI). Eventually (hopefully sooner rather than later), however, I'd present this in some type of UI to the dev teams so they can manage it themselves.
I'd prefer if this all runs from a management box running SQL Server, and not from the SQL Server instances that host the databases. I'm not sure what options are available, but I suspect SSIS could be used, or PowerShell and SMO, and I'm sure there are other, cruder ways.
I'd like this to be somewhat elegant so it's easily presentable to management. I'm comfortable with PowerShell and SQL but have no experience with SSIS.
Anyway looking for some suggestions.
EDIT:
So my requirements have actually changed. I now need to scrub the data, then back up, then post to a share for dev. I'm nearing completion of my script, which is PowerShell using SMO. I'll give a brief description below, and when I'm complete I'll post more details. Prod is over the WAN, as are the backups. We have log shipping enabled to my site, which is the data I have to work with. The steps will probably make some cringe, but it's necessary because my DB is in standby.
Create a new database by looping through the source DB using SMO for files/file settings
Back up the newly created database with standby/read-only
Stop the job for log shipping to the source DB
Take the source DB offline
Take the newly created DB offline
Replace the newly created DB's files with the source DB's files
Bring both DBs back online
Start the job for log shipping to the source DB
Restore the newly created / newly copied-file DB with recovery
Execute a .sql script to scrub the new DB
Back up the new DB
Copy to the share for dev
So that's it; all SQL-related work is done through SMO. I'm pretty much done: I've built out functions for each step, which all work, and I just have to pull it all together.
Not pretty, but it does the job... damn that WAN! A rough T-SQL equivalent of the core steps is sketched below.
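The poster drove all of this through SMO from PowerShell; as a rough illustration only, the core offline/swap/restore steps map to T-SQL along these lines (database names and paths are placeholders):

    -- Take both databases offline before swapping files at the OS level.
    ALTER DATABASE [NewCopy] SET OFFLINE WITH ROLLBACK IMMEDIATE;
    ALTER DATABASE [SourceStandby] SET OFFLINE WITH ROLLBACK IMMEDIATE;

    -- (Copy SourceStandby's .mdf/.ldf files over NewCopy's files here.)

    ALTER DATABASE [SourceStandby] SET ONLINE;
    ALTER DATABASE [NewCopy] SET ONLINE;

    -- Bring the copy out of standby/read-only so it can be scrubbed.
    RESTORE DATABASE [NewCopy] WITH RECOVERY;

    -- After the scrub script runs, take the backup that ships to dev.
    BACKUP DATABASE [NewCopy]
    TO DISK = N'D:\Backups\NewCopy_Scrubbed.bak'
    WITH COMPRESSION, INIT;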
EDIT 2:
I ended up backing up, copying local, scrubbing, backing up again with compression, then copying across that WAN overnight. I did this all through Task Scheduler / PS / SMO.
Thanks to all that offered advice.
"Automating" what you describe sounds very ambitious - probably impractical given the complexity of the requirements. I would aim for "streamlining" a process rather than total automation.
I would use the SQL Server Data Tools (SSDT) for steps 2, 3, 5 & 6. It can generate and run alter scripts for you, based on a schema compare.
https://msdn.microsoft.com/en-au/data/tools.aspx
You typically start with SSDT by "Importing" the entire database schema into a Visual Studio Project. You can then use the "Schema Compare" tool to see what the differences are between your project and various database environments. Combined with source control (e.g. Visual Studio Online), this will give you a much better handle on what the changes are between environments, avoiding surprises.
I like the import wizard for 7 - I would save the generated SSIS packages and edit them to cover step 8 (if there is ever any reuse). SSDT's Schema Compare tool can tell you if there are any changes, implying you should regenerate the SSIS package for that table.
Needing to ship schema changes from Prod to Dev is presumably a red flag that developers/dbas are risking Prod-first changes. SSDT will help you monitor and quantify that.
I would agree with Mike Honey: there are some big red flags here that something isn't right.
To answer the specific question: if you need production data from all of your tables (even a subset), I would personally back up and restore the production data on whatever your schedule is (nightly?) - you should have a backup already, so restoring it should be simple.
Once it has been restored, you can delete the bits you don't want, and if there is any personally identifiable information, anonymize it!
Once you are happy with the data you can apply the latest dacpac from the SSDT project to make sure it has all of the developer changes.
This approach has two benefits, firstly it is simpler than copying all of the tables individually and secondly it tests your deployment process pretty effectively for when you do go to production.
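A minimal sketch of that restore step, assuming a nightly full backup; the backup path, database names, and logical file names are placeholders:

    -- Restore last night's production backup over the dev database.
    -- Paths and names are placeholders.
    RESTORE DATABASE [DevDb]
    FROM DISK = N'\\backupshare\ProdDb_Full.bak'
    WITH REPLACE,
         MOVE N'ProdDb_Data' TO N'D:\Data\DevDb.mdf',
         MOVE N'ProdDb_Log'  TO N'E:\Logs\DevDb_log.ldf';
    GO
    -- Then anonymize any PII, and apply the latest dacpac from the SSDT
    -- project (e.g. with sqlpackage.exe /Action:Publish) to pick up dev changes.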
That being said, re the big red flags: I would really question why you need production data. The way it should generally work is that each developer has a test database with little to no data in it, plus unit tests that verify that everything works. If you need more data for performance testing, then use something like the Red Gate SQL Data Generator tool to generate realistic test data rather than full production data.
I have seen cases where it was thought production data was required but it actually wasn't. If it is too difficult to produce realistic test data, that in itself is a sign of some bad design, etc. Of course, every environment is unique, so maybe they do need it!
ed

How to update an already published database?

I have a web application that has an SQL database.
For clarity I'm using Asp.Net 4.0/c#/SQL Server 2008 Web edition.
I recently published the site, which was my first, by creating a deployment package for the database.
Now, a couple of months down the line, I need to update the database structure. The web application now has data that has been entered via the web, so I'll need to update the structure and then copy the data across.
As this is the first time I've done it, I'm unsure of the process I should follow - is there a standard practice for this kind of update?
Also, since some of the tables use incremental IDs, I need to ensure they remain the same in the newly updated database.
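For the incremental-ID concern, one standard approach is SQL Server's SET IDENTITY_INSERT, which lets you copy rows while preserving their original identity values; a minimal sketch with placeholder names:

    -- Copy rows into the restructured table, keeping the original IDs.
    -- Table and column names are placeholders.
    SET IDENTITY_INSERT dbo.Customers_New ON;

    INSERT INTO dbo.Customers_New (CustomerId, Name, Email)
    SELECT CustomerId, Name, Email
    FROM dbo.Customers_Old;

    SET IDENTITY_INSERT dbo.Customers_New OFF;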
Any tips, links, advice appreciated.
Important Guidelines:
I assume you have not changed the structure entirely (meaning the key columns are the same, though there are solutions for that too)
Steps are as follows:
Take export of the database
Add or remove the columns or whatever changes you want
Import the database back
Check the log for any rows/tables that were not updated successfully
Write SQL queries for them and run them to sync
Here are some general steps for this:
Take backup of your online database and restore it locally
Modify the local database to suit your needs
Use third party comparison and synchronization tool to publish changes to your production database
There are many of these available, and you can use them in trial mode to get the job done if you’re on a tight budget. You can try tools from Red Gate, ApexSQL, Idera, Devart and others…

Database source control vs. schema change scripts

Building and maintaining a database that is then deployed/developed further by many devs is something that goes on in software development all the time. We create a build script and maintain further update scripts that get applied as the database grows over time. There are many ways to manage this, from manual updates to console apps/build scripts that help automate these processes.
Has anyone who has built/managed these processes moved over to a Source Control solution for database schema management? If so, what have they found the best solution to be? Are there any pitfalls that should be avoided?
Red Gate seems to be a big player in the MSSQL world and their DB source control looks very interesting:
http://www.red-gate.com/products/solutions_for_sql/database_version_control.htm
Although it does not look like it replaces the (default) data* management process, so it only replaces half the change management process from my pov.
(when I'm talking about data, I mean lookup values and that sort of thing, data that needs to be deployed by default or in a DR scenario)
We work in a .Net/MSSQL environment, but I'm sure the premise is the same across all languages.
Similar Questions
One or more of these existing questions might be helpful:
The best way to manage database changes
MySQL database change tracking
SQL Server database change workflow best practices
Verify database changes (version-control)
Transferring changes from a dev DB to a production DB
tracking changes made in database structure
Or a search for Database Change
I look after a data warehouse developed in-house by the bank where I work. This requires constant updating, and we have a team of 2-4 devs working on it.
We are fortunate because there is only the one instance of our "product", so we do not have to cater for deploying to multiple instances which may be at different versions.
We keep a creation script file for each object (table, view, index, stored procedure, trigger) in the database.
We avoid the use of ALTER TABLE whenever possible, preferring to rename a table, create the new one and migrate the data over. This means that we don't have to look through a history of ALTER scripts - we can always see the up to date version of every table by looking at its create script. The migration is performed by a separate migration script - this can be partly auto-generated.
Each time we do a release, we have a script which runs the create scripts / migration scripts in the appropriate order.
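A minimal sketch of that rename-and-migrate pattern; table and column names are placeholders:

    -- Rename the old table out of the way.
    EXEC sp_rename 'dbo.Trades', 'Trades_Old';
    GO
    -- The table's single, up-to-date create script (one new column this release).
    CREATE TABLE dbo.Trades (
        TradeId    INT           NOT NULL PRIMARY KEY,
        Instrument VARCHAR(20)   NOT NULL,
        Amount     DECIMAL(18,2) NOT NULL,
        DeskCode   CHAR(4)       NULL    -- new in this release
    );
    GO
    -- The separate migration script moves the data across.
    INSERT INTO dbo.Trades (TradeId, Instrument, Amount)
    SELECT TradeId, Instrument, Amount
    FROM dbo.Trades_Old;
    GO
    DROP TABLE dbo.Trades_Old;
    GO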
FYI: We use Visual SourceSafe (yuck!) for source code control.
I've been looking for a SQL Server source control tool and came across a lot of premium versions that do the job, using SQL Server Management Studio as a plugin.
LiquiBase is a free one, but I never quite got it working for my needs.
There is another free product out there, though, that works standalone from SSMS and scripts out objects and data to flat files.
These objects can then be pumped into a new SQL Server instance which will then re-create the database objects.
See gitSQL
Maybe you're asking for LiquiBase?

How do you Move Dev Database Changes to Production Database?

I have been working on a project and gotten it through the first stage. However, the requirements ended up changing, and I have to add new tables and redo some of the foreign key references in the DB.
The problem is my lack of knowledge about applying this kind of change to a staging and then a production database once development is done on the dev database.
What are some strategies for migrating database schema changes and maintaining data in the database?
About as far as my knowledge goes is to open up SQL Server Management Studio and start adding tables manually. I know this is probably a bad way to do it, so I'm looking for how to do it properly, while realizing I probably started out wrong.
For maintaining schema changes you can use ApexSQL Diff, a SQL Server and SQL Azure schema comparison and synchronization tool, and for maintaining data in the database you can use ApexSQL Data Diff, a SQL Server and SQL Azure data comparison and synchronization tool.
Hope this helps
Disclaimer: I work for ApexSQL as a Support Engineer
You need something called a "KIT". Obviously, if you are maintaining some kind of source control, all the scripts for the changes that you make in the development environments should be maintained in the source control configuration tool.
Once you are done with all the scripts/changes that you deem certified to move to the next higher environment, prepare the kit with all these scripts in folders (ideally categorized as Procedures, Tables, Functions, Bootstraps), and then have batch files that execute the scripts in the kit in a particular order using the OSQL command-line utility.
Have separate batch files for UAT/Staging/Production so that you can just double-click the batch file to execute the kit against the appropriate server. Check the OSQL options.
This way all your environments are in sync!
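As a minimal sketch of the idea, a single master script can pull in the kit's files in order; this variation uses the :r include directive of sqlcmd (OSQL's successor) rather than a batch file per script, and the file names are placeholders:

    -- master.sql: applies the kit's scripts in a fixed order.
    -- Run with: sqlcmd -S <server> -d <database> -E -i master.sql
    :r Tables\001_create_orders.sql
    :r Tables\002_alter_customers.sql
    :r Procedures\010_usp_get_orders.sql
    :r Bootstraps\900_seed_lookup_values.sql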
I typically use something like the SQL Server Publishing Wizard to produce SQL scripts of the changes. That is a rather simple and easy approach. The major downside with that tool is that the produced script will drop and recreate tables that are not changed but are used by procedures that have changed (and I can't understand why), so there is some manual labour involved in going through the script and removing things that don't need to be there.
Note that you don't need to download and install this tool; you can launch it from within Visual Studio. Right-click on a connection in the Server Explorer and select "Publish to Provider" in the context menu.
Red Gate SQL Compare and SQL Data Compare all the way. Since my company bought it, it saved me tons of time staging our databases from DEV to TEST to ACCEPTANCE to PRODUCTION.
And you can have it synchronize with a scripts folder too for easy integration in a source control system.
http://www.red-gate.com
You might want to check out a tool like Liquibase: http://liquibase.org/
You can use Visual Studio 2015. Go to Tools => SQL Server => New Schema Comparison.
Step 1) Select the source and target database, then click the Compare option.
Step 2) Once the comparison has completed, click the Generate Script icon (Shift+Alt+G); this generates the commit script.
Step 3) To generate a rollback script for the database changes, just swap the databases from step 1.
There are some tools available to help you with that.
If you have Visual Studio Team edition, check database projects (aka DataDude aka Visual Studio Team for Database Professionals) See here and here
It allows you to generate a model from the dev/integration database and then (for many, but not all cases) automatically create scripts which update your prod database with the changes you made to dev/integration.
For VS 2008, make sure you get the GDR2 patches.
We have found the best way to push changes is to treat database changes like code. All changes are in scripts, they are in source control, and they are part of a version. Nothing is ever, under any circumstances, pushed to prod that is not scripted and in source control. That way you don't accidentally push changes that are in dev but not yet ready to be pushed to prod. Further, you can restore prod data to the dev box and rerun all the scripts not yet pushed, and you have fresh data with all the dev work preserved. This also works great when you have lookup values in tables that are changing that you don't want pushed to prod until other things move as well. Script the insert and put it with the rest of the code for the version, as in the sketch below.
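A minimal sketch of such a scripted lookup-value insert, written so it can be rerun safely; the table name and values are placeholders:

    -- Versioned, rerunnable insert of lookup values: it only adds rows
    -- that are not already present. Table name and values are placeholders.
    INSERT INTO dbo.OrderStatus (StatusCode, Description)
    SELECT v.StatusCode, v.Description
    FROM (VALUES
        ('BACK', 'Backordered'),
        ('HOLD', 'On hold')
    ) AS v(StatusCode, Description)
    WHERE NOT EXISTS (
        SELECT 1 FROM dbo.OrderStatus s
        WHERE s.StatusCode = v.StatusCode
    );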
It's nice to use those tools to do a compare to see if something is missed in the scripts, but I would NEVER rely on them alone. Far too much risk of pushing something "not yet ready for prime time" to prod.
A good database design tool (such as Sybase Powerdesigner) will allow you to create the design changes to the data model, then generate the code to implement those changes. You can then store and run the code as you choose. This tool should also be able to do reverse engineering when you inherit a database you didn't build.
Finding all the changes between development and production is often difficult even in an organized, well-documented environment. Idera has a tool for SQL Server which will detect structural differences between your development and production database and another tool which detects changes in the data. In fact, I often use these to go the other direction and sync development with production to start a new project.

Transferring changes from a dev DB to a production DB

Say I have a website and a database for that website hosted locally on my computer (for development), and another database hosted for production... i.e., first I make the changes on the dev DB, and then I apply the changes to the prod DB.
What is the best way to transfer the changes that I did on the local database to the hosted database?
If it matters, I am using MS SQL Server (2008).
The correct way to do this with Visual Studio and SQL Server is to add a Database Project to the web app solution. The database project should have SQL files that can recreate the entire database completely on a new server, along with all the necessary tables, procedures, users, and roles.
That way, they are included in the source control for all the rest of the code as well.
There is a Changes sub-folder in the Database Project where I put the SQL files that apply any new alterations or additions to the database for subsequent versions.
The SQL in the files should be written with proper "if exists" blocks such that it can be run safely multiple times on an already updated database without error.
As a rule, you should never make your changes directly in the database - instead modify the SQL script in the project and apply it to the database to make sure your source code (the SQL files) is always up to date.
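A minimal sketch of such a rerunnable change script; object names are placeholders:

    -- Guarded so it runs without error against an already-updated database.
    IF OBJECT_ID(N'dbo.AuditLog', N'U') IS NULL
    BEGIN
        CREATE TABLE dbo.AuditLog (
            AuditId   INT IDENTITY(1,1) PRIMARY KEY,
            EventTime DATETIME NOT NULL DEFAULT GETDATE(),
            Detail    NVARCHAR(400) NOT NULL
        );
    END
    GO
    IF NOT EXISTS (SELECT 1 FROM sys.columns
                   WHERE object_id = OBJECT_ID(N'dbo.AuditLog')
                     AND name = N'UserName')
    BEGIN
        ALTER TABLE dbo.AuditLog ADD UserName SYSNAME NULL;
    END
    GO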
We do this in the (Ruby on) Rails world by writing "migrations," which capture the changes you make to the DB structure at each point. These are run with a migration tool (a task for rake), which also writes to a DB table so it knows whether a particular migration has been run or not.
You could make a structure like this for your dev platform (.Net?), but I think that in other answers to this question people will suggest available tools for handling database versioning in your development platform, or perhaps for your specific DB.
I don't know any of these, but check out this list. I see a lot of pay things out there, but there must be something free. Also check this out.
I migrate changes via change scripts written by developers when they have tested/verified their changes. (The exception being moving large data.) All scripts are stored in a source control system and can be verified by DBAs.
It is a manual, sometimes time-consuming, but effective, safe, and controlled process.
Databases are too vital to copy from dev.
There are tools to help create/verify these scripts.
See http://www.red-gate.com/
I have used their tools to compare 2 databases to create scripts.
Brian
If the changes are small, I sometimes make them by hand. For larger changes, I use Red Gate's SQL Compare to generate change scripts. These are hand-verified and run in the QA environment first to make sure they don't break anything. For large changes, we run a special backup prior to making the change both in QA and in production.
We used to use the approach provided by Ron. It makes sense for a big project with a dedicated team of DBAs. But if you do not have dedicated developers who write code only for the DB, this approach is expensive in time and resources.
The approach of using Red Gate DB compare is also not ideal: you still have to do a lot of manual work, and you can skip a step by mistake.
Something better was needed. That was the reason we built the "Agile DB Recreation/Import/Reverse/Export tool".
The tool is free.
Advantages: your developers use any preferred tools to develop the DEV DB. Then they run DB RIRE, which reverse-engineers the DB (tables, views, stored procs, etc.) and exports the data into XML files. The XML files can be kept in any code repository system.
The second step is to run DB RIRE one more time to generate difference scripts between the structure and data in the XML files and the production DB.
Of course, you can run as many iterations as you need.
