Is it bad practice to run an UPDATE script in Project Server - sql-server

I am using the Project Server application, which sits on top of a database. I have hundreds of edits to make in Project Server, but it's unrealistic to edit all of the projects one at a time. Instead, I was thinking of running an UPDATE SQL script to write directly to the database.
Is this considered bad practice, and would it break Project Server?

As soon as you write to the DB yourself or touch the schema, you lose your support from Microsoft. They reserve the right to change anything in the DB with each update.
Do not do it. Don't.
Use PSI or the REST API. Those are your friends.

Related

How to manage Database First Project

We are working on a project that uses a database-first model.
At the beginning of the project we decided to use a central DB, and all developers were using it.
In the middle of the project, the developers decided to use their own DBs, cloning the central DB to their local servers. They make their changes locally, but migrating the DB changes back to the centralized DB is hard; we are doing it by hand.
Is there a good way to manage DB changes to the centralized DB?
but migrating the DB changes back to the centralized DB is hard; we are doing it by hand.
And it never occurred to you that this will not scale? We run a four-person team with, by now, nine environments - it is impossible to do this by hand. On top of that you WILL make errors, and if those happen during a production deploy that is bad, bad, bad.
That said, this is a problem that was solved something like 50 years ago. Learn from history. Here is our approach:
DB change scripts,
in folders numbered by sprint.
There is a SchemaSync table in every DB tracking which scripts have executed and their checksums (a sketch follows below).
Missing/changed scripts are executed during deploy or manually (from a zip file) via a PowerShell cmdlet.
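A minimal sketch of what such a tracking table could look like - the SchemaSync name comes from this answer, while the columns and the deploy logic described in the comments are assumptions:

    IF NOT EXISTS (SELECT 1 FROM sys.tables WHERE name = N'SchemaSync')
    BEGIN
        CREATE TABLE dbo.SchemaSync (
            ScriptName nvarchar(260) NOT NULL PRIMARY KEY,  -- e.g. 'Sprint07\0003_AddCustomerEmail.sql' (hypothetical)
            Checksum   varbinary(32) NOT NULL,              -- hash of the script contents
            ExecutedOn datetime2(0)  NOT NULL DEFAULT (SYSUTCDATETIME())
        );
    END;

    -- The deploy step runs a script only if its name is not recorded here,
    -- and raises an error if the name exists but the checksum has changed.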
This is seriously a standard solution, predating SSDT and migrations by ages and likely older than you are. There are even open-source implementations (https://github.com/DbUp/DbUp) and commercial libraries (one of them actually IN VISUAL STUDIO so you can try it out): https://www.red-gate.com/products/sql-development/sql-change-automation/
Like the only way to go.
SSDT: loves dropping fields. How do you handle DB migrations with that? Possible with multiple steps? Not possible. SSDT's idea of change scripts is ignorant of data moves. If you ever do a larger refactoring (which I happen to do a lot) then you know that sometimes it is not "create new field" but a number of steps that partially run complex code. SSDT totally cannot handle that.

Possible to keep both DB schema and *data* from "Application Config Tables" in sync between work and home PCs via source control?

I'm working with:
VS2013 Professional, Microsoft SQL Server 2012 - 11.0.5058.0 (X64)
I have kind of a two-part question. What I want to achieve is: I want, as seamlessly as possible, to be able to work on the same project on my work PC and my home PC. Right now I am using online hosted Subversion for source control, which is working fine for application code. The part I have no control over at the moment is the database. I would like to get "all" database changes made at either work or home to sync to my other machine.
By database changes, I mean:
Schema Changes
Data within specific "Application" tables (I obviously do not intend to sync data in all tables)
I followed this just to test getting a DB schema into my project and under source control:
https://msdn.microsoft.com/en-us/library/aa833194%28v=vs.100%29.aspx
It seems to work fine. However, that covers schema changes when working on one machine. If I then go home and want to:
either build from scratch or apply schema changes on my home machine, or
update data in base "Application" tables
...I have no clue how to do that, or if it is even possible?
I would think there should be a simple (ha!) way of making the schema changes flow through easily?
But changes to the app tables might be harder - I'm happy to write a SQL script to manage that, but I'd like that script to run automatically when I do a "refresh" of my local copy of the database.
For schema changes, there are good blogs out there on using SSDT/DataDude/VS DB Projects. Jamie Thomson has written quite a few times on his experiences. I've written up my experiences here: http://schottsql.blogspot.com/2013/10/all-ssdt-articles.html
For data - you can use the native "Data Compare" option under the "SQL" menu in SSDT. It's not perfect, but it can help. Overall, though, what you'd want is one of a couple of things:
1. Extract data from the shared system, write a task to populate that - batch files w/ BCP, SSIS, or some apps that can actually generate T-SQL for you.
2. Write it yourself, being sure to guard against attempts to insert duplicate data and ensuring the key values remain unchanged.
3. Buy a copy of Red-Gate's SQL Data Compare Pro. You can save the compare options and can then execute those through the command line.
If you need this for multiple developers, option 1 or 2 is probably the best way to go, though you can use SQL Data Compare to get you started with a pretty good script. You should also be able to use something like Mladen Prajdic's SSMS Tools Pack to script result sets to T-SQL inserts that you could re-use.
If you use one of those options and combine it with a post-deploy script (maybe even one that only runs if this is a "new" build), you should be off to a good start.
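As a rough sketch of such a post-deploy data script (option 2 above), a MERGE guards against duplicate inserts so it can run on every deploy; the AppSetting table and its columns are hypothetical, not from the answer:

    MERGE dbo.AppSetting AS tgt
    USING (VALUES
        (N'SiteName',     N'My Site'),
        (N'ItemsPerPage', N'25')
    ) AS src (SettingKey, SettingValue)
        ON tgt.SettingKey = src.SettingKey
    WHEN MATCHED THEN
        UPDATE SET tgt.SettingValue = src.SettingValue
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (SettingKey, SettingValue) VALUES (src.SettingKey, src.SettingValue);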

How to update an already published database?

I have a web application that has an SQL database.
For clarity I'm using Asp.Net 4.0/c#/SQL Server 2008 Web edition.
I recently published the site, which was my first, by creating a deployment package for the database.
Now, a couple of months down the line, I need to update the database structure. The web application now has data that has been entered via the web, so I'll need to update the structure, then copy data across.
As this is the first time I've done it, I'm unsure of the process I should follow - is there a standard practice for this kind of update?
Also, since some of the tables use incremental IDs, I need to ensure they remain the same in the newly updated database.
Any tips, links, advice appreciated.
Important Guidelines:
I assume you have not changed the structure entirely (meaning the key columns are the same, though there is a solution for that case too).
Steps are as follows:
Take an export of the database
Add or remove the columns or whatever changes you want
Import the database back
Check the log for rows/tables (if any) that were not updated successfully
Make SQL queries for them and run them to sync (see the sketch below)
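Since the question mentions keeping incremental IDs intact, here is a minimal sketch of copying rows while preserving identity values; the Orders table, its columns and the OldDb source name are placeholders:

    SET IDENTITY_INSERT dbo.Orders ON;

    INSERT INTO dbo.Orders (OrderID, CustomerName, CreatedOn)
    SELECT src.OrderID, src.CustomerName, src.CreatedOn
    FROM OldDb.dbo.Orders AS src
    WHERE NOT EXISTS (SELECT 1 FROM dbo.Orders AS tgt
                      WHERE tgt.OrderID = src.OrderID);  -- skip rows already present

    SET IDENTITY_INSERT dbo.Orders OFF;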
Here are some general steps for this:
Take a backup of your online database and restore it locally
Modify the local database to suit your needs
Use third party comparison and synchronization tool to publish changes to your production database
There are many of these available and you can use them in trial mode to get the job done if you're on a tight budget. You can try tools from Red Gate, ApexSQL, Idera, Devart and others…
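For the first two steps, a minimal sketch in T-SQL; the database name, file paths and logical file names are all placeholders:

    -- On the production server: take a full backup
    BACKUP DATABASE MyAppDb TO DISK = N'C:\Backups\MyAppDb.bak' WITH INIT;

    -- Locally: restore it under a different name so you can modify it freely
    RESTORE DATABASE MyAppDb_Dev
    FROM DISK = N'C:\Backups\MyAppDb.bak'
    WITH MOVE 'MyAppDb'     TO N'C:\Data\MyAppDb_Dev.mdf',
         MOVE 'MyAppDb_log' TO N'C:\Data\MyAppDb_Dev_log.ldf',
         REPLACE;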

Transferring changes from a dev DB to a production DB

Say I have a website and that website's database hosted locally on my computer (for development), and another database hosted for production... i.e. first I make the changes on the dev DB and then I apply the changes to the prod DB.
What is the best way to transfer the changes that I did on the local database to the hosted database?
If it matters, I am using MS SQL Server 2008.
The correct way to do this with Visual Studio and SQL Server is to add a Database Project to the web app solution. The database project should have SQL files that can recreate the entire database completely on a new server, along with all the necessary tables, procedures, users and roles.
That way, they are included in the source control for all the rest of the code as well.
There is a Changes sub-folder in the Database Project where I put the SQL files that apply any new alterations or additions to the database for subsequent versions.
The SQL in the files should be written with proper "if exists" blocks such that it can be run safely multiple times on an already updated database without error.
As a rule, you should never make your changes directly in the database - instead modify the SQL script in the project and apply it to the database to make sure your source code (the SQL files) is always up to date.
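As an example, a minimal sketch of such an "if exists"-guarded change script that can safely be re-run; the table and column names are placeholders:

    IF NOT EXISTS (SELECT 1 FROM sys.columns
                   WHERE object_id = OBJECT_ID(N'dbo.Customer')
                     AND name = N'MiddleName')
    BEGIN
        ALTER TABLE dbo.Customer ADD MiddleName nvarchar(50) NULL;
    END;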
We do this in the (Ruby on) Rails world by writing "migrations," which capture the changes you make to the DB structure at each point. These are run with a migration tool (a task for rake), which also writes to a DB table so it knows whether a particular migration has been run or not.
You could make a structure like this for your dev platform (.Net?), but I think that in other answers to this question people will suggest available tools for handling database versioning in your development platform, or perhaps for your specific DB.
I don't know any of these, but check out this list. I see a lot of paid tools out there, but there must be something free. Also check this out.
I migrate changes via change scripts written by developers when they have tested/verified their changes. (The exception being moving large data.) All scripts are stored in a source control system and can be verified by DBAs.
It is a manual, sometimes time-consuming, but effective, safe and controlled process.
Databases are too vital to copy from dev.
There are tools to help create/verify these scripts.
See http://www.red-gate.com/
I have used their tools to compare 2 databases to create scripts.
Brian
If the changes are small, I sometimes make them by hand. For larger changes, I use Red Gate's SQL Compare to generate change scripts. These are hand-verified and run in the QA environment first to make sure they don't break anything. For large changes, we run a special backup prior to making the change both in QA and in production.
We used to use the approach provided by Ron. It makes sense for a big project with a dedicated team of DBAs. But if you do not have dedicated developers who write code only for the DB, this approach is expensive in time and resources.
The approach of using Red Gate's DB compare is also not great. You still have to do a lot of manual work, and you can skip a step by mistake.
Something better is needed. That is the reason we built the "Agile DB Recreation/Import/Reverse/Export" tool.
The tool is free.
Advantages: your developers use whatever tools they prefer to develop the DEV DB. Then they run DB RIRE and it reverse-engineers the DB (tables, views, stored procs, etc.) and exports the data into XML files. The XML files can be kept in any code repository system.
The second step is to run DB RIRE once more to generate difference scripts between the structure and data in the XML files and the production DB.
Of course you can make as much iterations as you need.

Deploying SQL Server Databases from Test to Live

I wonder how you guys manage deployment of a database between 2 SQL Servers, specifically SQL Server 2005.
Now, there is a development one and a live one. As this should be part of a build script (standard Windows batch; even with the current complexity of those scripts, I might switch to PowerShell or similar later), Enterprise Manager/Management Studio Express do not count.
Would you just copy the .mdf file and attach it? I am always a bit careful when working with binary data, as this seems to be a compatibility issue (even though development and live should run the same version of the server at all times).
Or - given the lack of "EXPLAIN CREATE TABLE" in T-SQL - do you do something that exports an existing database into SQL scripts which you can run on the target server? If yes, is there a tool that can automatically dump a given database into SQL queries and that runs off the command line? (Again, Enterprise Manager/Management Studio Express do not count.)
And lastly - given the fact that the live database already contains data, the deployment may not involve creating all tables but rather checking the difference in structure and ALTER TABLE the live ones instead, which may also need data verification/conversion when existing fields change.
Now, I hear a lot of great stuff about the Red Gate products, but for hobby projects the price is a bit steep.
So, what are you using to automatically deploy SQL Server Databases from Test to Live?
I've taken to hand-coding all of my DDL (creates/alter/delete) statements, adding them to my .sln as text files, and using normal versioning (using subversion, but any revision control should work). This way, I not only get the benefit of versioning, but updating live from dev/stage is the same process for code and database - tags, branches and so on work all the same.
Otherwise, I agree redgate is expensive if you don't have a company buying it for you. If you can get a company to buy it for you though, it really is worth it!
For my projects I alternate between SQL Compare from Red Gate and the Database Publishing Wizard from Microsoft, which you can download for free here.
The Wizard isn't as slick as SQL Compare or SQL Data Compare but it does the trick. One issue is that the scripts it generates may need some rearranging and/or editing to flow in one shot.
On the up side, it can move your schema and data which isn't bad for a free tool.
Don't forget Microsoft's solution to the problem: Visual Studio 2008 Database Edition. Includes tools for deploying changes to databases, producing a diff between databases for schema and/or data changes, unit tests, test data generation.
It's pretty expensive but I used the trial edition for a while and thought it was brilliant. It makes the database as easy to work with as any other piece of code.
Like Rob Allen, I use SQL Compare / Data Compare by Red Gate. I also use the Database Publishing Wizard by Microsoft. I also have a console app I wrote in C# that takes a SQL script and runs it on a server. This way you can run large scripts with 'GO' commands in them from a command line or in a batch script.
I use Microsoft.SqlServer.BatchParser.dll and Microsoft.SqlServer.ConnectionInfo.dll libraries in the console application.
I work the same way Karl does, by keeping all of my SQL scripts for creating and altering tables in a text file that I keep in source control. In fact, to avoid the problem of having to have a script examine the live database to determine what ALTERs to run, I usually work like this:
On the first version, I place everything during testing into one SQL script, and treat all tables as a CREATE. This means I end up dropping and re-adding tables a lot during testing, but that's not a big deal early in the project (since I'm usually hacking the data I'm using at that point anyway).
On all subsequent versions, I do two things: I make a new text file to hold the upgrade SQL scripts, containing just the ALTERs for that version. And I make the same changes to the original create-from-scratch database script as well. This way an upgrade just runs the upgrade script, but if we have to recreate the DB we don't need to run 100 scripts to get there.
Depending on how I'm deploying the DB changes, I'll also usually put a version table in the DB that holds the version of the DB. Then, rather than make any human decisions about which scripts to run, whatever code I have running the create/upgrade scripts uses the version to determine what to run.
The one thing this will not do is help if part of what you're moving from test to production is data, but if you want to manage structure and not pay for a nice but expensive DB management package, it's really not very difficult. I've also found it's a pretty good way of keeping mental track of your DB.
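A minimal sketch of such a version table and the check the create/upgrade code might perform; the table and column names here are assumptions, not from the answer:

    IF NOT EXISTS (SELECT 1 FROM sys.tables WHERE name = N'SchemaVersion')
    BEGIN
        CREATE TABLE dbo.SchemaVersion (
            Version   int          NOT NULL,
            AppliedOn datetime2(0) NOT NULL DEFAULT (SYSUTCDATETIME())
        );
        INSERT INTO dbo.SchemaVersion (Version) VALUES (0);
    END;

    -- The deploy code reads the current version, runs only the upgrade
    -- scripts numbered above it, then records the new version.
    SELECT MAX(Version) AS CurrentVersion FROM dbo.SchemaVersion;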
If you have a company buying it, Toad from Quest Software has this kind of management functionality built in. It's basically a two-click operation to compare two schemas and generate a sync script from one to the other.
They have editions for most of the popular databases, including of course SQL Server.
I agree that scripting everything is the best way to go and is what I advocate at work. You should script everything from DB and object creation to populating your lookup tables.
Anything you do in the UI only won't translate (especially for changes... not so much for first deployments) and will end up requiring tools like what Red Gate offers.
Using SMO/DMO, it isn't too difficult to generate a script of your schema. Data is a little more fun, but still doable.
In general, I take the "Script It" approach, but you might want to consider something along these lines:
Distinguish between development and staging, such that you can develop with a subset of data... for this I would create a tool to simply pull down some production data, or to generate fake data where security is a concern.
For team development, each change to the database will have to be coordinated amongst your team members. Schema and data changes can be intermingled, but a single script should enable a given feature. Once all your features are ready, you bundle these up in a single SQL file and run that against a restore of production.
Once your staging has cleared acceptance, you run the single SQL file again on the production machine.
I have used the Red Gate tools and they are great tools, but if you can't afford it, building the tools and working this way isn't too far from the ideal.
I'm using Subsonic's migrations mechanism, so I just have a DLL with classes in sequential order that have two methods, up and down. There is a continuous integration/build script hooked into NAnt, so that I can automate the upgrading of my database.
It's not the best thing in the world, but it beats writing DDL.
Red Gate SQL Compare is the way to go in my opinion. We do DB deployments on a regular basis, and since I started using that tool I have never looked back.
Very intuitive interface and saves a lot of time in the end.
The Pro version will take care of scripting for the source control integration as well.
I also maintain scripts for all my objects and data. For deploying I wrote this free utility - http://www.sqldart.com. It'll let you reorder your script files and will run the whole lot within a transaction.
I agree with keeping everything in source control and manually scripting all changes. Changes to the schema for a single release go into a script file created specifically for that release. All stored procs, views, etc. should go into individual files and be treated just like .cs or .aspx files as far as source control goes. I use a PowerShell script to generate one big .sql file for updating the programmability stuff.
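As a sketch of what one of those per-object, re-runnable files might contain (the procedure and table names are placeholders), a common pre-2016 pattern is drop-if-exists followed by create:

    IF OBJECT_ID(N'dbo.usp_GetCustomer', N'P') IS NOT NULL
        DROP PROCEDURE dbo.usp_GetCustomer;
    GO

    CREATE PROCEDURE dbo.usp_GetCustomer
        @CustomerID int
    AS
    BEGIN
        SET NOCOUNT ON;
        SELECT CustomerID, Name
        FROM dbo.Customer
        WHERE CustomerID = @CustomerID;
    END;
    GO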
I don't like automating the application of schema changes, like new tables, new columns, etc. When doing a production release, I like to go through the change script command by command to make sure each one works as expected. There's nothing worse than running a big change script on production and getting errors because you forgot some little detail that didn't present itself in development.
I have also learned that indexes need to be treated just like code files and put into source control.
And you should definitely have more than 2 databases - dev and live. You should have a dev database that everybody uses for daily dev tasks. Then a staging database that mimics production and is used to do your integration testing. Then maybe a complete recent copy of production (restored from a full backup), if that is feasible, so your last round of installation testing goes against something that is as close to the real thing as possible.
I do all my database creation as DDL and then wrap that DDL in a schema maintenance class. I may do various things to create the DDL in the first place, but fundamentally I do all the schema maintenance in code. This also means that if one needs to do non-DDL things that don't map well to SQL, you can write procedural logic and run it between lumps of DDL/DML.
My DBs then have a table which defines the current version, so one can code a relatively straightforward set of tests (sketched below):
Does the DB exist? If not create it.
Is the DB the current version? If not then run the methods, in sequence, that bring the schema up to date (you may want to prompt the user to confirm and - ideally - do backups at this point).
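A minimal sketch of the kind of T-SQL such a maintenance class might issue for those two checks; the database name and version table are placeholders (on a fresh database, the maintenance code would also create the version table as part of bringing the schema up to date):

    -- Does the DB exist? If not, create it.
    IF DB_ID(N'MyAppDb') IS NULL
        CREATE DATABASE MyAppDb;

    -- Is the DB the current version? Read the stored version and compare it
    -- with the version the application expects before running the upgrade methods.
    SELECT MAX(Version) AS CurrentVersion FROM MyAppDb.dbo.SchemaVersion;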
For a single-user app I just run this in place; for a web app we currently lock the user out if the versions don't match and have a stand-alone schema maintenance app we run. For multi-user it will depend on the particular environment.
The advantage? Well, I have a very high level of confidence that the schema for the apps that use this methodology is consistent across all instances of those applications. It's not perfect, there are issues, but it works...
There are some issues when developing in a team environment but that's more or less a given anyway!
Murph
I'm currently working on the same thing as you: not only deploying SQL Server databases from test to live, but also covering the whole process from Local -> Integration -> Test -> Production. What makes it easy for me every day is a NAnt task with Red Gate SQL Compare. I'm not working for Red Gate, but I have to say it is a good choice.
