Migrating massive databases in SQL Server

One of the tasks in a project I'm working on is to migrate an existing database on SQL Server 2000 to a new server running SQL Server 2008. This database is very large, with 23 million rows and a 78 GB mdf file.
What is the best way to migrate a database of this size?
My current approach would be to:
1. Allow for application downtime so that the application doesn't write records to the database.
2. Perform a full backup on SQL Server 2000.
3. Move the backup file over to the new server across the network.
4. Restore the full backup on SQL Server 2008.
5. Configure the application to refer to the database on the new server.
6. Restart the application.
7. Decommission the database on SQL Server 2000.
However, I'm not sure how much application downtime this approach would involve.
Are there any easier approaches, or an approach that involves very little downtime? Can a backup be taken while the application is running? Obviously I would need to stop the application while the backup file is transferred and the restore completes. I'm interested to hear your approaches to a task like this.
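For reference, the backup and restore steps in the list above map to roughly the following T-SQL. This is only a sketch: the database name, file paths, and logical file names are assumptions, and the logical names used with MOVE should be confirmed via RESTORE FILELISTONLY first. (A full backup is an online operation, so it can be taken while the application is running; changes made after the backup starts simply won't be in it.)

    -- On the SQL Server 2000 instance: take a full backup (online operation).
    BACKUP DATABASE MyBigDb                          -- hypothetical database name
    TO DISK = 'D:\Backups\MyBigDb_full.bak'
    WITH INIT;

    -- On the SQL Server 2008 instance: check the logical file names first.
    RESTORE FILELISTONLY FROM DISK = 'D:\Backups\MyBigDb_full.bak';

    -- Restore, relocating the files to the new server's drive layout.
    RESTORE DATABASE MyBigDb
    FROM DISK = 'D:\Backups\MyBigDb_full.bak'
    WITH MOVE 'MyBigDb_Data' TO 'E:\Data\MyBigDb.mdf',    -- assumed logical names
         MOVE 'MyBigDb_Log'  TO 'F:\Logs\MyBigDb_log.ldf',
         RECOVERY;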

If you're open to downtime:
1. Detach the database
2. Copy the data file(s) and log file(s) to the new server
3. Attach the database on the new server instance
Detaching closes the database and finalizes the files so they can safely be moved (or backed up via a filesystem backup). The database will no longer be accessible on the old instance until you reattach it.
Copy the data and log files rather than cutting/moving them, just in case something bombs during the transfer.
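As a rough sketch, the detach/attach sequence in T-SQL looks like this (the database name and file paths here are assumptions):

    -- On the old instance: make sure no one is connected, then detach.
    USE master;
    ALTER DATABASE MyBigDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;  -- hypothetical name
    EXEC sp_detach_db @dbname = N'MyBigDb';

    -- Copy (don't move) the .mdf and .ldf files to the new server, then attach:
    CREATE DATABASE MyBigDb
    ON (FILENAME = 'E:\Data\MyBigDb.mdf'),           -- assumed paths on the new server
       (FILENAME = 'F:\Logs\MyBigDb_log.ldf')
    FOR ATTACH;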
There are several other things to keep in mind when migrating to a new server instance, like making sure logins exist on the new instance, features in use that might be deprecated, etc.
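For the logins point in particular, a quick post-move check is the classic orphaned-users report; a sketch (the database name is hypothetical):

    -- On the new instance, inside the migrated database:
    USE MyBigDb;
    EXEC sp_change_users_login 'Report';        -- lists users with no matching login

    -- Remap a user to a same-named login, if one exists:
    -- EXEC sp_change_users_login 'Auto_Fix', 'some_user';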
Here's the reference for detach/attach.

Related

Copy SQL Server database between two servers

I have a production SQL Server database on one server. The same server also hosts the test and development databases, which for a long time was no big deal for performance. But now I have to test some quite expensive SELECTs for a nightly report, and my users are having serious trouble using the database during working hours.
We have a redundant server running SQL Server which we never really used, but now we think its time has come. Until now, to get a fresh copy of the production system, I do a full backup, copy it into a folder on Server A, and map that folder as a network drive on Server B. Then I copy the file into a folder where SQL Server has read permissions and import the backup into the database on B.
On Server A I have set up maintenance plans that back up the production system into one of the other databases when the plan is executed.
I want a similar solution for Server B, but I can't get it running. I can run a SELECT from Server A against a table on Server B, so the two servers know about each other and can see each other.
I've tried the Copy Database Wizard, but it crashes because it couldn't delete the database on Server B, even though I had removed it manually beforehand.
I also tried importing the database on Server B from Server A, but that didn't work either.
Googling the problem hasn't helped, because I always get solutions for writing backup files to some kind of server.
I hope someone can help me automate this process.
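Not a definitive answer, but the usual way to script this is to back up directly to a share both servers can reach, restore on Server B with REPLACE, and wrap both steps in scheduled SQL Server Agent jobs. A sketch, with server names, share paths, and logical file names all assumed:

    -- On Server A: back up straight to a shared folder.
    -- (Server A's service account needs write access to the share.)
    BACKUP DATABASE ProdDb                                   -- hypothetical name
    TO DISK = '\\ServerB\SqlDrop\ProdDb_full.bak'
    WITH INIT;

    -- On Server B: overwrite the existing copy in one step.
    -- (Server B's service account needs read access to the share.)
    RESTORE DATABASE ProdDb
    FROM DISK = '\\ServerB\SqlDrop\ProdDb_full.bak'
    WITH REPLACE,
         MOVE 'ProdDb_Data' TO 'D:\Data\ProdDb.mdf',         -- assumed logical names
         MOVE 'ProdDb_Log'  TO 'D:\Logs\ProdDb_log.ldf';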

Restore from SQL Server database and replicate to another database

We have a SQL Server database that is hosted by a 3rd party. We have access to the backup files of that SQL Server database which we grab daily and restore to an "in-house" SQL Server for the purpose of creating reports and looking at the data without the chance of affecting the "live" data on the hosted server.
At the moment our restore process only gives us access to day-old data, but we would like to increase the frequency of the backup/restore process so our "sandbox" database is more up to date, maybe 2 or 3 hours old. One issue is that everyone has to be out of the database during the restore, so restoring every couple of hours might be cumbersome.
Is it possible to set up a third SQL Server for the "sandbox" database to replicate to, so that when we restore the data to it, that data is then replicated to the "user" database?
Is there a better way to get the data from the actual production server?
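For context, the kick-everyone-out-and-restore step mentioned above typically looks something like this (a sketch; the database name and backup path are assumptions):

    -- Force other sessions out, restore, then reopen the database.
    USE master;
    ALTER DATABASE Sandbox SET SINGLE_USER WITH ROLLBACK IMMEDIATE;  -- hypothetical name

    RESTORE DATABASE Sandbox
    FROM DISK = 'D:\Drops\Sandbox_full.bak'
    WITH REPLACE;

    ALTER DATABASE Sandbox SET MULTI_USER;   -- restored options usually reopen it anyway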

Edit SQL Server backup file to change database and file paths to allow restoring multiple times to same Amazon RDS instance as different databases

Goal: Backup and Restore a SQL Server database multiple times onto an Amazon RDS SQL Server instance with different database and file names.
So Amazon RDS added the ability to access SQL Server database backups and to "import" and "export" them, yay! But you can't change the database name or the file names, boo!
For non-production databases, I want to put them on a single RDS instance, e.g. dev, test, integration, etc. since I don't need much performance and it would save a lot of money.
I have been trying to come up with a solution for cloning a database onto an Amazon RDS instance while specifying the database name. I don't want to (i.e. am not allowed to) spend $6000 on Red Gate SQL Clone. Hacking together a combination of scripting, bcp, import/export, etc. would likely take a lot of time.
With the introduction of importing/exporting a database in RDS via SQL backups, I have a new option. The problem is that I can't specify the database and file names on "import" (restore).
I thought about writing a script that gets the database backup from RDS, restores it to a local SQL Server Express instance while specifying the database name and file names I want on the destination, backs that up again, and then imports/restores it to Amazon. This is an option, but it would take far longer than is probably practical.
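For the record, that intermediary-restore idea would look roughly like this in T-SQL. All names and paths here are assumptions; note that WITH MOVE changes only the physical file names, while ALTER DATABASE ... MODIFY FILE is needed to rename the logical files:

    -- On the local SQL Server Express instance: restore under the new name.
    RESTORE DATABASE Dev_CloneDb                             -- hypothetical target name
    FROM DISK = 'C:\Drops\prod.bak'
    WITH MOVE 'ProdDb'     TO 'C:\Data\Dev_CloneDb.mdf',     -- assumed logical names
         MOVE 'ProdDb_log' TO 'C:\Data\Dev_CloneDb_log.ldf',
         REPLACE;

    -- Rename the logical file names to match the new database name.
    ALTER DATABASE Dev_CloneDb MODIFY FILE (NAME = 'ProdDb',     NEWNAME = 'Dev_CloneDb');
    ALTER DATABASE Dev_CloneDb MODIFY FILE (NAME = 'ProdDb_log', NEWNAME = 'Dev_CloneDb_log');

    -- Back it up again for import into RDS.
    BACKUP DATABASE Dev_CloneDb TO DISK = 'C:\Drops\Dev_CloneDb.bak' WITH INIT;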
So... my final thought at this point and my question: is there a reliable way to simply edit/patch the backup file to change the database and file names?
Even if you could afford SQL Clone, I'm not sure it would work on AWS, as I believe it requires Windows Hyper-V, which isn't supported on Windows Server VMs on AWS.
Windocks has also just released support for SQL Server cloning, but they also use a Hyper-V based approach... so if you have options outside of AWS, I believe their solution fits your budget... but again, not on AWS.
Disclosure: I am the Co-Founder of WinDocks

How to repair the master database in SQL Server 2005

For some unknown reason, all of a sudden my SQL Server's master database has been corrupted and the SQL Server service won't start. I have spent hours trying various things, like running the service under different accounts and checking that no compression is set on the data folder, but nothing seems to work. I copied the master database from another instance; the service now starts and I can connect to the instance via Management Studio, but I can't see my databases. I have a backup of the corrupted master database (mdf and log file). How can I fix this database so that I can see all my databases again? Thanks.
Just restore master from the backup (good to have one!); that's the recommended way:
http://blogs.technet.com/b/fort_sql/archive/2011/02/01/the-easiest-way-to-rebuild-the-sql-server-master-database.aspx
If you don't have a good backup, you will have to rebuild the master database:
http://msdn.microsoft.com/en-us/library/ms144259%28v=sql.90%29.aspx
Copying master over from another existing instance is not recommended.
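One detail worth knowing if you go the restore route: master can only be restored with the instance started in single-user mode. A minimal sketch (the backup path is an assumption):

    -- Prerequisite: start the instance in single-user mode
    -- (e.g. "net start MSSQLSERVER /m" for a default instance),
    -- then connect with sqlcmd and run:
    RESTORE DATABASE master
    FROM DISK = 'D:\Backups\master.bak'      -- assumed backup path
    WITH REPLACE;
    -- SQL Server shuts itself down after master is restored; restart it normally.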

Deploy the database easily

What is the easiest way to deploy an MS SQL Server database to third-party hosting? Nothing comes to mind except generating lots of SQL scripts and running them on the database, or calling LINQ's DataContext.CreateDatabase.
You can deploy a backup of the database to the server and restore it.
You can deploy the data files and transaction log of the database to the server and attach them.
And, of course, you can generate a lot of SQL queries to create the database structure and fill it with data.
