We would like to be able to make a nightly copy/backup/snapshot of a production database so that we can import it into the dev environment.
We don't want to log ship to the dev environment, because it needs to be something we can reset, whenever we like, to the last copy taken of the production database.
We need to be able to clear certain logging and/or otherwise useless or heavy tables that would just bloat the copy.
We prefer the attach/detach method over something like the SQL Server Publishing Wizard because of how much faster an attach is than an import.
I should mention we only have SQL Server Standard, so some features won't be available.
What's the best way to do this?
The detach/attach procedures (sp_detach_db / sp_attach_db) are documented on MSDN.
I'd say use those procedures inside a SQL Agent job (using master..xp_cmdshell to perform the copy).
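A minimal sketch of what that job step might look like, assuming hypothetical database names and paths, and that xp_cmdshell is enabled; the final attach would run on the dev instance, where the share maps to a local path:

    USE master;
    GO
    -- Take production offline briefly so the files can be copied consistently.
    EXEC sp_detach_db @dbname = N'Production';
    -- Copy the files to a share the dev server can reach.
    EXEC xp_cmdshell 'copy D:\Data\Production.mdf \\DevServer\Staging\Production.mdf';
    EXEC xp_cmdshell 'copy D:\Data\Production_log.ldf \\DevServer\Staging\Production_log.ldf';
    -- Put production back online.
    EXEC sp_attach_db @dbname = N'Production',
        @filename1 = N'D:\Data\Production.mdf',
        @filename2 = N'D:\Data\Production_log.ldf';

    -- Then, on the dev instance (E:\Staging backs the share above):
    EXEC sp_attach_db @dbname = N'Production_Dev',
        @filename1 = N'E:\Staging\Production.mdf',
        @filename2 = N'E:\Staging\Production_log.ldf';

Note the trade-off: the production database is offline while the detach and copy run.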
You might want to put the big, heavy tables in their own partition and have that partition belong to a different filegroup. You would then back up and restore only the main filegroup.
You might also want to consider differential backups (SQL Server's flavor of incrementals): say, a full backup every weekend and a differential every night. I haven't done filegroup backups, so I don't know how well the two work together.
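A sketch of that schedule in T-SQL, with hypothetical names and paths; the filegroup variant is the last statement:

    -- Full backup every weekend:
    BACKUP DATABASE Production
        TO DISK = N'E:\Backups\Production_full.bak' WITH INIT;

    -- Differential every night (contains everything changed since the last full):
    BACKUP DATABASE Production
        TO DISK = N'E:\Backups\Production_diff.bak' WITH DIFFERENTIAL, INIT;

    -- Backing up only the main filegroup, if the heavy tables live in another one:
    BACKUP DATABASE Production FILEGROUP = N'PRIMARY'
        TO DISK = N'E:\Backups\Production_primary.bak' WITH INIT;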
I'm guessing that you are already doing regular backups of your production database? If you aren't, stop reading this reply and go set it up right now.
I'd recommend that you write a script that automatically runs, say once a day, that:
Drops your current test database.
Restores your current production backup to your test environment.
You can write a simple script to do this and execute it using the isql.exe command line tool.
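A minimal sketch of such a script (call it restore_test.sql; all names and paths are hypothetical):

    USE master;
    GO
    -- Drop the current test database, if present.
    IF DB_ID(N'Production_Test') IS NOT NULL
        DROP DATABASE Production_Test;
    GO
    -- Restore last night's production backup under the test name,
    -- relocating the files with MOVE.
    RESTORE DATABASE Production_Test
        FROM DISK = N'E:\Backups\Production_full.bak'
        WITH MOVE N'Production'     TO N'D:\Data\Production_Test.mdf',
             MOVE N'Production_log' TO N'D:\Data\Production_Test_log.ldf',
             REPLACE;

Scheduled, that could be run with something like: isql -S DEVSERVER -E -i restore_test.sql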
We can create a SQL Server deployment script with TFS & SSDT, but is there a way to create a rollback script as well, so that we can roll back the deployment?
Thanks!
As SSDT (and similar products) all work by comparing the schema in the project against a live database to bring the database in sync with the model, there's no direct way to create a rollback script. There are also considerations around data changed, added, or removed through pre- or post-deploy scripts.
That being said, there are a handful of options.
Take a snapshot each time you do a release. You can use that snapshot from the prior release to do another compare for rollback purposes.
Maintain a prior version elsewhere - perhaps do a schema compare from your production system against your local machine. You can use that to compare against production and do a rollback.
Generate a dacpac of the existing system prior to release (use SqlPackage or SSDT to do that). You can use it to deploy that version of the schema back to the database if something goes wrong; see the SqlPackage sketch after this list.
Take a database snapshot prior to release (a T-SQL sketch also follows the list). Best case scenario, you don't need it and can drop the snapshot. Worst case, you could use it to roll back. Of course, you need to watch out for space and IO, as you'll be maintaining that original state elsewhere.
Run your changes through several environments to minimize the need for a rollback. Ideally if you've run this through Development, QA, and Staging/User Acceptance environments, your code and releases should be solid enough to be able to release without any issues.
You'll need to code accordingly for rolling back data changes. That could be a bit trickier as each scenario is different. You'll need to make sure that you write a script that can undo whatever changes were part of your release. If you inserted a row, you'll need a rollback script to delete it. If you updated a bunch of data, you'll either need a backup of that data or some other way to get it back.
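For the dacpac option above, a sketch of the SqlPackage calls (server and database names are hypothetical):

    :: Before the release, capture the current production schema as a dacpac:
    SqlPackage /Action:Extract /SourceServerName:ProdServer ^
        /SourceDatabaseName:MyDb /TargetFile:MyDb_prerelease.dacpac

    :: If the release goes wrong, deploy that dacpac back:
    SqlPackage /Action:Publish /SourceFile:MyDb_prerelease.dacpac ^
        /TargetServerName:ProdServer /TargetDatabaseName:MyDb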
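And for the database snapshot option, a T-SQL sketch (names and the snapshot file path are hypothetical; note that database snapshots are an Enterprise-only feature on older versions):

    -- Just before the release. NAME must match the source database's
    -- logical data file name:
    CREATE DATABASE MyDb_PreRelease ON
        (NAME = MyDb_Data, FILENAME = N'D:\Snapshots\MyDb_PreRelease.ss')
    AS SNAPSHOT OF MyDb;

    -- Worst case, revert to it:
    RESTORE DATABASE MyDb FROM DATABASE_SNAPSHOT = 'MyDb_PreRelease';

    -- Best case, drop it once the release is verified:
    DROP DATABASE MyDb_PreRelease;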
Before making any changes to a database project I take a snapshot (a dacpac), which I can compare the modified database project against to generate a release script. Although it's easy enough to swap the source and target to do a reverse schema compare, I've discovered it won't let me generate an update script (which would be the rollback script) from the reverse comparison, presumably because the target is a database project.
To get around that problem and generate a rollback script I do the following:
1. Deploy the modified database project to my (localdb) development database;
2. Check out a previous version of the database project from source control, from before the changes were made;
3. Run a schema compare from the previous version of the database project to the (localdb) development database;
4. Use the schema compare to generate an update script. This update script will be a rollback script.
Although it would be nice to be able to generate a rollback script more directly, the four-step process above takes less than five minutes.
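For what it's worth, if you keep the pre-change dacpac around, SqlPackage's Script action can produce the same rollback script from the command line without deploying anything (names here are hypothetical):

    SqlPackage /Action:Script /SourceFile:MyDb_before.dacpac ^
        /TargetServerName:(localdb)\MSSQLLocalDB /TargetDatabaseName:MyDb ^
        /OutputPath:rollback.sql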
We have some large schema changes coming down the pipe and are in need of some tips on writing upgrade scripts manually. We're using SQL Server 2000; we do not have access to automated tools, nor are they an option at this point in time. The only database tool we have is SQL Server Management Studio.
You can import the database to a local machine which has a newer version of SQL Server, then use the 'Generate Scripts' feature to script out a lot of the database objects.
Make sure to set, under the Advanced options, that the script should target SQL Server 2000.
If you are having problems with the generated script, you can try breaking it up into chunks and running it in small batches. That way, if any particular generated statement fails, you can just write that piece of SQL manually to get it to run.
While not quite what you had in mind, you can use schema comparison tools like SQL Compare and then just script the changes to a SQL file, which you can edit by hand before running it. I guess that is as close to writing it manually as you can get without actually writing it manually.
If you *need* to write it all manually, I would suggest getting some IntelliSense-type tool to speed things up.
Your upgrade strategy is probably going to be somewhat customized for your deployment scenario, but here are a few points that might help.
You're going to want to test early and often (not that you wouldn't do this anyway), so be sure to have a testing DB in your initial schema, with a backup so you can revert back to "start" and test your upgrade any number of times.
Backups & restores can be time-consuming, so it might be helpful to have a DB with no data rows (schema-only) to test your upgrade script. Remember to get a "start" backup so you can go back there on-demand.
Consider stringing a series of scripts together - you can use one per build, or feature, or whatever. This way, once you've got part of the script working, you can leave it alone.
Big data migrations can get tricky. If you're doing data transformations, copying or moving rows to new tables, etc., be sure to check row counts before the move and account for all rows afterwards; a sketch of such a check follows this list.
Plan for failure. If something goes wrong, have a plan to fix it -- whether that's rolling everything back to a backup taken at the beginning of the deployment, or whatever. Just be sure you've got a plan and you understand where your go / no-go points are.
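A hypothetical sketch of the row-count check mentioned above, for a migration that splits a table into current and archive tables (all table names are made up):

    DECLARE @before int, @after int;
    SELECT @before = COUNT(*) FROM OldOrders;

    -- ... run the migration here ...

    -- Every source row must land in exactly one destination table.
    SELECT @after = (SELECT COUNT(*) FROM OrdersCurrent)
                  + (SELECT COUNT(*) FROM OrdersArchive);
    IF @before <> @after
        RAISERROR ('Row count mismatch: expected %d, got %d.', 16, 1, @before, @after);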
Good luck!
I want to create a job that runs every night. I have a database (MyDatabase) that I want to copy/replace my staging database with (MyDatabase_Stage).
I presume the easiest way is to do something related to SQL Server Agent, but I have never done anything like this before. What is the best practice and easiest route to go to get this setup and tested?
I do not care if the data is 24 hours old; the most important criterion is that it does a full copy every night at the same time.
Back up the production database, copy the .bak file to your staging server, and restore it there using a script.
Run the script on a schedule in an agent job.
The benefit of a script is that you can add functionality later; for instance, you might not require audit tables, and those can be truncated.
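A minimal sketch of such a script, with hypothetical paths and logical file names (the audit table at the end is made up):

    -- Back up production to a share the staging server can read:
    BACKUP DATABASE MyDatabase
        TO DISK = N'\\StageServer\Backups\MyDatabase.bak' WITH INIT;

    -- On staging, replace the staging copy with last night's backup:
    RESTORE DATABASE MyDatabase_Stage
        FROM DISK = N'D:\Backups\MyDatabase.bak'
        WITH MOVE N'MyDatabase'     TO N'D:\Data\MyDatabase_Stage.mdf',
             MOVE N'MyDatabase_log' TO N'D:\Data\MyDatabase_Stage_log.ldf',
             REPLACE;

    -- The extra functionality mentioned above, e.g. truncating audit tables:
    TRUNCATE TABLE MyDatabase_Stage.dbo.AuditLog;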
Check out snapshot replication. As part of the setup, it'll create a SQL Agent job to do the copy of the data and whatnot. You can then schedule that job at whatever time and frequency you like.
The scenario: a secondary database server is out of sync for one reason or another, or is suspected of being out of sync. Someone has brought the secondary databases online by mistake, or some other mishap has occurred. If you now want to make sure they are set back on track, how do you do that? Preferably swiftly, and for many databases at once.
When you set up log shipping between two servers using the wizard, it takes care of the initial backup, the copying of the backup file, and then the initial restore.
If I have to redo that, I have to disable/enable and redo the log shipping and fill in all the parameters again. Is there another way? Can I use the sqllogship application?
Is there something like: "C:\Program Files\Microsoft SQL Server\100\Tools\Binn\sqllogship.exe" -Restart -server SQLServ\PROD2?
Or is there something that could be done easily with PowerShell and SQL Server Management Objects (SMO)?
I want to use all the parameters that are already in tables like log_shipping_secondary.
I have not found any scripts for doing this. I looked at the script generated when I used the wizard, but it does not contain the initial backup and copy. I can write my own script; I am just afraid someone will say: why did you not just run $smoLogShipping.Redo?
If you bring a standby database online (i.e. restore it WITH RECOVERY), then this will break the log shipping. The only way to re-establish log shipping is to restore the standby database from a full backup of the source again, using NORECOVERY or STANDBY mode.
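A minimal sketch of that re-seeding, with hypothetical names and paths:

    -- On the primary:
    BACKUP DATABASE MyDb
        TO DISK = N'\\Secondary\Seed\MyDb_full.bak' WITH INIT;

    -- On the secondary: restore without recovery (or WITH STANDBY) so the
    -- log shipping restore job can pick up where the full backup left off:
    RESTORE DATABASE MyDb
        FROM DISK = N'\\Secondary\Seed\MyDb_full.bak'
        WITH NORECOVERY, REPLACE;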
I do not know of any community-supported script to do what you ask, but it can be scripted easily enough. The GUI can handle most of the process; you would then just need to tweak the output to be parameterized and customized to the workflow that you are after. The link below gives an example of what I'm talking about.
Scripting Log Shipping Automation
I need to make some structural changes to a database (alter tables, add new columns, change some rows, etc.), but I need to make sure that if something goes wrong I can roll back to the initial state:
All needed changes are inside a SQL script file.
I don't have administrative access to database.
I really need to ensure the backup is done on the server side, since the DB has more than 30 GB of data.
I need to use sqlplus (under a dedicated SSH session over a VPN).
It's not possible to use "flashback database"! It's off, and I can't stop the database.
Am I in really deep $#$%?
Any ideas how to back up the database using sqlplus, leaving the backup on the DB server?
Better than exp/imp, you should use RMAN. It's built specifically for this purpose; it can do hot backups/restores, and if you completely screw up, you're still OK.
One 'gotcha' is that you have to back up the $ORACLE_HOME directory too (in my experience), because you need that locally stored information to recover the control files.
A search for RMAN on Google gives some VERY good information on the first page.
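For a flavor of it, a minimal RMAN session; note this assumes SYSDBA access (which may be exactly what you're missing) and that the database runs in ARCHIVELOG mode:

    # Connect to the local instance, typically via OS authentication:
    rman target /

    # At the RMAN prompt, a whole-database hot backup:
    RMAN> BACKUP DATABASE PLUS ARCHIVELOG;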
An alternate approach might be to create a new schema that contains your modified structures and data and actually test with that. That presumes you have enough space on your DB server to hold all the test data. You really should have a pretty good idea your changes are going to work before dumping them on a production environment.
I wouldn't use sqlplus to do this. Take a look at export/import. The export utility will grab the definitions and data for your database (can be done in read consistent mode). The import utility will read this file and create the database structures from it. However, access to these utilities does require permissions to be granted, particularly if you need to backup the whole database, not just a schema.
That said, it's somewhat troubling that you're expected to perform the tasks of a DBA (alter tables, backup database, etc) without administrative rights. I think I would be at least asking for the help of a DBA to oversee your approach before you start, if not insisting that the DBA (or someone with appropriate privileges and knowledge) actually perform the modifications to the database and help recover if necessary.
Trying to back up 30 GB of data through sqlplus is insane. It will take several hours, require 3x to 5x as much disk space, and may not be possible to restore without more testing.
You need to use exp and imp. These are command-line tools designed to back up and restore database schemas. Since you have access to sqlplus via your SSH session, you have access to imp/exp as well. You don't need administrator access to use them. They will dump the database (with all tables, triggers, views, and procedures) for the user(s) you have access to.
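A hypothetical invocation, run on the server itself so the dump file stays there (the user name and file name are made up):

    # Dump one schema's objects and data in a read-consistent snapshot:
    exp myuser/mypassword FILE=before_changes.dmp OWNER=myuser CONSISTENT=y

    # Later, if the changes have to be rolled back:
    imp myuser/mypassword FILE=before_changes.dmp FROMUSER=myuser TOUSER=myuser IGNORE=y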