Restart log shipping when out of sync - sql-server

The scenario is this: a secondary database server is, for one reason or another, out of sync, or is suspected of being out of sync. Someone has brought the secondary databases online by mistake, or some other mishap has occurred. If you now want to make sure they are set back on track, how do you do that? Preferably swiftly, and for many databases at once.
When you set up log shipping between two servers using the wizard, it takes care of the initial backup, the copying of the backup file, and the initial restore.
If I have to redo that, I have to disable/enable and redo the log shipping and fill in all the parameters again. Is there another way? Can I use the sqllogship application?
Is there something like "C:\Program Files\Microsoft SQL Server\100\Tools\Binn\sqllogship.exe" -Restart -server SQLServ\PROD2?
Or is there something that could be done easily with PowerShell and SQL Server Management Objects (SMO)?
I want to use all the parameters that are already in tables like log_shipping_secondary.
I have not found any scripts for doing this. I looked at the script generated by the wizard, but it does not contain the initial backup and copy. I can write my own script; I am just afraid someone will say: why did you not just run $smoLogShipping.Redo?
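For example, the parameters I would want to reuse already live in msdb, so a home-grown script could read them back out. A minimal sketch (column names taken from the standard log shipping tables; worth verifying against your version):

    -- Read the existing secondary configuration out of msdb so a
    -- re-initialisation script can reuse the same parameters.
    SELECT  lsd.secondary_database,
            ls.primary_server,
            ls.primary_database,
            ls.backup_source_directory,       -- share the primary writes .trn files to
            ls.backup_destination_directory   -- local folder the copy job fills
    FROM    msdb.dbo.log_shipping_secondary AS ls
    JOIN    msdb.dbo.log_shipping_secondary_databases AS lsd
            ON lsd.secondary_id = ls.secondary_id;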

If you bring a standby database online (i.e. restore it WITH RECOVERY), then this breaks the log shipping. The only way to re-establish log shipping is to restore the standby database from a full backup of the source again, using NORECOVERY or STANDBY mode.
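In T-SQL terms, re-seeding one secondary boils down to something like the sketch below (database name and paths are placeholders; once the restore leaves the database in a restoring or standby state, the existing copy and restore jobs simply carry on applying log backups):

    -- On the primary:
    BACKUP DATABASE [MyDB]
        TO DISK = N'\\backupshare\MyDB_reinit.bak'
        WITH INIT;

    -- On the secondary: restore without recovery (or WITH STANDBY for read-only access)
    -- so the log shipping restore job can keep applying .trn backups.
    RESTORE DATABASE [MyDB]
        FROM DISK = N'\\backupshare\MyDB_reinit.bak'
        WITH NORECOVERY, REPLACE;
    -- or: WITH STANDBY = N'D:\MSSQL\Undo\MyDB_undo.dat', REPLACE;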

I do not know of any community-supported script to do what you ask, but it can be scripted easily enough. The GUI can handle most of the process; you would then just need to tweak it to be parameterized and customized to the workflow you are after. The link below gives an example of what I'm talking about.
Scripting Log Shipping Automation

Related

Clarification on proper practices for backing up SQL Server databases?

Recently I found myself needing to back up a client database running on SQL Server 2008 R2.
Normally I would do this by selecting the "Backup" task in SQL Server Management Studio, which produces a single, portable archive of the database. However, a co-worker said that some customers' standard backup practice is simply to create a copy of the .MDF and .LOG files instead. Another then spoke up recommending against such methods, stating that the only 'proper' procedure is to use the Backup task to produce a file as described above, and back that up instead. In the event of a problem, restoring this backup file and applying transaction logs allows you to restore service without losing data.
I agree with the recommendation of using the provided task in combination with transaction logs, but I wasn't entirely sure of myself so I kept my mouth shut. Let's be kind and assume that the .MDF/.LOG files were copied when SQL Server itself had been shut down cleanly and completely - is there actually a good reason to use the Backup task instead of copying the raw files, or are we mistaken?
I did some reading on MSDN and found that copying the files and restoring them can result in issues in some cases (see Limitations when using Xcopy deployment), but is that the only difference?
Using the backup task will reset the log files whilst simply copying them will not: this helps with database performance.
Always use the backup task: this can also be scheduled.
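For reference, the Backup task is essentially running T-SQL along these lines, which is also what you would schedule in a SQL Agent job (database name and paths are examples only):

    -- Full backup, plus periodic log backups for point-in-time recovery;
    -- copying the raw .MDF/.LDF files gives you neither.
    BACKUP DATABASE [ClientDB]
        TO DISK = N'D:\Backups\ClientDB_full.bak'
        WITH INIT, CHECKSUM;

    BACKUP LOG [ClientDB]
        TO DISK = N'D:\Backups\ClientDB_log.trn'
        WITH INIT, CHECKSUM;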

Replicating / Cloning data from one MS SQL Server to another

I am trying to get the content of one MSSQL database into a second MSSQL database. There is no conflict management required, and no schema updating. It is just a plain copy-and-replace of data. The data in the destination database would be overwritten, in case somebody had changed something there.
Obviously, there are many ways to do that:
SQL Server Replication: Well established, but using old protocols. Besides that, a lot of developers keep telling me that the devil is in the details, that replication might not always work as expected, and that it is the best choice for an administrator, but not for a developer.
MS Sync Framework: MSF is said to be the cool new technology. Yes, it is the new stuff you love to get because it sounds so innovative. Its generic approach to synchronisation sounds like: learn one technology and how to integrate a data source, and you will never have to learn how to develop syncing again. On the other hand, you can read that the main usage scenario seems to be synchronizing MSSQL Compact databases with MSSQL.
SQL Server Integration Services: This sounds like a solution to fall back on in an emergency. In case the firewall is not working, we have a package that can be executed again and again... until the firewall drops down or the authentication is fixed.
Brute Force copy and replace of database files: Probably not the best choice.
Of course, when looking on the Microsoft websites, I read that every technology (apart from brute force of course) is said to be a solid solution that can be applied in many scenarios. But that is, of course, not the stuff I wanted to hear.
So what is your opinion about this? Which technology would you suggest?
Thank you!
Stefan
The easiest mechanism is log shipping. The primary server can put the log backups on any UNC path, and then you can use any file sync tools to manage getting the logs from one server to another. The subscriber just automatically restores any transaction log backups it finds in its local folder. This automatically handles not just data, but schema changes too.
The subscriber will be read-only, but that's exactly what you want - otherwise, if someone can update records on the subscriber, you're going to be in a world of hurt.
I'd add two techniques to your list.
Write T-SQL scripts to INSERT...SELECT the data directly
Create a full backup of the database and restore it onto the new server
If it's a big database and you're not going to be doing this too often, then I'd go for the backup and restore option. It does most of the work for you and is guaranteed to copy all the objects.
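As a sketch, the backup-and-restore route looks like this (logical file names and paths are placeholders; check the real ones with RESTORE FILELISTONLY):

    -- On the source server:
    BACKUP DATABASE [SourceDB] TO DISK = N'\\fileshare\SourceDB.bak' WITH INIT;

    -- On the destination server:
    RESTORE DATABASE [SourceDB]
        FROM DISK = N'\\fileshare\SourceDB.bak'
        WITH REPLACE,
             MOVE N'SourceDB'     TO N'E:\Data\SourceDB.mdf',
             MOVE N'SourceDB_log' TO N'F:\Logs\SourceDB_log.ldf';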
I've not heard of anyone using Sync Framework, so I'd be interested to hear if anyone has used it successfully.

Oracle Backup Database with sqlplus: is it possible?

I need to make some structural changes to a database (alter tables, add new columns, change some rows, etc.), but I need to make sure that if something goes wrong I can roll back to the initial state:
All needed changes are inside a SQL script file.
I don't have administrative access to database.
I really need to ensure the backup is done on the server side, since the DB has more than 30 GB of data.
I need to use sqlplus (under a dedicated SSH session over a VPN).
It's not possible to use "flashback database"! It's off and I can't stop the database.
Am I really in deep $#$%?
Any ideas how to backup the database using sqlplus and leaving the backup on db server?
Better than exp/imp, you should use RMAN. It's built specifically for this purpose; it can do hot backup/restore, and if you completely screw up, you're still OK.
One 'gotcha' is that you have to back up the $ORACLE_HOME directory too (in my experience), because you need that locally stored information to recover the control files.
A search for RMAN on Google gives some VERY good information on the first page.
An alternate approach might be to create a new schema that contains your modified structures and data and actually test with that. That presumes you have enough space on your DB server to hold all the test data. You really should have a pretty good idea your changes are going to work before dumping them on a production environment.
I wouldn't use sqlplus to do this. Take a look at export/import. The export utility will grab the definitions and data for your database (it can be done in read-consistent mode). The import utility will read this file and create the database structures from it. However, access to these utilities does require permissions to be granted, particularly if you need to back up the whole database, not just a schema.
That said, it's somewhat troubling that you're expected to perform the tasks of a DBA (alter tables, backup database, etc) without administrative rights. I think I would be at least asking for the help of a DBA to oversee your approach before you start, if not insisting that the DBA (or someone with appropriate privileges and knowledge) actually perform the modifications to the database and help recover if necessary.
Trying to back up 30 GB of data through sqlplus is insane. It will take several hours to do, require 3x to 5x as much disk space, and may not be possible to restore without more testing.
You need to use exp and imp. These are command-line tools designed to back up and restore a database; if you have access to sqlplus via your SSH session, you have access to exp/imp. You don't need administrator access to use them. They will dump the database (with all tables, triggers, views, and procedures) for the user(s) you have access to.

Warm Standby SQL Server/Web Server

This question might fall into the IT category but as the lead developer I must come up with a solution to integration as well as pure software issues.
We just moved our web server off site and now I would like to keep a warm standby of both the website and database (SQL 2005) on site.
What are the best practices for going about doing this with the following environment?
Offsite: our website and database (SQL 2005) are on one Windows 2003 server. There is a firewall in front of the server which makes replication or database mirroring not an option. A VPN is also not an option.
My thoughts as a developer were to write a service which runs at the remote site to zip up and FTP the database backup to an FTP server on site. Then another process would unzip the backup and restore it to the warm standby database here.
I assume I could do this with the web site as well.
I would be open to all ideas including third party solutions.
If you want a remote standby you probably want to look into a log shipping solution.
This article may help you out. In the past I had to develop one of these solutions for exactly the same problem; writing it from scratch is not too hard. The advantage you get with log shipping is that you can restore to any point in time, and you avoid shipping big, heavy full backups around, instead shipping light transaction log backups and, occasionally, a big backup.
You have to keep in mind that transaction log backups are useless without having both the entire sequence of transaction log backups and a full backup.
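To illustrate, the restore side of a hand-rolled log shipping setup looks roughly like this (names are placeholders, and the log backups must be applied in order with no gaps):

    -- Once, to seed the standby:
    RESTORE DATABASE [WebDB] FROM DISK = N'D:\Ship\WebDB_full.bak' WITH NORECOVERY;

    -- Then, for each shipped transaction log backup, in sequence:
    RESTORE LOG [WebDB] FROM DISK = N'D:\Ship\WebDB_0001.trn' WITH NORECOVERY;
    RESTORE LOG [WebDB] FROM DISK = N'D:\Ship\WebDB_0002.trn' WITH NORECOVERY;
    -- The standby stays in a restoring state until you run
    -- RESTORE DATABASE [WebDB] WITH RECOVERY to bring it online.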
You have exactly the right idea. You could maybe write a script that would insert the names of the files that you moved into a table that your warm server could read. Your job could then just poll this table at intervals and not worry about timing.
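Something along these lines would do, with all names made up for the example:

    -- Hypothetical tracking table on the warm standby.
    CREATE TABLE dbo.ShippedLogFiles
    (
        FileName    nvarchar(260) NOT NULL PRIMARY KEY,
        ReceivedAt  datetime      NOT NULL DEFAULT GETDATE(),
        Restored    bit           NOT NULL DEFAULT 0
    );

    -- The copy process records each file it moves:
    INSERT INTO dbo.ShippedLogFiles (FileName) VALUES (N'WebDB_20080101_0100.trn');

    -- The restore job polls for anything not yet applied:
    SELECT FileName FROM dbo.ShippedLogFiles WHERE Restored = 0 ORDER BY FileName;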
Forget about that - just found this. Sounds like what you are setting out to do.
http://www.sqlservercentral.com/articles/Administering/customlogshipping/1201/
I've heard good things about Syncback:
http://www.2brightsparks.com/syncback/sbpro-features.html
Thanks for the link to the article, sambo99. Transaction log shipping was my original idea, but I dismissed it because the servers are not in the same domain, nor even in the same time zone. My only method of moving the files from point A to point B is via FTP. I am going to experiment with just shipping the transaction logs and see if there is a way to fire off a restore job at given intervals.
www.FolderShare.com is a good tool from Microsoft. You could log ship to a local directory and then synchronize the directory to any other machine. You could synchronize the website folders as well.
"Set it and forget it" type solution. Setup your SQL jobs to clear older files and you'll never have to edit anything after the initial install.
FolderShare (free, in beta) is currently limited to 10,000 files per library.
For all interested, the following question also ties into my overall plan to implement log shipping:
SQL Server sp_cmdshell

Nightly importable or attachable copies of production database

We would like to be able to nightly make a copy/backup/snapshot of a production database so that we can import it in the dev environment.
We don't want to log ship to the dev environment, because it needs to be something we can reset, whenever we like, to the most recently taken copy of the production database.
We need to be able to clear certain logging and/or otherwise useless or heavy tables that would just bloat the copy.
We prefer the attach/detach method over something like the SQL Server Publishing Wizard because of how much faster an attach is than an import.
I should mention we only have SQL Server Standard, so some features won't be available.
What's the best way to do this?
MSDN
I'd say use those procedures inside a SQL Agent job (use master.xp_cmdshell to perform the copy).
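The copy step inside that job could look something like this (paths are placeholders, and the database has to be detached or offline while its files are copied):

    EXEC master..xp_cmdshell
        'copy /Y "D:\Data\ProdDB.mdf" "\\devserver\staging\ProdDB.mdf"';
    EXEC master..xp_cmdshell
        'copy /Y "D:\Data\ProdDB_log.ldf" "\\devserver\staging\ProdDB_log.ldf"';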
You might want to put the big, heavy tables on their own partition and have that partition belong to a different filegroup. You would then back up and restore only the main filegroup.
You might also want to consider doing differential backups (SQL Server's flavour of incrementals): say, a full backup every weekend and a differential every night. I haven't done filegroup backups, so I don't know how well these work together.
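A sketch of that combination, with placeholder names (a weekly full backup of the primary filegroup plus a nightly differential):

    BACKUP DATABASE [ProdDB] FILEGROUP = N'PRIMARY'
        TO DISK = N'D:\Backups\ProdDB_primary_full.bak' WITH INIT;

    BACKUP DATABASE [ProdDB] FILEGROUP = N'PRIMARY'
        TO DISK = N'D:\Backups\ProdDB_primary_diff.bak' WITH DIFFERENTIAL, INIT;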
I'm guessing that you are already doing regular backups of your production database? If you aren't, stop reading this reply and go set it up right now.
I'd recommend that you write a script that automatically runs, say once a day, that:
Drops your current test database.
Restores your current production backup to your test environment.
You can write a simple script to do this and execute it using the isql.exe command line tool.
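A minimal sketch of that script, with database names, logical file names, and paths all as placeholders:

    IF DB_ID(N'TestDB') IS NOT NULL
    BEGIN
        ALTER DATABASE [TestDB] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
        DROP DATABASE [TestDB];
    END;

    RESTORE DATABASE [TestDB]
        FROM DISK = N'D:\Backups\ProdDB_full.bak'
        WITH MOVE N'ProdDB'     TO N'E:\Data\TestDB.mdf',
             MOVE N'ProdDB_log' TO N'F:\Logs\TestDB_log.ldf',
             REPLACE;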
