Warm Standby SQL Server/Web Server
This question might fall into the IT category, but as the lead developer I have to come up with solutions to integration issues as well as pure software ones.
We just moved our web server off site and now I would like to keep a warm standby of both the website and database (SQL 2005) on site.
What are the best practices for going about doing this with the following environment?
Offsite: our website and database (SQL 2005) are on one Windows 2003 server. There is a firewall in front of the server which rules out replication or database mirroring. A VPN is also not an option.
My thought as a developer was to write a service that runs at the remote site to zip up the database backup and FTP it to an FTP server on site. Another process would then unzip the backup and restore it to the warm standby database here.
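To make the database side concrete, the moving parts I'm imagining look roughly like this (database name and paths are made up):

    -- On the remote (production) server: take the full backup that the
    -- service would then zip and FTP down to us.
    BACKUP DATABASE [SiteDB]
    TO DISK = N'D:\Backups\SiteDB_full.bak'
    WITH INIT;

    -- On the on-site standby, after the unzip: overwrite the warm copy.
    RESTORE DATABASE [SiteDB]
    FROM DISK = N'E:\Incoming\SiteDB_full.bak'
    WITH REPLACE;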
I assume I could do this with the web site as well.
I would be open to all ideas including third party solutions.
If you want a remote standby you probably want to look into a log shipping solution.
This article may help you out. In the past I had to develop one of these solutions for the exact same problem; writing it from scratch is not too hard. The advantage of log shipping is that you can restore to any point in time, and you avoid shipping big, heavy full backups around; instead you ship light transaction log backups and, occasionally, a full backup.
Keep in mind that transaction log backups are useless unless you have both a full backup and the entire unbroken sequence of log backups taken since it.
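To make that concrete, a stripped-down log shipping cycle looks something like this (database name and paths are invented):

    -- On the primary: take frequent, lightweight log backups.
    BACKUP LOG [ProductionDB]
    TO DISK = N'D:\LogShip\ProductionDB_0001.trn';

    -- On the standby: restore the full backup once, leaving the database
    -- ready to accept further log restores...
    RESTORE DATABASE [ProductionDB]
    FROM DISK = N'D:\LogShip\ProductionDB_full.bak'
    WITH NORECOVERY;

    -- ...then apply each shipped log backup, in order.
    RESTORE LOG [ProductionDB]
    FROM DISK = N'D:\LogShip\ProductionDB_0001.trn'
    WITH NORECOVERY;  -- or WITH STANDBY = '<undo file>' to keep it readable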
You have exactly the right idea. You could write a script that inserts the names of the files you've moved into a table that your warm server can read. Your restore job could then just poll that table at intervals and not worry about timing.
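A minimal sketch of that tracking table and the poll, with every name invented:

    -- Populated by the copy script after each successful FTP transfer.
    CREATE TABLE dbo.ShippedLogFiles (
        FileName   nvarchar(260) NOT NULL PRIMARY KEY,
        CopiedAt   datetime      NOT NULL DEFAULT GETDATE(),
        RestoredAt datetime      NULL
    );

    -- The restore job on the warm server polls for unapplied files,
    -- oldest first, and stamps RestoredAt once each restore succeeds.
    SELECT FileName
    FROM dbo.ShippedLogFiles
    WHERE RestoredAt IS NULL
    ORDER BY CopiedAt;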
Forget about that - just found this. Sounds like what you are setting out to do.
http://www.sqlservercentral.com/articles/Administering/customlogshipping/1201/
I've heard good things about Syncback:
http://www.2brightsparks.com/syncback/sbpro-features.html
Thanks for the link to the article, sambo99. Transaction log shipping was my original idea, but I dismissed it because the servers are not in the same domain, not even in the same time zone. My only method of moving files from point A to point B is FTP. I am going to experiment with shipping just the transaction logs and see if there is a way to fire off a restore job at given intervals.
www.FolderShare.com is a good tool from Microsoft. You could log ship to a local directory and then synchronize that directory to any other machine. You could synchronize the website folders as well.
"Set it and forget it" type solution. Setup your SQL jobs to clear older files and you'll never have to edit anything after the initial install.
FolderShare (free, in beta) is currently limited to 10,000 files per library.
For anyone interested, the following question also ties into my overall plan to implement log shipping:
SQL Server xp_cmdshell
I have a production server, say ServerA, and I have set up log shipping to ServerB, which is left in read-only (standby) mode. The purpose of the log shipping is to take some expensive queries (painful reports) off the production server.
Now I need to create some logins for our domain accounts, but I cannot do this because the secondary database is in standby mode.
I thought that if I created these logins on the primary server they would be copied over to the secondary as the logs were restored, but that isn't the case; logins live at the server level, not inside the shipped database, so log restores never carry them across.
I have done a lot of research online looking for a way around this, and found the following resources. I tried every method suggested in these articles, but none of them seems to work.
1) Log Shipping in SQL Server 2008 R2 for set BI on replicated database
2) How to transfer logins and passwords between instances of SQL Server
3) Orphaned Users with Database Mirroring and Log Shipping
Has anyone experienced the same issue? What did you do? Is there any way around it? Any suggestions or pointers, please.
Ali,
Of course I am crafty ...
Check out these articles.
http://technet.microsoft.com/en-us/magazine/2006.05.sqlqa.aspx
http://blogs.msdn.com/b/reedme/archive/2009/04/24/log-shipping-database-snapshots-bummer-dude.aspx
Database mirroring is a better solution since you can create a snapshot and report off that.
However, both mirroring and log shipping leave the database in a read-only state, so you cannot fix the orphaned users there.
The best approach is to make sure your logins on both servers match, so that orphans never occur.
In your case, you might have to remove log shipping, create the logins on the DR server, drop the database, reseed the DR server with a fresh backup, and restart shipping.
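For what it's worth, a minimal sketch of what "matching logins" means in practice (the names, password, and SID below are all made up):

    -- Windows/domain logins take their SID from the domain, so simply
    -- creating them on the secondary is enough to avoid orphans:
    CREATE LOGIN [DOMAIN\ReportUser] FROM WINDOWS;

    -- SQL logins must be recreated with the SID they have on the primary
    -- (look it up in sys.server_principals there):
    CREATE LOGIN [report_user]
    WITH PASSWORD = 'use-the-real-password',
         SID = 0x1A2B3C4D5E6F708192A3B4C5D6E7F801;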
In this area, I am not speaking from experience since I always used clustering with a SAN.
Please test this out in a lower environment to work on any gotchas.
My upcoming project will be using AlwaysOn (with 1 primary, 1 secondary), which behaves like mirroring if synchronous or log shipping if asynchronous. But AlwaysOn allows for read-only secondaries, which is nice.
Please write back on how you make out. I am curious.
Take care my friend.
J
I've seen a few posts on this, but I just want to make sure I'm not missing something.
I'm seriously considering moving from Azure to App Harbor, but I'm a bit dismayed that there doesn't seem to be a way to maintain daily SQL Server database backups.
I understand that App Harbor maintains daily file system snapshots. This is great for recovering from a catastrophic failure, but it doesn't do much to help recover from user error. For example, if I accidentally delete a chunk of rows, I may want to restore the database from a few days ago to help recover.
I know about these tools for transferring data to/from App Harbor:
- "Generate Scripts" tool in SQL Management Studio
- Bulk copy tool: https://github.com/appharbor/AppHarbor-SqlServerBulkCopy
Those are fine for doing a one-off backup or restore, but I'm looking to figure out some way to back up data automatically, and ideally save it off to AWS S3 storage. Is there a tool or service out there that could possibly do this?
Thank you!
I've created a simple console app that does a daily backup of the tables in a SQL Server database. The output is zipped and uploaded to Amazon S3 storage. The app can be deployed as an AppHarbor background worker - no local SQL Server required!
See notes in the readme file for instructions and limitations. This does what we need for now, but I'm happy to let others work on the project if you'd like to extend it.
https://bitbucket.org/ManicBlowfish/ah-dbbackup
What are my options for achieving a cold backup server for SQL Server Express instance running a single database?
I have an SQL Server 2008 Express instance in production that currently represents a single point of failure for my application. I have a second physical box sitting at the installation that is currently doing nothing. I want to somehow replicate my database in near real time (a little bit of data loss is acceptable) to the second box. The database is very small and resources are utilized very lightly.
In the case that the production server dies, I would manually reconfigure my application to point to the backup server instead.
Although Express doesn't support log shipping, I am thinking that I could script a poor man's version of it, using batch files to back up the logs, copy them across the network, and apply them to the second server at 5-minute intervals.
Does anyone have any advice on whether this is technically achievable, or if there is a better way to do what I am trying to do?
Note that I want to avoid having to pay for the full version of SQL Server and configure mirroring, as I think that is overkill for this application. I understand that other DB platforms may present suitable options (e.g. a MySQL cluster), but for the purposes of this discussion, let's assume we have to stick with SQL Server.
I would also advise a script-based log shipping setup. After all, this is how log shipping started. All you need is a time-based agent to schedule the scripts (e.g. Windows Task Scheduler) and a smart(er) file copy (robocopy).
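A minimal sketch of the two scheduled steps, assuming they are run through sqlcmd from Task Scheduler (database name and paths are placeholders):

    -- Primary, every 5 minutes: back up the log with a timestamped name.
    -- (Requires the database to be in the FULL recovery model.)
    DECLARE @file nvarchar(260);
    SET @file = N'D:\LogShip\AppDB_'
              + REPLACE(REPLACE(CONVERT(nvarchar(19), GETDATE(), 126), ':', ''), '-', '')
              + N'.trn';
    BACKUP LOG [AppDB] TO DISK = @file;

    -- (robocopy then mirrors D:\LogShip over to the standby box.)

    -- Standby, a few minutes later: apply whatever arrived, in order.
    RESTORE LOG [AppDB]
    FROM DISK = N'E:\LogShip\AppDB_20240101T120500.trn'
    WITH NORECOVERY;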
I have the following scenario:
Our system is running a SQL Server Express 2005 database locally (on each user's desktop, if you will). The system stores a lot of production data from a machine. The demands on data safety are high, and doing a backup each night, or even each hour, is not enough. We need a backup strategy that ensures almost instantaneous/continuous backup of the database.
Has anyone out there successfully implemented a system like this, or got some ideas on how to accomplish it? The only thing I can think of right now is mirrored drives (RAID) to hold the data, but that would be complicated and expensive.
I would appreciate any and all thoughts on this, since it is a real issue for me and my company. Thanks in advance!
Update:
I was not clear enough in my description of the scenario. The system stores data in a vehicle that has no connection to anything. A centralized database is therefore not possible. Neither can we use a Standard/Enterprise edition of SQL Server, since that would be too expensive (each vehicle would need a license). Thanks for your input!
Switch your database into the Full recovery model. Do a full backup every night and a delta backup after each major user action. The delta backups can go to flash memory or a different hard drive, and all the data can be synchronized with a server when the vehicle is online.
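Reading "delta backup" as a transaction log backup, a minimal sketch (names and paths are invented):

    -- One-time setup: full recovery model plus a baseline full backup.
    ALTER DATABASE [VehicleDB] SET RECOVERY FULL;
    BACKUP DATABASE [VehicleDB]
    TO DISK = N'E:\Backups\VehicleDB_full.bak';

    -- After each major user action: capture just the changes since then
    -- (E: standing in for the flash memory or second drive).
    BACKUP LOG [VehicleDB]
    TO DISK = N'E:\Backups\VehicleDB_delta.trn';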
Another simple way is to trace all user changes and important data in a text file stored on a separate drive. If the SQL database crashes, the user or another operator can replay the steps to restore the data.
One way I've seen this done is by using DoubleTake.
I will assume that a central database on a server is not feasible because your systems run standalone and are not connected to anything. So this is what I would do:
Set up RAID on the computer. This insures you against simple disk failure.
Any SQL Server database can be recovered to the point of the last committed transaction if you have a full database backup and the set of transaction log backups taken since it. Basically, you restore the last full backup and then apply the transaction logs going forward. See these links.
http://www.enterpriseitplanet.com/storage/features/article.php/11318_3776361_3
https://web.archive.org/web/1/http://blogs.techrepublic%2ecom%2ecom/datacenter/?p=132
So what you need to do is set up a periodic full backup of the database, more regular transaction log backups in between, and ensure that your transaction log can never run out of space.
In the event of failure you restore the last full backup, then apply the transaction logs going forward.
Myself, if these are critical systems, I would be inclined to add an additional drive to the system and make sure the backups are copied over to it. As good as RAID is, it does sometimes have issues: RAID controllers fail, disks get wiped accidentally in parallel, disk failures go unnoticed so you're just running on one disk, etc. If you ensure backups are copied to a separate disk, you can always recover to the last transaction log backup. You should also keep tape backups, of course, but they are generally a last resort in the event of trouble.
If for some reason you cannot set up RAID, you should still install a second disk, but place the database file on one drive and the transaction log on the other, and copy backups to both disks. In the event of a failure of the C drive, or some other software issue crashing the database, you can still recover to the last committed transaction. A failure of the D drive limits you to the last transaction log backup. (Oracle used to let you mirror the transaction log from the database, which again would completely cover you, but I don't think this facility exists in SQL Server.)
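For what it's worth, splitting the files across drives is just a matter of where you point them at creation time (a sketch with made-up names and paths):

    -- Data file on the C drive, transaction log on the D drive, so a
    -- single-drive failure never takes out both.
    CREATE DATABASE [VehicleDB]
    ON PRIMARY (NAME = VehicleDB_data, FILENAME = N'C:\Data\VehicleDB.mdf')
    LOG ON     (NAME = VehicleDB_log,  FILENAME = N'D:\Logs\VehicleDB.ldf');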
If you are looking for a scheduler for SQL Server Express (which doesn't come with one) then I've been using SQLScheduler quite happily without problems, and it's free.
The most obvious answer would be to ditch SQL Server Express running locally and use a single source for your data (such as a standard SQL Server install on a central storage location), unless your system truly requires separate backups of every single person's own instance of SQL Server Express.
If your requirements are so stringent as to call for instantaneous backups on every operation, you should definitely think about a different method of storage than local instances of SQL Server Express.
Wouldn't it be easier to just use one centralized SQL Server and back that up every hour or so? If you truly need instantaneous backup, your company (which seems to be avoiding spending money, hence Express on each machine) will need to spring for two servers and two SQL Server Enterprise licenses to implement mirroring.
RAID isn't that expensive, but it is also not the best option. If you really want highly available data, you should upgrade to SQL Server Standard on a remote server that each user connects to, and use transaction-based replication to a SQL Server (Express) instance on another machine. RAID doesn't always protect you from data loss. If the data is that important to you, then the cost should not be much of an issue.
Update in response to the question update.
If you can't use remote servers then there a couple of options:
You write a trigger that initiates a backup script on each insert or update and stores the result on a separate hard drive.
You use RAID. But beware that if the RAID controller fails, you still have a problem.
RAID is not expensive. Use RAID to protect against hard drive failure. You also need monitoring though. No point in having this if you let both drives fail.
Also, implement hourly incremental backups, then daily incremental backups and finally weekly full backups.
You need all of these strategies working together because they protect against different things. RAID does not protect against human or coding errors destroying data. Hourly and weekly backups don't protect against hard drive failure.
The backup and restore process for a large database, or a collection of databases, on SQL Server is very important for disaster recovery purposes. However, I have not found a robust solution that will guarantee the whole process is as efficient as possible, 100% reliable, and easily maintainable and configurable across multiple servers.
Microsoft's Maintenance Plans don't seem to be sufficient. The best solution I have used is one that I created manually, using many jobs with many steps per database running on the source server (backup) and the destination server (restore). The jobs use stored procedures to do the backup, copying and restoring. This runs once a day (full backup/restore) and intraday every 5 mins (transaction log shipping).
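For context, a stripped-down sketch of the kind of procedure one of those backup job steps calls (the names and path handling here are illustrative, not my production code):

    CREATE PROCEDURE dbo.usp_BackupAllUserDatabases
        @folder nvarchar(260)
    AS
    BEGIN
        DECLARE @db sysname, @file nvarchar(400);
        DECLARE dbs CURSOR LOCAL FAST_FORWARD FOR
            SELECT name FROM sys.databases
            WHERE database_id > 4  -- skip the system databases
              AND state = 0;       -- online only
        OPEN dbs;
        FETCH NEXT FROM dbs INTO @db;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            SET @file = @folder + N'\' + @db + N'_full.bak';
            BACKUP DATABASE @db TO DISK = @file WITH INIT;
            FETCH NEXT FROM dbs INTO @db;
        END
        CLOSE dbs;
        DEALLOCATE dbs;
    END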
Although my current process works and reports any job failures via email, I know the whole thing isn't very reliable, and it cannot be easily maintained or configured on all our servers by a non-DBA without in-depth knowledge of the process.
I would like to know if others have this same backup/restore process, and how they have overcome these issues.
I've used a similar setup to keep dev/test/QA databases 'zero-stepped' on a nightly basis for developers and QA folks to use.
Documentation is the key - if you want to remove what Scott Hanselman calls 'bus factor' (i.e. the danger that the creator of the system will get hit by a bus and everything starts to suck).
That said, for normal database backups and disaster recovery plans, I've found that SQL Server Maintenance Plans work out pretty well. As long as you include:
1) Decent documentation
2) Routine testing.
I've outlined some of the ways to do that (for anyone drawn to this question looking for an example of how to create a disaster recovery plan):
SQL Server Backup Best Practices (Free Tutorial/Video)
The key part of your question is the ability for the backup solution to be managed by a non-DBA. Any native SQL Server answer like backup scripts isn't going to meet that need, because backup scripts require T-SQL knowledge.
Because of that, you want to look toward third-party solutions like the ones Mitch Wheat mentioned. I work for Quest (the makers of LiteSpeed) so of course I'm partial to that one - it's easy to show to non-DBAs. Before I left my last company, I had a ten minute session to show the sysadmins and developers how the LiteSpeed console worked, and that was that. They haven't called since.
Another approach is using the same backup software that the rest of your shop uses. TSM, Veritas, Backup Exec and Microsoft DPM all have SQL Server agents that let your Windows admins manage the backup process with varying degrees of ease-of-use. If you really want a non-DBA to manage it, this is probably the most dead-easy way to do it, although you sacrifice a lot of performance that the SQL-specific backup tools give you.
I am doing precisely the same thing and have various issues semi-regularly, even with this process.
How do you handle the gap between copying the file from Server A to Server B and restoring the transaction log backup on Server B?
Every once in a while the log backup is larger than normal and takes longer to copy, and the restore job then gets an operating system error that the file is in use.
This is not such a big deal, since the file is automatically applied the next time around; however, it would be nicer to have a more elegant solution in general, and one that specifically fixes this issue.
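One pattern that might smooth this over is letting the restore step swallow the in-use error, since the next cycle picks the file up anyway (a sketch; the names, paths, and error-handling policy are assumptions):

    BEGIN TRY
        RESTORE LOG [ProductionDB]
        FROM DISK = N'D:\LogShip\Incoming\ProductionDB_0930.trn'
        WITH STANDBY = N'D:\LogShip\ProductionDB_undo.dat';
    END TRY
    BEGIN CATCH
        -- Typically error 3201 (cannot open backup device) while the
        -- copy is still in flight; just wait for the next scheduled run.
        PRINT 'Log backup still being copied; will retry next cycle.';
    END CATCH;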