Backing up and restoring a large database, or a collection of databases, on SQL Server is critical for disaster recovery. However, I have not found a robust solution that guarantees the whole process is as efficient as possible, 100% reliable, and easily maintained and configured across multiple servers.
Microsoft's Maintenance Plans don't seem sufficient. The best solution I have used is one I created manually: many jobs, each with many steps per database, running on the source server (backup) and the destination server (restore). The jobs call stored procedures that do the backing up, copying, and restoring. This runs once a day (full backup/restore) and every 5 minutes intraday (transaction log shipping).
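For reference, the job steps boil down to T-SQL along these lines (the database name and paths here are placeholders, not our real ones):

```sql
-- Nightly full backup on the source server:
BACKUP DATABASE [SalesDB]
    TO DISK = N'\\backupshare\SalesDB\SalesDB_full.bak'
    WITH INIT, CHECKSUM, STATS = 10;

-- Every 5 minutes, a transaction log backup that gets copied to the destination:
BACKUP LOG [SalesDB]
    TO DISK = N'\\backupshare\SalesDB\SalesDB_log.trn'
    WITH INIT, CHECKSUM;

-- On the destination server, the restore job applies the copied files:
RESTORE DATABASE [SalesDB]
    FROM DISK = N'D:\LogShip\SalesDB_full.bak'
    WITH NORECOVERY, REPLACE, STATS = 10;

RESTORE LOG [SalesDB]
    FROM DISK = N'D:\LogShip\SalesDB_log.trn'
    WITH STANDBY = N'D:\LogShip\SalesDB_undo.dat';
```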
Although my current process works and reports any job failures via email, I know the whole process isn't very reliable and cannot easily be maintained or configured across all our servers by a non-DBA without in-depth knowledge of the process.
I would like to know whether others use a similar backup/restore process and how they have overcome this issue.
I've used a similar setup to keep dev/test/QA databases 'zero-stepped' on a nightly basis for developers and QA folks to use.
Documentation is the key if you want to remove what Scott Hanselman calls the 'bus factor' (i.e., the danger that the creator of the system gets hit by a bus and everything starts to suck).
That said, for normal database backups and disaster recovery plans, I've found that SQL Server Maintenance Plans work out pretty well. As long as you include:
1) Decent documentation
2) Routine testing.
I've outlined some ways to do that (for anyone drawn to this question looking for an example of how to create a disaster recovery plan):
SQL Server Backup Best Practices (Free Tutorial/Video)
The key part of your question is the ability for the backup solution to be managed by a non-DBA. Any native SQL Server answer like backup scripts isn't going to meet that need, because backup scripts require T-SQL knowledge.
Because of that, you want to look toward third-party solutions like the ones Mitch Wheat mentioned. I work for Quest (the makers of LiteSpeed), so of course I'm partial to that one: it's easy to show to non-DBAs. Before I left my last company, I had a ten-minute session to show the sysadmins and developers how the LiteSpeed console worked, and that was that. They haven't called since.
Another approach is using the same backup software that the rest of your shop uses. TSM, Veritas, Backup Exec and Microsoft DPM all have SQL Server agents that let your Windows admins manage the backup process with varying degrees of ease-of-use. If you really want a non-DBA to manage it, this is probably the most dead-easy way to do it, although you sacrifice a lot of performance that the SQL-specific backup tools give you.
I am doing precisely the same thing and run into various issues semi-regularly, even with this process.
How do you handle the timing between copying the file from Server A to Server B and restoring the transaction log backup on Server B?
Every once in a while the transaction log backup is larger than normal and takes longer to copy. The restore job then gets an operating system error saying the file is in use.
This is not a big deal, since the file is automatically applied the next time around, but it would be nicer to have a more elegant solution in general, and one that specifically fixes this issue.
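One stopgap would be to retry the restore a few times before failing the job step, so a slow copy just delays the restore instead of erroring. A rough sketch, assuming the restore runs as a T-SQL job step (the database name and paths are placeholders):

```sql
DECLARE @attempt int, @msg nvarchar(2048);
SET @attempt = 1;

WHILE @attempt <= 5
BEGIN
    BEGIN TRY
        RESTORE LOG [SalesDB]
            FROM DISK = N'D:\LogShip\SalesDB_log.trn'
            WITH STANDBY = N'D:\LogShip\SalesDB_undo.dat';
        BREAK;                          -- restore succeeded
    END TRY
    BEGIN CATCH
        IF @attempt = 5
        BEGIN
            SET @msg = ERROR_MESSAGE();
            RAISERROR(@msg, 16, 1);     -- fail the job step after the last try
            BREAK;
        END;
        WAITFOR DELAY '00:00:30';       -- give the file copy time to finish
        SET @attempt = @attempt + 1;
    END CATCH;
END;
```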
I want to design and implement an enterprise application with Silverlight, using a SQL Server database as the back end. Many users will run SQL queries against this database.
How can I configure the SQL Server database for best performance?
How can I distribute the SQL Server database across several servers for best performance?
What technologies in SQL Server can I use to get the best performance?
In addition to replication, you can use mirroring or log shipping for this. Note that I am talking only about scaling out reads, not writes. So reports and the like can be run from the copies of the database, but writes must go to the main copy (unless you are using merge replication, which is frightening to me). There are some caveats, of course.
With database mirroring, you can use the secondary as a read-only reporting source by taking a snapshot. There are limits here to how many databases you can mirror and there is of course maintenance to manage the snapshots. It is not quite true distribution of resources here, but it can be helpful to offload some of the load. In the next version of SQL Server (Denali), you will be able to set secondaries as read-only, so you can avoid the maintenance of snapshots.
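As a sketch of the snapshot approach (database, logical file, and path names here are hypothetical):

```sql
-- Create a point-in-time, read-only snapshot of the mirror for reporting.
-- The logical file name (SalesDB_Data) must match the mirrored database's data file.
CREATE DATABASE SalesDB_Report_0800
ON ( NAME = SalesDB_Data,
     FILENAME = N'D:\Snapshots\SalesDB_Report_0800.ss' )
AS SNAPSHOT OF SalesDB;

-- Reports query the snapshot; drop and recreate it to refresh the data:
-- DROP DATABASE SalesDB_Report_0800;
```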
With log shipping, you can essentially keep a stale version of the database around for reporting, and replace it periodically by restoring logs to it. You have a lot more flexibility here compared to replication or mirroring, as you can actually define a delay (like every 6 hours or once a day, you refresh the copy) - which can also serve as a "recover from a shoot-yourself-in-the-foot" scenario. The downside is that to restore a new copy of the database you need to kick all the current users out, as the database needs to be in single user mode in order to recover.
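A rough sketch of that refresh step on the reporting copy (the database name, paths, and the session-kill approach are assumptions to adapt to your environment):

```sql
-- Disconnect current sessions in the reporting database so the restore can
-- get exclusive access (rather than ALTER DATABASE, which a standby database
-- may not allow).
DECLARE @spid int, @cmd nvarchar(20);

DECLARE sessions CURSOR LOCAL FAST_FORWARD FOR
    SELECT spid
    FROM master.dbo.sysprocesses
    WHERE dbid = DB_ID(N'SalesDB_Report')
      AND spid <> @@SPID;

OPEN sessions;
FETCH NEXT FROM sessions INTO @spid;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @cmd = N'KILL ' + CAST(@spid AS nvarchar(10));
    EXEC (@cmd);
    FETCH NEXT FROM sessions INTO @spid;
END;
CLOSE sessions;
DEALLOCATE sessions;

-- Apply the next log backup, leaving the database readable (standby).
RESTORE LOG [SalesDB_Report]
    FROM DISK = N'D:\LogShip\SalesDB_log.trn'
    WITH STANDBY = N'D:\LogShip\SalesDB_undo.dat';
```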
Those are just a couple of ideas for helping scale out reads, but deep down I agree with #gbn - are you solving a problem you don't have yet? It's one thing to design for scalability, but it's very easy to step over that line and completely over-engineer.
Well, SQL Server doesn't really have a load-balancing mechanism in and of itself. What it does support, however, is an active/passive node configuration, and also replication.
We are using the replication strategy in one application I support. You can read more about it here:
http://msdn.microsoft.com/en-us/library/ms151198.aspx
In our configuration, we basically have a transactional database and a reporting database. We replicate the data from our transactional DB to the reporting DB. Any reporting is done against this reporting DB, so that we don't slow down work being done on the transactional DB due to some long running report.
Note that the replication isn't truly real time. In other words, there's some time involved in replicating the data from the transactional DB to the reporting DB, albeit a very small amount of time. But replication is certainly one strategy you could consider if you are trying to balance workload.
Other things you might consider are partitioning large tables for better performance.
As gbn pointed out in his comment, though, it's better to determine whether you actually need these strategies before implementing them, because they add a lot of complexity and maintenance effort that may not even be needed. It's important to properly analyze how much data you expect to have, and how much activity will occur against that data, to determine whether strategies such as the ones I just described are even needed.
Also, you can refer to this link for some other useful information and links to whitepapers you may find helpful:
http://social.msdn.microsoft.com/Forums/en/sqldisasterrecovery/thread/05cf41b7-c558-44bf-86c6-12f5c2b2ffe2
I've created a program that is supposed to synchronize data between two SQL Server databases. It has been tested as well as I was able to with a limited amount of data and limited time. Now I need to put it into production, and I want to play it safe.
What would be the best approach to be able to recover if something goes wrong and database gets corrupted? (meaning not usable by the original program)
I know I can back up both databases each time I perform the sync. I also know that I could do point-in-time recovery.
Are there any other options? Is it possible to rollback only the changes made by the sync service? (both databases are going to be used by other software)
You probably have already, but I suggest investigating the backup and recovery options available in SQL Server. Since you have no spec, you don't know how the system is going to behave under these changes, which leaves you with a higher likelihood of problems. For this reason (and many other obvious reasons) I would want a solid SQL Server backup/recovery process in place. Unfortunately Express isn't very good at automating this, but you can run the backups manually before each sync.
At the very least, make everything transactional; a failure in your program should not leave the databases in a partially sync'd state.
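A minimal sketch of what that might look like for one sync batch, assuming SQL Server 2008 or later and hypothetical table and column names:

```sql
BEGIN TRY
    BEGIN TRANSACTION;

    -- Apply one batch of changes from the other database (staged locally here).
    MERGE dbo.Customers AS target
    USING staging.Customers AS source
        ON target.CustomerID = source.CustomerID
    WHEN MATCHED THEN
        UPDATE SET target.Name  = source.Name,
                   target.Email = source.Email
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (CustomerID, Name, Email)
        VALUES (source.CustomerID, source.Name, source.Email);

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- Any failure rolls back the whole batch, so nothing is half-applied.
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;

    DECLARE @err nvarchar(2048) = ERROR_MESSAGE();
    RAISERROR(@err, 16, 1);   -- surface the error to the sync program
END CATCH;
```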
Too bad you don't have a full version of SQL Server... then you might be able to use something like replication services and eliminate this program altogether? ;)
I just wanted to know what you guys think about this.
I have an app written in Visual Basic .NET as my front end and an Oracle 11g Standard database as the back end. There are about 20 PCs running this app locally, all inserting, updating, and deleting data in this single database. I want a solution for the case where the server database crashes or cannot stay online. My idea is to install Oracle 10g XE on each PC, so that all data is stored in the local database, and to run a process periodically (e.g. every 15 minutes) to send/get the data to/from the main server. What do you think of this strategy?
Oracle does have a mechanism for sharing data between databases, called Replication. Oracle XE supports Basic Replication (read-only and updateable materialized view site only). Obviously it depends on the specifics of your requirements, but from the little you have given us this might be a viable solution for you. Run each POS off its own Oracle XE database with regular synchronisations to the main (master) database.
Each POS has its data in updatable materialized views. That is, it can read and write its own data to the local XE database. These materialized views are part of a replication group which synchronizes their data with a master table in the main database. Going the other way the main database pushes its product data to read-only materialized views in the POS databases. The value of this architecture is that the POS always connect to their local XE databases, and never connect to the master database. This is a lot cleaner than connecting to the central database most of the time and switching to local databases in an emergency.
Unfortunately the documentation is a bit confusing, because it is called Advanced Replication and doesn't really mention "basic replication" at all. But Basic Replication covers most things; Advanced Replication is mainly Writeable Materialized Views and Multi-Master replication, neither of which you need anyway. I'm not saying Replication is easy, because it does cover some tricky concepts. But using Oracle's built-in functionality has surely got to be better than rolling your own.
Note that your system would still be extremely exposed to the failure of the main database. Your client may think another Oracle license is a bit pricey (I wouldn't disagree). However, in extreme cases, failure to recover a database can kill a company.
This sounds like a horrendous idea. Duplicating data from one database to another is a complex subject, and the process you're describing involves 20 duplications!
To be of any use in the event of a crash, you will also need a two-way replication mechanism, since the 20 clients will continue to update their local DBs. How do you deal with concurrent updates? The merging process alone, across 20 databases, will cost so much in resources that it would have been cheaper to have a tried-and-tested professional DR (disaster recovery) process.
A true standby database, on the other hand, would be simpler to deploy, test, and maintain, and would cost less in resources. I suggest you don't reinvent the wheel :)
Edit:
By the way if you just want a backup and recovery plan, duplicating the database is NOT the solution. I suggest you read the online documentation about recovery:
Oracle Database Backup and Recovery Basics
Oracle Database Backup and Recovery Advanced User's Guide
I had the "pleasure" of trying to make exactly this sort of solution more robust on a SQL Server-based POS system. As Vincent says, it's a complex process, fraught with unforeseen nightmare scenarios and difficult-to-maintain code (e.g., the ugly DOS .bat files I had to write). I would have to agree with him that a standby scenario is the more robust solution.
That said, if your client won't spring for another license (and I do see their point) you seem to be stuck doing exactly this sort of thing. It can be done, but let your client know that the homegrown replication system is going to be a costly one, and will likely take quite some time to get the wrinkles worked out. It also probably won't scale well as the number of retail sites increases.
I have the following scenario:
Our system runs a SQL Server Express 2005 database locally (on each user's desktop, if you will). The system stores a lot of production data from a machine. There are high demands on the safety of the data, and doing a backup each night, or even each hour, is not enough. We need a backup strategy that will ensure an almost instantaneous/continuous backup of the database.
Is there anyone out there that has successfully implemented a system similar to this, and/or has got some ideas of how to accomplish it? The only thing I can think of right now is to have mirrored drives (raid) to hold the data, but that would be complicated and expensive.
I would appreciate any and all thoughts on this, since it is a real issue for me and my company. Thanks in advance!
Update:
I was not clear enough in my description of the scenario. The system stores data in a vehicle that has no connection to anything. A centralized database is therefore not possible. Neither can we use a Standard/Enterprise edition of SQL Server, since it would be too expensive (each vehicle would need a license). Thanks for your input!
Switch your database into the full recovery model. Do a full backup every night and a delta backup after major user actions. The delta backups can be written to flash memory or to a different hard drive, and all data can be synchronized with the server when online.
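A sketch of that scheme in T-SQL (the database name and paths are placeholders; the "delta" could be a differential backup, or a log backup since the database is in full recovery):

```sql
ALTER DATABASE [ProductionData] SET RECOVERY FULL;

-- Nightly full backup:
BACKUP DATABASE [ProductionData]
    TO DISK = N'E:\Backups\ProductionData_full.bak'
    WITH INIT, CHECKSUM;

-- Delta option 1: differential backup after major user actions.
BACKUP DATABASE [ProductionData]
    TO DISK = N'E:\Backups\ProductionData_diff.bak'
    WITH DIFFERENTIAL, INIT, CHECKSUM;

-- Delta option 2: transaction log backup (possible because of full recovery).
BACKUP LOG [ProductionData]
    TO DISK = N'E:\Backups\ProductionData_log.trn'
    WITH CHECKSUM;
```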
Another simple approach is to log all user changes and important data to a text file stored on a separate drive. If the SQL database crashes, the user or another operator can replay those steps to restore the data.
One way I've seen this done is by using DoubleTake.
I will assume that a central database on a server is not feasible because your systems are running standalone and are not connected to anything. So this is what I would do:
Set up RAID on the computer. This protects you against simple disk failure.
Any SQL Server database can be recovered to the point of the last committed transaction if you have a full database backup and a set of transaction log backups available. Basically, you restore the last full backup and then apply the transaction logs going forward. See these links:
http://www.enterpriseitplanet.com/storage/features/article.php/11318_3776361_3
https://web.archive.org/web/1/http://blogs.techrepublic%2ecom%2ecom/datacenter/?p=132
So what you need to do is set up periodic full backups of the database and more regular transaction log backups (and ensure that your transaction log can never run out of space).
In the event of failure you restore the last full backup, then apply the transaction logs going forward.
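A minimal sketch of that recovery sequence (names and paths are placeholders): restore the last full backup without recovering, apply each log backup in order, then recover.

```sql
RESTORE DATABASE [VehicleData]
    FROM DISK = N'D:\Backups\VehicleData_full.bak'
    WITH NORECOVERY, REPLACE;

RESTORE LOG [VehicleData]
    FROM DISK = N'D:\Backups\VehicleData_log_01.trn' WITH NORECOVERY;
RESTORE LOG [VehicleData]
    FROM DISK = N'D:\Backups\VehicleData_log_02.trn' WITH NORECOVERY;

-- Bring the database online at the point of the last restored transaction.
RESTORE DATABASE [VehicleData] WITH RECOVERY;
```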
Myself, if these are critical systems, I would be inclined to add an additional drive to the system and make sure that the backups are copied over to it. This is because, as good as RAID is, it does sometimes have issues: RAID controllers fail, disks get wiped accidentally in parallel, disk failures go unnoticed so you're just running on one disk, and so on. If you ensure backups are copied to a separate disk, then you can always recover to the last transaction log backup. You should also take tape backups, of course, but they are generally a last resort in the event of trouble.
If for some reason you cannot set up RAID, then you should still install a second disk, but place the database file on one drive and the transaction log on the other, and copy backups to both disks. In the event of a failure of the C drive, or some other software issue crashing the database, you can still recover to the last committed transaction. A failure on the D drive limits you to the last transaction log backup. (Oracle used to allow you to mirror the transaction log from the database, which again would completely cover you, but I don't think this facility exists in SQL Server.)
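A sketch of that layout at database-creation time (database, logical file, and path names are hypothetical):

```sql
-- Data file on the C drive, transaction log on the D drive.
CREATE DATABASE [VehicleData]
ON PRIMARY
    ( NAME = VehicleData_Data,
      FILENAME = N'C:\SQLData\VehicleData.mdf' )
LOG ON
    ( NAME = VehicleData_Log,
      FILENAME = N'D:\SQLLogs\VehicleData.ldf' );
```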
If you are looking for a scheduler for SQL Server Express (which doesn't come with one) then I've been using SQLScheduler quite happily without problems, and it's free.
The most obvious answer would be to ditch SQL Server Express running locally and use a single source for your data (such as a standard SQL Server install in a central location), unless your system requires individual backups of each person's own instance of SQL Server Express.
If your requirements are so stringent as to call for instantaneous backups on every operation, you should definitely think about a different method of storage than local instances of SQL Server Express.
Wouldn't it be easier to just use one centralized SQL Server and back that up every hour or so? If you truly need instantaneous backup, your company (which seems to be avoiding spending money by installing Express on each machine) will need to spring for two servers and two SQL Server Enterprise licenses to implement Mirroring.
RAID isn't that expensive, but it is also not the best option. If you really want highly available data, you should upgrade to SQL Server Standard on a remote server that each user connects to, and use transaction-based replication to a SQL Server (Express) instance on another machine. RAID doesn't always protect you from data loss. If the data is that important to you, then the cost should not be much of an issue.
Update in response to the question update.
If you can't use remote servers, then there are a couple of options:
You write a trigger that initiates a backup script on each insert or update and stores the result on a separate hard drive.
You use RAID. But beware that if the RAID controller fails, you still have a problem.
RAID is not expensive. Use RAID to protect against hard drive failure. You also need monitoring, though; there's no point in having RAID if you let both drives fail.
Also, implement hourly incremental backups, then daily incremental backups and finally weekly full backups.
You need all of these strategies working together because they protect against different things. RAID does not protect against human or coding errors destroying data. Hourly and weekly backups don't protect against hard drive failure.
I am a developer. An architect on good days. Somehow I find myself also being the DBA for my small company. My background in the DB arts is fair, but I have never been a full-fledged DBA. My question is: what do I have to do to ensure a reliable and reasonably functional database environment with as little actual effort as possible?
I am sure I need to make sure that backups are being performed, and that is being done. That is an easy one. What else should I be doing on a consistent basis?
Who else is involved in the database? Are you the only person making schema changes (creating new objects, releasing new stored procedures, permissioning new users)?
Make sure that the number of users doing anything that could impact performance is reduced to as close to zero as possible, ideally including you.
Make sure that you're testing your backups. Ideally, run a DEV box that recreates the production environment periodically: (1) a DEV box is a good idea anyway, and (2) a backup is only useful if you can restore from it.
Create groups for the various apps that connect to your database, so that when a new user comes along you don't have to guess what permissions they need; you just add them to the group. Meanwhile, grant permissions on the database objects only to the groups that need them (there's a sketch at the end of this answer).
Use indices, primary keys, foreign keys, constraints, stats and whatever other tools your database supports. Normalise.
Optimise the most common code against your box - bad stored procedures/data access code will kill you.
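For the groups/permissions point above, a minimal sketch using database roles (role, schema, and user names are hypothetical; Windows groups would work similarly):

```sql
CREATE ROLE OrderAppUsers;

-- Grant the role only what the app actually needs:
GRANT SELECT, INSERT, UPDATE ON SCHEMA::Sales TO OrderAppUsers;
GRANT EXECUTE ON SCHEMA::Sales TO OrderAppUsers;

-- When a new user arrives, just add them to the role:
CREATE USER [DOMAIN\jsmith] FOR LOGIN [DOMAIN\jsmith];
EXEC sp_addrolemember 'OrderAppUsers', 'DOMAIN\jsmith';
```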
I've been there. I used to have a job where I wrote code, did all the infrastructure stuff, wore the DBA hat, did user support, fixed the electric stapler when it jammed, and whatever else came up that might be remotely associated with IT. It was great! I learned a little about everything.
As far as the care and feeding of your database box, I'd recommend that you do the following:
Perform regular full backups.
Perform regular transaction log backups.
Monitor your backup jobs. There are a number of relatively cheap utilities on the market that can automate this for you. In a small shop you're often too busy to remember to check on them daily.
Test your backups. Do a drill. Restore an old copy of your most important databases. Prove to yourself that your backups are working and that you know how to restore them properly (there's a restore-drill sketch at the end of this answer). You'd be surprised how many people only think about this during their first real disaster.
Store backups off-site. With all the online backup providers out there today, there's not much excuse for not having an offsite backup.
Limit sa access to your boxes.
If your database platform supports it, use only role based security. Resist the temptation to have one-off user specific security.
The basic idea here is that if you restrict who has access to the box, you'll have fewer problems. Secondly, if your backups are solid, there are few things that come up that you won't be able to deal with effectively.
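As a sketch of the restore drill mentioned above (file, database, and logical file names are hypothetical): verify the backup file, then restore it under a different name to prove it actually works.

```sql
-- Quick sanity check that the backup file is readable and complete.
RESTORE VERIFYONLY FROM DISK = N'E:\Backups\ImportantDB_full.bak';

-- Full drill: restore to a separate copy so production is untouched.
RESTORE DATABASE [ImportantDB_DrillCopy]
    FROM DISK = N'E:\Backups\ImportantDB_full.bak'
    WITH MOVE 'ImportantDB_Data' TO N'E:\DrillRestore\ImportantDB_DrillCopy.mdf',
         MOVE 'ImportantDB_Log'  TO N'E:\DrillRestore\ImportantDB_DrillCopy.ldf',
         RECOVERY;
```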
I would suggest:
A script to quickly restore the latest backup of a database, in case it gets corrupted (a sketch is at the end of this answer).
What kind of backups are you doing? Full backups each day, or incremental every hour, etc?
Some scripts to create new users and grant them basic access.
However, the number one suggestion is to limit as much as possible the power other users have; this will greatly reduce the chance of things getting badly messed up. Servers where everyone is sa tend to get screwed up quicker than servers that are locked down.
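For the restore script, a sketch that looks up the most recent full backup recorded in msdb rather than hard-coding a file name (the database name is a placeholder, and it assumes exclusive access to the database being restored):

```sql
DECLARE @file nvarchar(260);

-- msdb records every backup, so find the newest full backup of this database.
SELECT TOP (1) @file = mf.physical_device_name
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS mf
    ON mf.media_set_id = bs.media_set_id
WHERE bs.database_name = N'ImportantDB'
  AND bs.type = 'D'                      -- 'D' = full database backup
ORDER BY bs.backup_finish_date DESC;

RESTORE DATABASE [ImportantDB]
    FROM DISK = @file
    WITH REPLACE, RECOVERY;
```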