SQL Server Backup Sizes

Does anyone have real-world experience of backup compression in SQL Server 2008? I'd like to know how it performs compared to SQL Server 2005 + WinRAR. Currently when I back up a database I get a 14GB file. Using RAR shrinks it to under 400MB, a massive saving. Most of the data is similar, being figures from the stock market, so I guess that makes it easy to compress.

The figures I have seen suggest compression ratios in the range of 3 to 4.5 times smaller, which is not as good as the figures you quoted for RAR. See Tuning the Performance of Backup Compression in SQL Server 2008.
On a side note, creating compressed backups is faster.
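As a rough sketch of how you'd measure this yourself (the database name and path here are made up), you can take a compressed backup and then compute the ratio from msdb, which records both the raw and compressed sizes:

    BACKUP DATABASE StockData
        TO DISK = N'D:\Backups\StockData.bak'
        WITH COMPRESSION, INIT;

    -- backup_size vs. compressed_backup_size gives the ratio actually achieved
    SELECT TOP (1) backup_size / compressed_backup_size AS compression_ratio
    FROM msdb.dbo.backupset
    WHERE database_name = N'StockData'
    ORDER BY backup_finish_date DESC;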

Time to restore is an important consideration for backups. If you compress your backups, you will spend more time restoring them.

I've got one that goes from 200GB down to 50GB - definitely not as compact as WinRAR is getting you, but a lot more convenient. As others have noted, backup time is quicker, and restore time is (a bit) longer but still way quicker than SQL 2000.

Compression not available on SQL Server Standard? Options?

So, as they say, every day is a school day. Today I learned that my workplace runs SQL Server Standard edition, where I would have assumed Enterprise was in place. Although in reality I shouldn't be surprised!
For some context, we have a very large database that houses our warehouse data. As the database has grown, it's causing issues with space on the server along with some application performance problems. So, looking at it from my perspective, I suggested we archive and purge the PROD database, to house only 18 months of data in the PROD environment.
I wrote my scripts and tested them, and all was fine. I then went to compress the tables I had deleted data from, only to find error messages saying that compression is not available in SQL Server Standard and requires Enterprise edition.
Wondering what my next steps are here? My assumption is that even though I am deleting a lot of data, we won't actually benefit in terms of performance and space reclamation until the tables get compressed.
Shrinking is something I guess I've always shied away from; many articles and posts here advise not to use it.
Wondering, what sort of options do I have here?
Is my assumption correct, in that without compressing, we won't regain space from the trimmed database?
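For what it's worth, a minimal sketch of reclaiming space without data compression (table and file names below are placeholders): rebuilding the indexes after the purge compacts the remaining rows onto fewer pages, and a TRUNCATEONLY shrink releases only the free space at the end of the file, avoiding the fragmentation a full shrink causes. (As an aside, page and row compression became available in Standard edition from SQL Server 2016 SP1 onward.)

    -- Compact the remaining rows after the big delete (available in Standard)
    ALTER INDEX ALL ON dbo.WarehouseFacts REBUILD;

    -- Check reserved vs. actually used space
    EXEC sp_spaceused N'dbo.WarehouseFacts';

    -- Release only the unused space at the tail of the data file
    DBCC SHRINKFILE (N'WarehouseData', TRUNCATEONLY);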
Moving this to resolved, as I'm opening the query in the DBA-specific portion of the site.

Backup of SQL Server database without timestamps

I'm using the following line to backup a Microsoft SQL Server 2008 database:
BACKUP DATABASE @name TO DISK = @fileName WITH COMPRESSION
Given that database is not changing, repeated execution of this line yields files that are of the same size, but are massively different inside.
How do I create repeated SQL Server backups of the same unchanged database that would give same byte-accurate files? I guess that simple BACKUP DATABASE invocations add some timestamps or some other meta information in the backup media, is there a way to disable or strip this addition?
Alternatively, if it's not possible, is there a relatively simple way to compare 2 backups and see if they'll restore to exactly the same state of the database?
UPDATE: My point for backup comparison is that I'm backing up myriads of databases daily, but most databases don't change that often. It's normal for most of them to change only several times per year. So, basically, for all other DBMSs (MySQL, PostgreSQL, Mongo), I'm using the following algorithm:
Do a new daily backup
Diff new backup with the most recent of the old backups
If the database wasn't changed (i.e. backups match), delete the new daily backup I've just created
This algorithm works with all DBMSes we've encountered before, but, alas, it fails because of non-repeatable MSSQL backups.
As you guessed, part of the backup catalog includes the date and time of the backup. The WITH COMPRESSION option compresses the backup to save space, but a little change in the file will cause changes throughout the file because of the way compression algorithms work.
If you don't want so many differences, then remove the COMPRESSION option - but comparing backup files isn't the way to go.
If you have a database that changes little then incremental or differential backups may be of more use.
However, you seem to have fallen into a classic trap called the XY Problem: you are asking about your attempted solution rather than your actual problem. What is prompting you to try to compare databases?
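If the real problem is "don't keep a new backup when nothing changed", one hedged alternative to diffing files (the database name below is a placeholder) is to ask the server whether any table has been modified, rather than comparing the backup media:

    -- Most recent data modification recorded across the database;
    -- NULL means no updates seen. Caveat: this DMV is cleared
    -- whenever the instance restarts, so treat it as a hint only.
    SELECT MAX(last_user_update) AS last_modification
    FROM sys.dm_db_index_usage_stats
    WHERE database_id = DB_ID(N'MyDb');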

Backup SQL Server while minimizing bandwidth

I want to implement an automated backup system for my site's SQL Server 2005 DB that will back up nightly to Amazon's S3 service. But since S3 charges both for space and bandwidth used, I would like to minimize the size of the files that I transfer. What is the best way to achieve this?
I should clarify that I'm not really talking so much about compression, which is pretty straightforward, but about backup strategies - like whether to do differential backups all the time, whether I need to copy transaction logs, etc.
Differential backups will be smaller than full backups, of course. However, you should consider the restoration side as well. You'll need your last full backup as well as your differentials to perform the restore, which can add up to a lot of bandwidth/transfer time. One option is to perform a full backup weekly and do differentials daily (or a similar type of schedule).
As for transaction logs, it depends on what granularity you're looking for in restoring your data. If restoring to the last full or differential backup is sufficient, then you don't need to worry about taking transaction log backups. If that's not the case, then transaction log backups will be necessary.
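A sketch of that kind of schedule in T-SQL (database name and paths are placeholders; note that native backup COMPRESSION only arrived in SQL Server 2008, so on 2005 you'd compress the files afterwards):

    -- Weekly full backup (e.g. Sunday night)
    BACKUP DATABASE MySiteDB
        TO DISK = N'D:\Backups\MySiteDB_full.bak' WITH INIT;

    -- Nightly differential: only extents changed since the last full
    BACKUP DATABASE MySiteDB
        TO DISK = N'D:\Backups\MySiteDB_diff.bak' WITH DIFFERENTIAL, INIT;

    -- Optional: frequent log backups if you need point-in-time restore
    BACKUP LOG MySiteDB TO DISK = N'D:\Backups\MySiteDB_log.trn';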
Either use a commercial product to compress the backups, like Red Gate Backup Pro, or just zip-compress them after you're done.
Write a batch script or PowerShell script that will find the file(s) created in the past day and zip them up. Then FTP them, or whatever else you have to do.
A PowerShell example that I just came across.

Speed of online backups with BLOBs

In Oracle 8, doing an online backup with BLOBs in the database is extremely slow. By slow, I mean over an hour to back up a database with 100MB of BLOB data. Oracle acknowledged it was slow, but wouldn't fix the problem (so much for paying for support). Does anyone know if Oracle has fixed this problem in subsequent releases? Also, how fast do online backups with BLOBs work in SQL Server and MySQL?
I've had this issue in the past, and the only decent workarounds we found were to make sure that the LOBs were in their own tablespace, and use a different backup strategy with them, or to switch to using the BFILE type. Whether or not you can get by with BFILE will depend on how you're using the LOBs.
Some usage info on BFILE:
http://download-uk.oracle.com/docs/cd/B10501_01/java.920/a96654/oralob.htm#1059942
Note that BFILEs live on the filesystem outside of Oracle, so you'd need to back them up in a process outside of your normal Oracle backup. On one project we just had a scheduled rsync to offsite backup. Also important to note is that you cannot create/update BFILEs via JDBC, but you can read them.
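A minimal BFILE sketch, with invented directory path, table, and file names; the table stores only pointers, and the files stay on the filesystem where the rsync-style backup picks them up:

    -- Directory object pointing at the external location
    -- (needs the CREATE ANY DIRECTORY privilege)
    CREATE DIRECTORY edi_dir AS '/u01/app/edi_files';

    CREATE TABLE edi_archive (
        id  NUMBER PRIMARY KEY,
        doc BFILE
    );

    -- The row just references the external file;
    -- no LOB data enters the tablespace
    INSERT INTO edi_archive VALUES (1, BFILENAME('EDI_DIR', 'order_20090101.txt'));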
To answer your question about the speed of online backups of BLOBs in SQL Server: it's the same speed as backing up regular data for SQL 2000/2005/2008 - it's typically limited by the speed of your storage. I usually get over 100MB/sec on my database backups with BLOBs.
Be wary of using backup compression tools with those, though - if the BLOB is binary-style data that's heavily random, then you'll waste CPU cycles trying to compress the data, and compression can make the backup slower instead of faster.
I use SQL Backup from Redgate for SQL Server -- it is ridiculously fast, even with my BLOB data.
I keep a copy of every file that I do EDI with, so while they aren't huge, they are numerous, and they are BLOBs. I'm well over 100MB of just these text files.
It's important to note that Redgate's SQL Backup is just a front-end to the standard SQL Backup...it gives you additional management features, basically, but still utilizes the SQL Server backup engine.
Depending on the size of the BLOBs, make sure you're storing them in-line / out of line appropriately.
See http://www.dba-oracle.com/t_table_blob_lob_storage.htm
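As a hedged illustration (table and column names invented): by default, LOBs under roughly 4KB are stored in-row, which is right for small values but wrong for large documents, and the storage clause lets you force them out of line:

    CREATE TABLE documents (
        id   NUMBER PRIMARY KEY,
        body BLOB
    )
    LOB (body) STORE AS (
        DISABLE STORAGE IN ROW  -- always keep the LOB out of line
        CHUNK 8192              -- I/O unit for reading/writing the LOB
    );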
Can you put the export file you're creating and the Oracle tablespaces on different disks? Your I/O throughput may be the constraining factor.
exp on 8i was slow, but not as slow as you describe. I have backed up gigabytes of BLOBs in minutes on 10g (to disk, using expdp).

DB2 Online Database Backup

I currently have a 200+ GB database that uses the DB2 built-in backup for a daily backup (and hopefully not restore - lol). But since that backup now takes more than 2.5 hours to complete, I am looking into a third-party backup and restore utility. The version is 8.2 FP14, but I will be moving soon to 9.1, and I also have some 9.5 databases to back up and restore. What are the best tools that you have used for this purpose?
Thanks!
One thing that will help is going to DB2 version 9 and turning on compression. The size of the backup will then decrease (by up to 70-80% at table level), which should shorten the backup time. Of course, if your database is continuously growing you'll soon run into problems again, but then data archiving might be the thing for you.
Before looking at third party tools, which I doubt would help too much, I would consider a few optimizations.
1) Have you used REORG on your tables and indexes? This would compact the information and minimize the amount of pages used;
2) If you can, back up to multiple disks at the same time. This can easily be achieved by running db2 backup db mydb to /mnt/disk1, /mnt/disk2, /mnt/disk3 ...
3) DB2 should do a good job of fine-tuning itself, but you can always experiment with the WITH num_buffers BUFFERS, BUFFER buffer-size and PARALLELISM n options (see the sketch after this list). But again, usually DB2 does a better job on its own;
4) Consider performing daily incremental backups, and a full backup once on Saturdays or Sundays;
5) UTIL_IMPACT_PRIORITY and UTIL_IMPACT_LIM let you throttle the backup process so that it doesn't affect your regular workload too much. This is useful if your main concern is not the time per se, but rather the performance of your data server while you back up;
6) DB2 9's data compression can truly do wonders when it comes to reducing the amount of data that needs to be backed up. I have seen very impressive results and would highly recommend it if you can migrate to version 9.1 or, even better, 9.5.
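Putting options 2, 3, 5 and 6 together, a hedged single-command sketch (the database name and paths are placeholders; the COMPRESS backup option needs 8.2 or later):

    db2 backup db mydb online to /mnt/disk1, /mnt/disk2 with 4 buffers buffer 4096 parallelism 2 compress util_impact_priority 50 include logs

A daily incremental variant would be db2 backup db mydb online incremental to /mnt/disk1, after the TRACKMOD database configuration parameter has been turned on and one full backup has been taken as a base.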
There really are only two ways to make backup, and more importantly recovery, run faster:
1. backup less data and/or
2. have a bigger pipe to the backup media
I think you've gotten a lot of suggestions on how to reduce the amount of data that you back up. Basically, you should be creating a backup strategy that relies on relatively infrequent full backups and much more frequent backups of data changed since the last full backup. I encourage you to take a look at the "Configure Automatic Maintenance" wizard in the DB2 Control Center. It will help you with creating automatic backups and with other utilities like REORG that Antonio suggested. Things like compression obviously can help, as the amount of data is much lower. However, not all DB2 editions offer compression. For example, DB2 Express-C does not. Frankly, doing compression on a 200GB database may not be worthwhile anyway, and that is precisely why free DBMSs like DB2 Express-C don't offer compression.
As far as opening a bigger pipe for your backup, you first have to decide whether you are going to back up to disk or to tape. There is a big difference in speed (obviously disk is a lot faster). Second, DB2 can parallelize backups. So, if you have multiple devices to back up to, it will back up to all of them at the same time, i.e. your elapsed time will be a lot less depending on how many devices you have to throw at the problem. Again, the DB2 Control Center can help you get it set up.
Try High Performance Unload (HPU) - this was a standalone product from Infotel and is now available as part of Optim Data Studio - see the posting here: https://www.ibm.com/developerworks/mydeveloperworks/blogs/idm/date/200910?lang=en
It's not a "third-party" product, but everyone I have ever seen using DB2 uses Tivoli Storage Manager to store their database backups.
Most shops will set up archive logging to TSM so you only have to take the "big" backup every week or so.
Since it's also an IBM product you won't have to worry about it working with all the different flavors of DB2 that you have.
The downside is it's an IBM product. :) Not sure if that ($) makes a difference to you.
I doubt that you can speed things up using another backup tool. As Mike mentions, you can add TSM to the stack, but that will hardly make the backup run any faster.
If I were you, I'd look into where the backup files are stored: are they using the same disk spindles as the database itself? If so, see if you can store the backup files on a storage area which isn't contended for during your backup window.
And consider using incremental backups for daily backups, and then a long full backup on Saturdays.
(I assume that you are already running online backups, so that your data aren't unavailable during backup.)
A third-party backup package probably won't help your speed much. Making sure that you are not doing frequent full backups is probably the first step.
After that, look at where you are writing your backup to. Is it a local drive, instead of a network drive? Are the spindles used for anything else? Backups don't involve a lot of seek activity, but do involve a lot of big writes, so you probably want to avoid RAID 5 and go for large stripe sizes to help maximize throughput.
Naturally, you have to do full backups sooner or later, but hopefully you can find a window when load is light and you can live with a longer time period between backups. Do your full backup during a 4-6 hour period when the normal incrementals are off and then do incrementals based off of that the rest of the time.
Also, until you get your backup copied to a completely separate system you really aren't backed up. You'll have to experiment to figure out if you're better off compressing it before, during or after sending.
