Database backup size conundrum - sql-server

I have a small SQL Server 2005 database. I take a daily automated backup, and the .bak file is typically around 400 MB, growing by about 5 MB a day (which is in line with its usage).
Last night the size of the backup file jumped to 1 GB. Suspecting that someone was trying to fill the database with garbage data, I ran a report (Reports -> Standard Reports -> Disk Usage by Top Tables) and the total size came out to be around 400 MB.
Then, thinking something might be wrong with the automated backup process, I immediately took a backup again, but the .bak file still came out to be over 1 GB. Before yesterday's automated backup, an automated task that defragments indexes also ran. However, all these months, a backup taken after this index optimization actually used to reduce the size of the .bak file.
I am trying to find an explanation for this big jump in size overnight, and also why the .bak file is more than double the actual database's disk usage.
UPDATE: I ran
DBCC SHRINKDATABASE(mydb)
to remove transaction logs. Then took a backup again. The size of the .bak file came out even bigger than last time.
This is the query I ran:
DBCC SHRINKFILE(mydb_log, 1)
BACKUP LOG mydb WITH TRUNCATE_ONLY
DBCC SHRINKFILE(mydb_log, 1)

1 - How are you backing up the data? Maintenance plans?
2 - If you are appending to the same database backup file, the backup will grow!
Check out the contents of the file.
RESTORE FILELISTONLY FROM AdventureWorksBackups WITH FILE=1;
http://technet.microsoft.com/en-us/library/ms173778.aspx
This assumes you have a dump device named AdventureWorksBackups. You can also change it to DISK = 'AdventureWorks.bak'.
3 - Also, the maintenance plans do not do a good job of determining when to re-organize/update stats versus rebuild an index.
4 - Check out the Ola Hallengren scripts. They are way better!
http://ola.hallengren.com/
First, they create a directory structure for you.
c:\backup\<server name>\<database name>\full
c:\backup\<server name>\<database name>\diff
c:\backup\<server name>\<database name>\log
Second, each backup has a date time stamp. No appending to backup files.
Third, they clean up after themselves by passing the number of hours to keep on-line.
Fourth, they handle index fragmentation better: 5-30% fragmentation = reorganize, over 30% = rebuild.
I usually set up the following for my databases.
1 - system databases - full backup every night, log backups hourly.
2 - user databases - full backup once a week, diff backups the other 6 days, log backups hourly
Last but not least, SQL Server 2005 does not have native support for compressed backups. This does not mean you cannot run a batch file to zip them up afterwards.
Third-party tools like Quest (Dell) and Red Gate used to ship their own backup utilities, mainly to fill this gap. Compressed backups have been available natively since SQL Server 2008, so I think many of the vendors are retiring these utilities now that the feature is standard.
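On point 2: by default (NOINIT) each BACKUP DATABASE appends a new backup set to the same file, so the .bak grows on every run. A minimal sketch of an overwriting, date-stamped backup instead (the path and database name here are placeholders, not from the original post):

```sql
-- Build a per-day file name, then overwrite rather than append.
-- WITH INIT replaces any existing backup sets in the target file.
DECLARE @file NVARCHAR(260);
SET @file = N'C:\backup\mydb_'
          + CONVERT(NVARCHAR(8), GETDATE(), 112)  -- yyyymmdd
          + N'.bak';

BACKUP DATABASE mydb
TO DISK = @file
WITH INIT,      -- overwrite: do not append another backup set
     CHECKSUM;  -- verify page checksums during the backup
```

This mirrors what the Ola Hallengren scripts do: one time-stamped file per backup, never appending.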

Related

Correct process for reducing SQL Log file size

SQL 2008 R2. Full back-up every night with replication to a 2nd server for reporting. Business critical multiple databases. SQL Log files in danger of exceeding available disk space
SHRINKFILE did not seem effective. I created a backup of the log file and then used the GUI option to shrink the log file. Having done this, the log file size is much more manageable. When is it safe to delete the log file backup, or is it never?
You need to keep the transaction log backups for the full backups that you will potentially use to recover the database. E.g. if you keep the last 7 full backups, you just need to keep the log backups for the last week, so you can recover the db to any point in time in the week if something is wrong.
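To illustrate why those log backups matter, a point-in-time restore chains the full backup with the subsequent log backups (file names and the STOPAT time below are hypothetical):

```sql
-- Restore the full backup but leave the database in the RESTORING
-- state so log backups can be applied on top of it.
RESTORE DATABASE mydb
    FROM DISK = N'C:\backup\mydb_full.bak'
    WITH NORECOVERY;

-- Apply log backups in sequence.
RESTORE LOG mydb
    FROM DISK = N'C:\backup\mydb_log_0900.trn'
    WITH NORECOVERY;

-- Stop at the desired moment and bring the database online.
RESTORE LOG mydb
    FROM DISK = N'C:\backup\mydb_log_1000.trn'
    WITH STOPAT = '2017-08-28 09:45:00', RECOVERY;
```

Without the intermediate log backups, the chain is broken and you can only recover to the full (or differential) backup itself.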

SSISDB file size is not reduced even after deleting half of the history, why?

We are using SQL Server 2014, and many SSIS packages have been executed as part of daily/weekly jobs for about a year.
Today I reduced the retention period from 100 days to 50 days and ran the SSIS Server Maintenance job (runs daily).
But the mdf (44GB) and ldf (6GB) file sizes are not reduced.
Surprisingly, if you see the attachment, the ldf has not been modified since 28-Aug-2017.
SSISDB Backup is taken as part of Maintenance Plan everyday.
I am sure we don't have 50 GB of data in SSISDB; it is definitely less than that.
Why are the SSISDB.mdf and SSISDB.ldf file sizes not reduced even after deleting half of the history?
How do I release unused space in SSISDB (log, mdf, or both)?
Can anyone help?
SSISDB is just another database. Deleting history only frees space inside the files; it does not return that space to the operating system. For the database files themselves to shrink, you also need to run DBCC SHRINKDATABASE (or DBCC SHRINKFILE).
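A sketch of that, assuming the default SSISDB logical file names (check yours with sys.database_files first, as they may differ):

```sql
USE SSISDB;
GO
-- Confirm the logical file names and current sizes before shrinking.
-- size is in 8 KB pages, so size * 8 / 1024 gives MB.
SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM sys.database_files;

-- Reclaim the space freed by the retention cleanup.
DBCC SHRINKDATABASE (SSISDB);

-- Or target a single file, e.g. shrink the data file to ~10 GB
-- ('data' is the default logical name for SSISDB's mdf):
-- DBCC SHRINKFILE (data, 10240);
```

Note the usual caveat: shrinking fragments indexes and the files will simply re-grow if the retained history still needs the space, so treat this as a one-off after a large cleanup, not a scheduled job.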

Disaster Recovery - Restoring SQL Server database without MDF

Given the following (hypothetical) scenario, how would one best backup/restore the database.
Daily full backups at 12 am.
Hourly differentials at 1 am, 2 am, etc.
Transaction log backups on the half hours: 1:30 am, 2:30 am, etc.
I am also storing the active .ldf file on drive X and the .mdf on drive Y.
Also important the master db is on Y.
Let's say, hypothetically, the Y drive fails at 2:45 am.
I have the full, diffs, and transaction logs up until 2:30 am. BUT I also have the .ldf.
In theory I would probably have to reinstall SQL Server. Then I would want to recover that database up until 2:45 am.
I have heard of doing a tail-log backup on a restore operation BUT I don't have the .mdf anymore. So, I would need to create a new database from my full/diff/log backups. After that I'm not sure how to proceed to get that last 15 minutes of transactions.
I hope this is making sense...
Thanks!
Steve.
You are asking how to take a tail-log backup when you don't have access to the MDF files.
This works only if your database is not in the BULK_LOGGED recovery model, or if your log doesn't contain bulk-logged transactions. This has been covered in depth here: Disaster recovery 101: backing up the tail of the log
Here are the steps, in order:
Create a dummy database with the same names.
Take the dummy database offline and delete all of its files.
Copy in the original database's LDF.
Bring the database online, which will fail (as expected).
Now you can take the tail-log backup using the command below:
BACKUP LOG dummydb
TO DISK = N'D:\SQLskills\DemoBackups\DBMaint_Log_Tail.bck' WITH INIT, NO_TRUNCATE;
GO
Now, since you have all the backups, you can restore to the point in time of the failure.
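The steps above can be sketched end to end (the database name and paths are hypothetical, and the file deletion/copy in the middle happens at the OS level, not in T-SQL):

```sql
-- 1. Create a dummy database whose name and file names match the lost one.
CREATE DATABASE mydb;
GO

-- 2. Take it offline so its files can be swapped out on disk.
ALTER DATABASE mydb SET OFFLINE;
-- (At the OS level: delete the dummy's files, copy in the surviving .ldf.)

-- 3. Bringing it online will fail because the MDF is missing - expected.
ALTER DATABASE mydb SET ONLINE;
GO

-- 4. The tail of the log can still be backed up: NO_TRUNCATE lets
--    BACKUP LOG read the log file even though the data file is gone.
BACKUP LOG mydb
TO DISK = N'D:\backups\mydb_tail.trn'
WITH INIT, NO_TRUNCATE;
```

With that tail-log backup in hand, the restore sequence is full backup, latest differential, all log backups, then the tail, recovering the database to 2:45 am.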

Freeing space on SQL Server Drive

I ran into an error today regarding unavailable space on the drive where my SQL tables and logs are stored, which has left me unable to update any of the databases.
I looked at the database server and deleted a database of approximately 1.5 GB to allow me to continue. Looking on the server drive, I can see backups for the database that I have deleted in this location:
E:\Program Files\Microsoft SQL Server\MCSSQL12.SQL2014\MSSQL\Backup
Inside this folder there are 5 backup copies from the last 5 days, which I would like to delete to clear up the space. However, when I delete the .BAK files from this folder, the free space is not reclaimed. Am I missing a step here somewhere?
Quick wins:
If this is on a SAN, get the drive expanded by 20% just to give you some breathing room while you sort everything else.
Dropping databases does not delete their backups. You have to delete those too (and make sure they don't just land in the Recycle Bin). But not until you've verified that you either A) really don't need them anymore or B) have copies of those backups somewhere more durable.
Get your backups moved to a different drive - both now and for your scheduled backups. Your backups should not be on the same drive as your data. File this under "putting all of your eggs into one basket". Consider this: If your E drive fails, it's going to take both your data files and your backups with it. What do you have left?
Review your backup retention. Do you really need all 5 of those backups? On my instances, we do daily Full backups and transaction log backups every 15 minutes. We keep at most 2 Fulls on local storage - everything else is on tape. And once you have a Full, the transaction logs between that Full and the previous one aren't really needed unless you need to do a point-in-time restore to somewhere in between them. All of this is managed by Agent jobs executing Ola Hallengren's backup scripts, so we're hands-off and just monitoring.
Are you using backup compression? If not, this may help you get more backups on your volume if you can't change your retention period or backup location (but really, put your backups on another volume, in the interest of safety).
Less-quick wins:
Consider moving your logs to a separate drive. Transaction logs and data files have different I/O profiles/requirements, so by moving them to a different drive you can tune each appropriately - for both performance and space requirements.
Review your indexes and eliminate any that you don't really need (or consolidate them). Indexes take up space! Dropping unused indexes won't give free space back to the volume, but it will free space in your MDF files that can be used for other data instead of growing right away.
Do not shrink your database unless absolutely necessary and you're confident that it won't re-grow significantly in the very near future.
What are your data and log file growth increments? If you have free space on the drives where the log and data files live, and you are allowing automatic file growth, SQL Server will grow the files when space is needed.
You may also want to review the information here MSDN regarding the DBCC SHRINKFILE command.
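A quick way to see the current sizes and growth settings for every file on the instance (a read-only sketch against the standard catalog view):

```sql
-- List every data/log file with its size and growth increment.
-- size and growth are in 8 KB pages unless is_percent_growth = 1.
SELECT DB_NAME(database_id)   AS database_name,
       name                   AS logical_name,
       type_desc,             -- ROWS (data) or LOG
       size * 8 / 1024        AS size_mb,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS VARCHAR(10)) + ' %'
            ELSE CAST(growth * 8 / 1024 AS VARCHAR(10)) + ' MB'
       END                    AS growth_increment
FROM sys.master_files
ORDER BY database_name, type_desc;
```

This makes it easy to spot files with tiny growth increments (lots of small grow events) or percentage growth on large files (increasingly huge grow events), both of which are worth fixing while you're sorting out the drive.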

Backup of SQL Server database without timestamps

I'm using the following line to backup a Microsoft SQL Server 2008 database:
BACKUP DATABASE #name TO DISK = #fileName WITH COMPRESSION
Given that database is not changing, repeated execution of this line yields files that are of the same size, but are massively different inside.
How do I create repeated SQL Server backups of the same unchanged database that would give same byte-accurate files? I guess that simple BACKUP DATABASE invocations add some timestamps or some other meta information in the backup media, is there a way to disable or strip this addition?
Alternatively, if it's not possible, is there a relatively simple way to compare 2 backups and see if they'll restore of the exactly same state of the database?
UPDATE: My point for backup comparison is that I'm backing up myriads of databases daily, but most databases don't change that often. It's normal for most of them to change only several times per year. So, basically, for all other DBMSes (MySQL, PostgreSQL, Mongo), I'm using the following algorithm:
Do a new daily backup
Diff new backup with the most recent of the old backups
If the database wasn't changed (i.e. backups match), delete the new daily backup I've just created
This algorithm works with all DBMSes we've encountered before, but, alas, it fails because of non-repeatable MSSQL backups.
As you guessed, part of the backup catalog includes the date and time of the backup. The WITH COMPRESSION option compresses the backup to save space, but a small change in the file will cause changes throughout the file because of the way compression algorithms work.
If you don't want so many differences, then remove the compression option, but comparing backup files isn't the way to go.
If you have a database that changes little then incremental or differential backups may be of more use.
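A minimal differential sequence, assuming a full backup already exists (file names are placeholders):

```sql
-- Weekly base backup:
BACKUP DATABASE mydb
TO DISK = N'C:\backup\mydb_full.bak'
WITH INIT;

-- Daily differential: contains only the extents changed since the
-- last full backup, so for a database that rarely changes it stays
-- tiny - which itself tells you whether anything changed.
BACKUP DATABASE mydb
TO DISK = N'C:\backup\mydb_diff.bak'
WITH DIFFERENTIAL, INIT;
```

For the "did anything change?" question, a near-empty differential is a much cheaper signal than byte-comparing full backup files.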
However you seem to have fallen into a classic trap called the XY Problem as you are asking about your attempted solution rather than your actual problem. What is prompting you to try and compare databases?