Backup of SQL Server database without timestamps - sql-server

I'm using the following line to backup a Microsoft SQL Server 2008 database:
BACKUP DATABASE #name TO DISK = #fileName WITH COMPRESSION
Given that the database is not changing, repeated execution of this line yields files that are the same size but massively different inside.
How do I create repeated SQL Server backups of the same unchanged database that yield byte-identical files? I assume that plain BACKUP DATABASE invocations add timestamps or other metadata to the backup media; is there a way to disable or strip this addition?
Alternatively, if that's not possible, is there a relatively simple way to compare two backups and tell whether they restore to exactly the same state of the database?
UPDATE: The reason I want to compare backups is that I back up myriads of databases daily, but most of them don't change that often; it's normal for a database to change only several times per year. So, for all the other DBMSes (MySQL, PostgreSQL, Mongo), I use the following algorithm:
Do a new daily backup
Diff new backup with the most recent of the old backups
If the database wasn't changed (i.e. backups match), delete the new daily backup I've just created
This algorithm works with every DBMS we've encountered so far, but, alas, it fails here because MSSQL backups are not repeatable.

As you guessed, part of the backup catalog includes the date and time of the backup. The WITH COMPRESSION option compresses the backup to save space, but even a small change in the input causes changes throughout the file because of the way compression algorithms work.
If you don't want so many differences, remove the compression option, but comparing backup files isn't the way to go.
If you have a database that changes little, then differential or transaction log backups may be of more use.
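For instance, a differential backup records only the extents changed since the last full backup, so an unchanged database produces a tiny differential. A minimal sketch, with a hypothetical database name and paths:

-- Full backup establishes the differential base
BACKUP DATABASE MyDb TO DISK = N'D:\Backups\MyDb_full.bak';

-- Later differential backups contain only extents changed since that full backup
BACKUP DATABASE MyDb TO DISK = N'D:\Backups\MyDb_diff.bak' WITH DIFFERENTIAL;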
However, you seem to have fallen into the classic trap called the XY Problem: you are asking about your attempted solution rather than your actual problem. What is prompting you to try to compare databases?

Related

Comparing a database with its backup file

I have a requirement to compare a database with the same database's backup file and restore the database from the backup file only if they are different.
Use case: I have a test server, and I want to restore the database on it from a backup file on a remote file system only if the two differ. I was thinking about comparing hashes (but I read somewhere that there is a limit on the size). Any insights on how this can be achieved? I'd also like to know how to generate a hash from the database and compare it with the backup file's hash.
You can compare the current database LSN with the backup LSN.
Note that, due to checkpoints, your database and the original database will quickly diverge, even if the data is not touched.
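A rough sketch of the LSN comparison (the database name and backup path are placeholders): read the LSNs from the backup header and compare them with what msdb recorded for the most recent backups.

-- LSNs stored in the backup header (FirstLSN, LastLSN, CheckpointLSN, DatabaseBackupLSN, ...)
RESTORE HEADERONLY FROM DISK = N'D:\Backups\MyDb.bak';

-- LSNs msdb recorded for the latest backups of the database
SELECT TOP (5) backup_finish_date, type, first_lsn, last_lsn, checkpoint_lsn, database_backup_lsn
FROM msdb.dbo.backupset
WHERE database_name = N'MyDb'
ORDER BY backup_finish_date DESC;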
The closest thing to what you probably want is log shipping. It lets you minimize the traffic between your original DB and the test DB, but it requires the test DB to be read-only and requires a correct log backup sequence to be established and maintained on prod.
The short answer to what you ask is: it is not possible.
I have a requirement to compare a database with the same database's backup file and restore the database from the backup file only if they are different.
There's no way to compare a backup to a live database without restoring it to a different location and then comparing the schema definition and all the data.
You can use replication to keep the prod and test versions in sync. It's already built in.
how to generate a hash from the database and compare it with the backup file's hash
The backup file and the database files have different formats, so the hashes will never match even if they represent the same database state.
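A minimal sketch of the restore-and-compare approach (database name, logical file names, and paths are assumptions):

-- Find the logical file names stored in the backup
RESTORE FILELISTONLY FROM DISK = N'D:\Backups\MyDb.bak';

-- Restore the backup under a different name alongside the live database
RESTORE DATABASE MyDb_Compare
FROM DISK = N'D:\Backups\MyDb.bak'
WITH MOVE N'MyDb' TO N'D:\Data\MyDb_Compare.mdf',
     MOVE N'MyDb_log' TO N'D:\Data\MyDb_Compare_log.ldf',
     RECOVERY;

-- Then compare MyDb and MyDb_Compare with a schema/data diff tool or your own queries.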

Backup and restore SQL Server database without FILESTREAM filegroup

I use SQL Server and have a huge database with two filegroups:
Primary: which contains all the data except the large files (1 MB+)
FILESTREAM (read/write): which contains the large files
Now, the backup scenario is:
Every Friday, a full backup (2 A.M.).
Every other day of the week, a differential backup (2 A.M.).
Since the database is large, and it is in production on a remote server, whenever I want to bring the database to my local environment to create a test database (weekly), I have to bring both the primary and the filestream.
I would like to change the way the backups and restores are done so that I only have to bring over the primary filegroup and ignore the FILESTREAM data. That way, every week I would only transfer the primary filegroup, and not all the data that the FILESTREAM filegroup holds.
I suspect there could be a lot of problems, since all FILESTREAM references would be broken when accessing the files. I would like to know whether it is possible to modify the content of the FILESTREAM columns when performing a backup, or to use a different FILESTREAM container hosted in the test environment. Also, I've heard about piecemeal restores of only some filegroups, but I have many doubts about how to carry one out.
Question 1: Is this scenario possible?
Question 2: Is it a good idea to keep only one full backup and bring differential backups/transaction logs to the test environment?
Question 3: Is there a better backup and restore scenario?
I'm all ears for recommendations. If you have an example case, please show it with a T-SQL query.
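Not a full answer, but a minimal sketch of the filegroup backup / piecemeal restore idea mentioned above (all names and paths are hypothetical; whether the read/write FILESTREAM filegroup can be left unrestored depends on edition and recovery model, so test on a copy first):

-- On production: back up only the primary filegroup
BACKUP DATABASE HugeDb FILEGROUP = 'PRIMARY'
TO DISK = N'D:\Backups\HugeDb_primary.bak';

-- On the test server: piecemeal restore that brings only the primary filegroup online,
-- leaving the FILESTREAM filegroup recovery pending
RESTORE DATABASE HugeDb FILEGROUP = 'PRIMARY'
FROM DISK = N'D:\Backups\HugeDb_primary.bak'
WITH PARTIAL,
     MOVE N'HugeDb' TO N'E:\Data\HugeDb.mdf',
     MOVE N'HugeDb_log' TO N'E:\Data\HugeDb_log.ldf',
     RECOVERY;

-- Queries that touch FILESTREAM columns will fail until that filegroup is restored.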

What do these Copy Only Backup options mean?

I am currently trying to back up an empty SQL Server 2008 R2 database that I designed for a project that is being shelved for the time being. I was going through the backup procedure in SQL Server Management Studio when I noticed there was an option to make a Copy Only Backup. I looked it up to see what it was, but I didn't fully understand the options I was getting.
http://technet.microsoft.com/en-us/library/ms191495.aspx
I read the entry above as well as other entries and I keep seeing the phrase "independent of the sequence of conventional SQL Server backups."
Can anyone elaborate on what this statement means, or on copy-only backups in general? I'm not sure whether it's the kind of backup I should do in this case (my first reaction is no).
It's a full dump of a database where you intend to take that dump and load it into some OTHER SQL Server instance. For example, it's a nice way of making a complete copy of a DB without having to take the DB down, detach it, copy the .mdf files, re-attach, etc.
Naturally, since you're not using this "backup" as an actual backup, you don't want it to interfere with your normal backup schedules, hence the copy-only functionality. It's a full backup, but will not reset the backup schedule, so your normal next incremental/snapshot backup will work as usual.
This mechanism is necessary since the built-in hot-copy/migration tools in SSMS are basically useless and in many cases can't handle their own databases.
Normally when you take a backup, it starts (or continues, depending on the type of backup that you took) what is called a log chain. Let's say that you need a copy of your database and, for whatever reason, you can't use your normally scheduled backups for this purpose. Let's walk through the scenario where you don't use a copy_only backup:
Normal full backup
A bunch of differential backups
Another full backup (to make your copy database)
More differential backups
Delete the backup from step 3 (you know... to save space)
Disaster on your actual database that necessitates restore from backup
In this case, you can only restore to the last differential backup made in step 2, because the differential backups made in step 4 depend on the full backup from step 3. Now, if the backup in step 3 had been a copy_only backup, you'd be fine, because it would not re-establish the differential base (which is to say that the differential backups in step 4 would still depend on the full backup from step 1).
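In T-SQL, the copy-only backup for step 3 would look like this (database name and path are placeholders):

-- Full backup that does NOT reset the differential base
BACKUP DATABASE MyDb
TO DISK = N'D:\Backups\MyDb_copy.bak'
WITH COPY_ONLY;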
If you are creating an archive backup and continuing the backup sequence on the server is not a concern, then it doesn't matter whether you use COPY_ONLY or not; the file will be restorable as the database either way.

SQL Server backup/restore vs. detach/attach

I have one database that contains the most recent data, and I want to replicate its content to some other servers. For non-technical reasons, I cannot directly use replication or sync features to sync to the other SQL Server instances.
Now, I have two solutions, and I want to learn the pros and cons for each solution. Thanks!
Solution 1: detach the source database that contains the most recent data, copy its files to the destination servers that need the data, and attach the database on those servers.
Solution 2: make a full backup of the whole database on the source server, copy it to the destination servers, and perform a full restore on the destination side.
thanks in advance,
George
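A minimal sketch of both solutions (database name, logical file names, and paths are assumptions):

-- Solution 1: detach on the source, copy the .mdf/.ldf files, attach on the destination
USE master;
EXEC sp_detach_db @dbname = N'SalesDb';
-- ...copy SalesDb.mdf and SalesDb_log.ldf to the destination server, then run there:
CREATE DATABASE SalesDb
ON (FILENAME = N'D:\Data\SalesDb.mdf'),
   (FILENAME = N'D:\Data\SalesDb_log.ldf')
FOR ATTACH;

-- Solution 2: back up on the source, copy the .bak file, restore on the destination
BACKUP DATABASE SalesDb TO DISK = N'D:\Backups\SalesDb.bak';
RESTORE DATABASE SalesDb
FROM DISK = N'D:\Backups\SalesDb.bak'
WITH MOVE N'SalesDb' TO N'D:\Data\SalesDb.mdf',
     MOVE N'SalesDb_log' TO N'D:\Data\SalesDb_log.ldf',
     REPLACE;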
The Detach / Attach option is often quicker than performing a backup as it doesn't have to create a new file. Therefore, the time from Server A to Server B is almost purely the file copy time.
The Backup / Restore option allows you to perform a full backup, restore that, then perform a differential backup which means your down time can be reduced between the two.
If it's data replication you're after, does that mean you want the database functional in both locations? In that case, you probably want the backup / restore option as that will leave the current database fully functional.
EDIT: Just to clarify a few points. By downtime I mean that if you're migrating a database from one server to another, you generally will be stopping people using it whilst it's in transit. Therefore, from the "stop" point on Server A up to the "start" point on Server B this could be considered downtime. Otherwise, any actions performed on the database on server A during transit will not be replicated onto server B.
In regards to "create a new file": if you detach a database, you can copy the MDF file immediately; it's already there, ready to be copied. However, if you perform a backup, you have to wait for the .BAK file to be created and then move it to its new location for a restore. Again, this all comes down to whether this is a snapshot copy or a migration.
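To illustrate the reduced-downtime sequence mentioned above (names and paths are hypothetical), the destination restores the full backup without recovering, and only the small differential is applied during the cutover window:

-- While Server A stays online: restore the copied full backup, but leave it restoring
RESTORE DATABASE SalesDb
FROM DISK = N'D:\Backups\SalesDb_full.bak'
WITH NORECOVERY,
     MOVE N'SalesDb' TO N'D:\Data\SalesDb.mdf',
     MOVE N'SalesDb_log' TO N'D:\Data\SalesDb_log.ldf';

-- At cutover: stop writes on Server A, take and copy a small differential, then
RESTORE DATABASE SalesDb
FROM DISK = N'D:\Backups\SalesDb_diff.bak'
WITH RECOVERY;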
Backing up and restoring makes much more sense, even if you might eke out a few extra minutes with the detach/attach option. You have to take the original database offline (disconnect everyone) prior to a detach, and then the DB is unavailable until you reattach it. You also have to keep track of all of the files, whereas with a backup all of the files are grouped together. And with the most recent versions of SQL Server, the backups are compressed.
And just to correct something: DB backups and differential backups do not truncate the log, and do not break the log chain.
In addition, the COPY_ONLY functionality only matters for the differential base, not for the LOG. All log backups can be applied in sequence from any backup assuming there was no break in the log chain. There is a slight difference with the archive point, but I can't see where that matters.
Solution 2 would be my choice, primarily because it won't create any downtime on the source database. The only disadvantage I can see is that, depending on the database recovery model, the transaction log will be truncated, meaning that if you wanted to restore any data from the transaction log you'd be stuffed; you'd have to use your backup file.
EDIT: Found a nice link; http://sql-server-performance.com/Community/forums/p/5838/35573.aspx

Backup SQL Server while minimizing bandwidth

I want to implement an automated backup system for my site's SQL Server 2005 DB that will back up nightly to Amazon's S3 service. Since S3 charges for both storage and bandwidth, I would like to minimize the size of the files that I transfer in. What is the best way to achieve this?
I should clarify that I'm not really asking about compression, which is pretty straightforward, but about backup strategy: whether to do differential backups all the time, whether I need to copy transaction logs, etc.
Differential backups will be smaller than full backups, of course. However, you should consider the restoration side as well. You'll need your last full backup as well as your differentials to perform the restore which can add up to a lot of bandwidth/transfer time for a restore. One option is to perform a full backup weekly and do differentials daily (or a similar type of schedule).
As for transaction logs, it depends on what granularity you're looking for in restoring your data. If restoring to the last full or differential backup is sufficient, then you don't need to worry about taking transaction log backups. If that's not the case, then transaction log backups will be necessary.
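A schedule along those lines might look like this (the database name, paths, and exact cadence are assumptions; note that native backup compression is not available in SQL Server 2005, hence the zip suggestions below):

-- Sunday: full backup
BACKUP DATABASE SiteDb TO DISK = N'D:\Backups\SiteDb_full.bak' WITH INIT;

-- Monday through Saturday: differential backup (changes since the last full)
BACKUP DATABASE SiteDb TO DISK = N'D:\Backups\SiteDb_diff.bak' WITH DIFFERENTIAL, INIT;

-- Optionally, frequent log backups for point-in-time restore (full recovery model only)
BACKUP LOG SiteDb TO DISK = N'D:\Backups\SiteDb_log.trn' WITH INIT;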
Either use a commercial product to compress the backups, like Red Gate Backup Pro, or just zip-compress them after you're done.
Write a batch script or PowerShell script that finds the file(s) created in the past day and zips them up, then FTP them or do whatever else you need to do.
A PowerShell example that I just came across.
