How to find out if two binary files are exactly the same - md5

I have a repository where I store all my image files. I know that many of the images are duplicated, and I want to delete the duplicates.
I thought that if I generate a checksum for each image file and rename the file to its checksum, I can easily spot duplicates by examining the filenames. The problem is that I am not sure which checksum algorithm to use. For example, if I generate the checksums using MD5, can I trust that two files with the same checksum are exactly the same file?
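For illustration, a minimal sketch of the rename-to-checksum idea in Python, assuming hashlib and local file paths; the helper names are mine, not part of the question:

import hashlib
import os

def md5_of_file(path, chunk_size=65536):
    # Stream the file in chunks so large images do not need to fit in memory.
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def rename_to_checksum(path):
    # If two files end up targeting the same name, they are candidate duplicates.
    digest = md5_of_file(path)
    ext = os.path.splitext(path)[1]
    target = os.path.join(os.path.dirname(path), digest + ext)
    if os.path.exists(target) and os.path.abspath(target) != os.path.abspath(path):
        print("candidate duplicate:", path, "->", target)
    else:
        os.rename(path, target)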

Judging from the response to a similar question on the security forum (https://security.stackexchange.com/a/3145), the collision rate is about 1 collision per 2^64 messages. If your files genuinely differ and your collection is not huge (i.e., nowhere near that number), MD5 can be used safely.
Also, see response to a very similar question here: How many random elements before MD5 produces collisions?

The chance of getting the same checksum for two different files is extremely slim, but it can never be ruled out entirely (pigeonhole principle). An indication of how slim: Git uses SHA-1 checksums for source code, including the Linux kernel, and this has never caused any known problems, so I would say you are safe. If you are really paranoid, I would use SHA-1 instead of MD5, because it is slightly stronger.

To be really sure, follow a two-step procedure: first calculate a checksum for every file. If the checksums differ, you know the files are not identical. If you find files with equal checksums, there is no way around a bit-by-bit comparison to be 100% sure they really are identical. This holds regardless of the hashing algorithm used.
What you gain is a massive time saving: a bit-by-bit comparison of every possible pair of files would take forever and a day, while comparing a handful of candidates is fairly easy.
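As a rough sketch of that two-step procedure (the function names are mine, and MD5 is assumed as the checksum, per the question):

import filecmp
import hashlib
from collections import defaultdict

def checksum(path, chunk_size=65536):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def find_duplicates(paths):
    # Step 1: bucket files by checksum; files in different buckets cannot match.
    buckets = defaultdict(list)
    for p in paths:
        buckets[checksum(p)].append(p)

    # Step 2: only files that share a checksum are compared byte by byte.
    duplicates = []
    for candidates in buckets.values():
        for i, a in enumerate(candidates):
            for b in candidates[i + 1:]:
                if filecmp.cmp(a, b, shallow=False):
                    duplicates.append((a, b))
    return duplicates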

Related

My program relies on hashes to identify files, some are repeated. How can I work around this?

Sorry for the messy title, but I can't come up with something that really describes what's happening here. I'm making a program that fetches .cue files for PlayStation 1 ROMs. To do this, the program creates a SHA-1 hash of each file and checks it against a database. The database can be found in the "psx.hash" file in this repo. This has been working fine, but I suddenly stumbled upon a very nasty problem: there are plenty of files that have the same hash, because they are essentially the same file.
Let me break down the problem a bit. PSX ROMs are essentially CD images, and they can come in tracks. These tracks usually contain audio, and the .cue file is used to tell the emulator where each audio track is located [in the disc file]. So what I do is identify each and every track file (based on its SHA-1 hash), see if it matches the database, and then construct a link based on its name (minus the track text) to get to the original cue file. Then I read the text and add it to the cue, simple as that. Well, apparently many games use the same track for some reason? Exactly 175 of them.
So... what can I do to differentiate them? This leads to the problem that I fetch the wrong cue file whenever this hash comes into play. This is the hash, by the way: "d9f92af296360772e62caa4cb276de3fa74f5538". I tried other algorithms to see if it was just an extremely unlikely coincidence, but nope, all gave the same results: SHA-256 gave the same result, CRC gave the same result, MD5 gave the same result (by "the same result" I mean the same between files; of course the results of different algorithms for the same file will be different).
So I don't know what to do. This is a giant bug in my program that I have no idea on how to fix, any insight is welcome. I'm afraid I explained myself poorly, if so, I apologize, but I have a hard time seeing where I may not be clear enough, so if you have any doubts please, do ask.
It's worth noting that the database was not constructed by myself, but by redump.org. Also, here's the code I'm using to retrieve the hashes of the files:
import hashlib

def getSha1(file):
    hashSha1 = hashlib.sha1()
    # Read in 4 KB chunks so large track files do not have to fit in memory.
    with open(file, "rb") as f:
        for chunk in iter(lambda: f.read(4096), b""):
            hashSha1.update(chunk)
    return hashSha1.hexdigest()
The correct solution would be to construct the hash file in such a way that I can differentiate between track files for each game, but I ended up doing the following:
Sort the list of Tracks to have them ordered.
Get the first track file and retrieve the hash (this one will always be unique since it contains the game)
For every next track file that isn't Track 1, assume it belongs to the game before it. So if the next file is Track 2, assume it belongs to the previous file that had Track 1.
This nicely avoids the issue, although it's circumventing the bigger problem of not having properly formatted data.
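A rough sketch of that workaround, assuming the track number appears in the file name as "(Track N)"; the names and the regular expression are hypothetical, not the poster's actual code:

import re

def assign_tracks_to_games(track_files):
    # Walk the sorted track list; Track 1 identifies a game, and every
    # following non-Track-1 file is assumed to belong to that same game.
    assignments = {}
    current_game = None
    for name in sorted(track_files):
        match = re.search(r"\(Track (\d+)\)", name)
        track_no = int(match.group(1)) if match else 1
        if track_no == 1:
            current_game = re.sub(r"\s*\(Track \d+\)", "", name)
        assignments[name] = current_game
    return assignments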

Finding duplicate files on a hard disk by a technique other than calculating a hash of each file

There is a hard disk with lots of files; how would you find the duplicate files among them?
The first thing we could do is separate the files on the basis of FILE_SIZE.
Then we could compute a hash value for each file using some algorithm like MD5; files with the same hash would be duplicates.
Can anyone suggest other approaches to narrow down candidate duplicate files, apart from using FILE_SIZE? Maybe using file headers, extensions, or any other idea?
You may want to use multiple levels of comparisons, with the fast ones coming first to avoid running the slower ones more than necessary. Suggestions:
Compare the file lengths.
Then compare the first 1K bytes of the files.
Then compare the last 1K bytes of the files. (The first and last parts of a file are more likely to contain signatures, internal checksums, modification data, etc., that will change.)
Finally, compare the CRC32 checksums of the files. Use CRC rather than a cryptographic hash unless security is a concern; CRC will be much faster. (A sketch of this staged comparison follows below.)
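A sketch of that staged comparison in Python (the helper names are mine); each stage runs only if the previous, cheaper one did not already rule the pair out:

import os
import zlib

def first_bytes(path, n=1024):
    with open(path, "rb") as f:
        return f.read(n)

def last_bytes(path, n=1024):
    with open(path, "rb") as f:
        f.seek(0, os.SEEK_END)
        size = f.tell()
        f.seek(max(0, size - n))
        return f.read(n)

def crc32_of_file(path, chunk_size=65536):
    crc = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            crc = zlib.crc32(chunk, crc)
    return crc

def probably_identical(a, b):
    # Cheapest checks first: size, then the first and last kilobyte,
    # and only then a CRC32 over the whole file.
    if os.path.getsize(a) != os.path.getsize(b):
        return False
    if first_bytes(a) != first_bytes(b):
        return False
    if last_bytes(a) != last_bytes(b):
        return False
    return crc32_of_file(a) == crc32_of_file(b)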

What's the fastest way to tell if two MP3 files are duplicates?

I want to write a program that deletes duplicate iTunes music files. One approach to identifying dupes is to compare MD5 digests of the MP3 and m4a files. Is there a more efficient strategy?
BTW the "Display Duplicates" menu command on iTunes shows false positives. Apparently it just compares on the Artist and Track title strings.
If you use hashes to compare two pieces of data, they have to have exactly the same input in order to produce exactly the same output (unless you miraculously hit a collision, where two different inputs give the same output). If you compare two MP3 files by hashing the entire file, the song data itself might be exactly the same, but since ID3 metadata is stored inside the file, any discrepancy there will make the files appear completely different. With a hash you won't notice that perhaps 99% of the two files match, because the outputs will be completely different.
If you really want to use a hash for this, you should hash only the sound data, excluding any tags that may be attached to the file. Even so, this is not reliable: if music is ripped from CDs, for example, and the same CD is ripped twice, the results might be encoded/compressed differently depending on the ripping parameters.
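For what it's worth, a sketch of hashing only the audio payload by skipping a leading ID3v2 tag and a trailing ID3v1 tag; it ignores rarer cases (ID3v2 footers, APEv2 tags), so treat it as a starting point rather than a complete parser:

import hashlib

def audio_digest(path):
    with open(path, "rb") as f:
        data = f.read()

    start = 0
    if data[:3] == b"ID3" and len(data) >= 10:
        # The ID3v2 tag size is a 4-byte "synchsafe" integer (7 bits per byte)
        # stored at offsets 6-9, not counting the 10-byte header itself.
        size = (data[6] << 21) | (data[7] << 14) | (data[8] << 7) | data[9]
        start = 10 + size

    end = len(data)
    if len(data) >= 128 and data[-128:-125] == b"TAG":
        end -= 128  # a trailing ID3v1 tag is always exactly 128 bytes

    return hashlib.sha1(data[start:end]).hexdigest()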
A better (but much more complicated) alternative would be to compare the uncompressed audio data. A little trial and error with known inputs can lead to a decent algorithm. Doing this perfectly will be very hard (if it is possible at all), but if you get something that's more than 50% accurate, it'll be better than going through everything by hand.
Note that even an algorithm that can detect whether two songs are close (say, the same song ripped with different parameters) would have to be far more complex than it's worth to tell whether a live version is anything like a studio version. If you can do that, there's money to be made here!
Coming back to the original question of how quickly you can tell whether two files are duplicates: a hash would be a lot faster, but also a lot less accurate than a purpose-built comparison algorithm. It's a trade-off between speed, accuracy and complexity.

md5 collision database?

I'm writing a file system deduper. The first pass generates md5 checksums, and the second pass compares the files with identical checksums.
Is there a collection of strings which differ but generate identical md5 checksums I can incorporate into my test case collection?
Update: mjv's answer points to these two files, perfect for my test case.
http://www.win.tue.nl/~bdeweger/CollidingCertificates/MD5Collision.certificate1.cer
http://www.win.tue.nl/~bdeweger/CollidingCertificates/MD5Collision.certificate2.cer
You can find a couple of different X.509 certificate files with the same MD5 hash at this url.
I do not know of any repository of MD5-colliding files, but you can probably create your own, using the executables and/or the techniques described on Vlastimil Klima's page on MD5 collisions.
Indeed, MD5 has been known for its weakness with regard to collision resistance; however, I wouldn't disqualify it for a project such as your file system deduper. You may just want to add a couple of additional criteria (which can be very cheap, computationally speaking) to further decrease the chance of false positives.
Alternatively, for test purposes, you may simply modify your MD5 compare logic so that it deems some MD5 values identical even though they are not (say if the least significant byte of the MD5 matches, or systematically, every 20 comparisons, or at random ...). This may be less painful than having to manufacture effective MD5 "twins".
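As a sketch of that test hook (a made-up example, not production logic), the compare function could deliberately treat digests as equal when only their last hex byte matches:

def md5_equal(digest_a, digest_b, simulate_collisions=False):
    if simulate_collisions:
        # Test mode: pretend two digests collide whenever their last byte matches.
        return digest_a[-2:] == digest_b[-2:]
    return digest_a == digest_b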
http://www.nsrl.nist.gov/ might be what you want.

Are there algorithms for putting a digest into the file being digested?

In other words, are there algorithms or libraries for this? Is it even possible to have a hash/digest of a file contained in the file being hashed/digested? This would be handy for obvious reasons, such as built-in digests of ISOs. I've tried googling things like "MD5 injection" and "digest in a file of a file." No luck (probably for good reason).
Not sure if it is even mathematically possible. It seems you'd be able to roll through the file, but then you'd have to brute-force the last bit (assuming the digest was the last thing in the file or object).
Thanks,
Chenz
It is possible in a limited sense:
Non-cryptographically-secure hashes
You can do this with insecure hashes like the CRC family of checksums.
Maclean's gzip quine
Caspian Maclean created a gzip quine, which decompresses to itself. Since the Gzip format includes a CRC-32 checksum (see the spec here) of the uncompressed data, and the uncompressed data equals the file itself, this file contains its own hash. So it's possible, but Maclean doesn't specify the algorithm he used to generate it:
It's quite simple in theory, but the helper programs I used were on a hard disk that failed, and I haven't set up a new working linux system to run them on yet. Solving the checksum by hand in particular would be very tedious.
Cox's gzip, tar.gz, and ZIP quines
Russ Cox created 3 more quines in Gzip, tar.gz, and ZIP formats, and wrote up in detail how he created them in an excellent article. The article covers how he embedded the checksum: brute force—
The second obstacle is that zip archives (and gzip files) record a CRC32 checksum of the uncompressed data. Since the uncompressed data is the zip archive, the data being checksummed includes the checksum itself. So we need to find a value x such that writing x into the checksum field causes the file to checksum to x. Recursion strikes back.
The CRC32 checksum computation interprets the entire file as a big number and computes the remainder when you divide that number by a specific constant using a specific kind of division. We could go through the effort of setting up the appropriate equations and solving for x. But frankly, we've already solved one nasty recursive puzzle today, and enough is enough. There are only four billion possibilities for x: we can write a program to try each in turn, until it finds one that works.
He also provides the code that generated the files.
(See also Zip-file that contains nothing but itself?)
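To make the brute-force idea concrete, a naive sketch of the search Cox describes: patch each candidate value into the (hypothetical) checksum field and keep the one for which the CRC32 of the patched file equals that value. Iterating over all four billion candidates in pure Python would be far too slow in practice; this only illustrates the idea.

import struct
import zlib

def find_self_checksum(data, offset):
    # `data` is the file contents and `offset` is where the little-endian
    # CRC32 field lives inside it (both are assumptions for this sketch).
    buf = bytearray(data)
    for x in range(2**32):
        buf[offset:offset + 4] = struct.pack("<I", x)
        if zlib.crc32(bytes(buf)) == x:
            return x
    return None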
Cryptographically-secure digests
With a cryptographically-secure hash function, this shouldn't be possible without either breaking the hash function (particularly, a secure digest should make it "infeasible to generate a message that has a given hash"), or applying brute force.
But these hashes are much longer than 32 bits, precisely in order to deter that sort of attack. So you can write a brute-force algorithm to do this, but unless you're extremely lucky you shouldn't expect it to finish before the universe ends.
MD5 is broken, so it might be easier
The MD5 algorithm is seriously broken, and a chosen-prefix collision attack is already practical (as used in the Flame malware's forged certificate; see http://www.cwi.nl/news/2012/cwi-cryptanalist-discovers-new-cryptographic-attack-variant-in-flame-spy-malware, http://arstechnica.com/security/2012/06/flame-crypto-breakthrough/). I don't know of what you want having actually been done, but there's a good chance it's possible. It's probably an open research question.
For example, this could be done using a chosen-prefix preimage attack, choosing the prefix equal to the desired hash, so that the hash would be embedded in the file. A preimage attack is more difficult than collision attacks, but there has been some progress towards it. See Does any published research indicate that preimage attacks on MD5 are imminent?.
It might also be possible to find a fixed point for MD5; inserting a digest is essentially the same problem. For discussion, see md5sum a file that contain the sum itself?.
Related questions:
Is there any x for which SHA1(x) equals x?
Is a hash result ever the same as the source value?
The only way to do this is if you define your file format so the hash only applies to the part of the file that doesn't contain the hash.
However, including the hash inside a file (like built into an ISO) defeats the whole security benefit of the hash. You need to get the hash from a different channel and compare it with your file.
No, because that would mean that the hash would have to be a hash of itself, which is not possible.
