Combining two GUID/UUIDs with MD5, any reasons this is a bad idea?

I am faced with the need to derive a single ID from N IDs. At first I had a complex table in my database with FirstID, SecondID, and a varbinary(MAX) holding the remaining IDs. While that technically works, it is painful, slow, and centralized, so I came up with this.
A simple version in C#:
// requires: using System; using System.Security.Cryptography;
Guid idA = Guid.NewGuid();
Guid idB = Guid.NewGuid();

// concatenate the two 16-byte GUIDs into one 32-byte buffer
byte[] data = new byte[32];
idA.ToByteArray().CopyTo(data, 0);
idB.ToByteArray().CopyTo(data, 16);

// hash the 32 bytes back down to 16 and use them as the new GUID
byte[] hash = MD5.Create().ComputeHash(data);
Guid newID = new Guid(hash);
A proper version would sort the IDs, support more than two of them, and probably reuse the MD5 object, but the snippet above should be faster to understand.
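For concreteness, this is roughly what I mean by the proper version (purely an illustrative sketch; the method name is made up):
// sort the IDs so argument order does not matter, accept N of them, reuse one MD5 instance
static Guid CombineIds(params Guid[] ids)
{
    Array.Sort(ids);                              // Guid implements IComparable
    byte[] data = new byte[ids.Length * 16];
    for (int i = 0; i < ids.Length; i++)
        ids[i].ToByteArray().CopyTo(data, i * 16);

    using (var md5 = MD5.Create())
        return new Guid(md5.ComputeHash(data));
}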
Security is not a factor here; none of the IDs are secret. I only mention it because everyone I talk to reacts badly when they hear MD5. MD5 is particularly convenient for this because it outputs 128 bits, which can be converted directly into a new Guid.
It seems to me that this should be just fine. While I may increase the odds of a Guid collision, it still seems like I could do this until the sun burns out and be nowhere near a practical issue.
However, I have no clue how MD5 is actually implemented and may have overlooked something significant, so my question is this: is there any reason this approach would cause problems? (Assume fewer than a trillion records, and ideally the output IDs should be just as global/universal as the input IDs.)

My first thought is that you would not be generating a true UUID. You would end up with an arbitrary set of 128 bits, but a UUID is not an arbitrary set of bits. See the 'M' and 'N' callouts in the Wikipedia page. I don't know whether this is a concern in practice or not. Perhaps you could manipulate a few bits (the 13th and 17th hex digits) inside your MD5 output to transform the hash output into a true UUID, as mentioned in this description of Version 4 UUIDs.
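For example, a minimal sketch of that bit twiddling in C# (the byte positions assume .NET's mixed-endian Guid layout, where byte 7 of the array carries the version nibble and byte 8 the variant bits):
byte[] hash = MD5.Create().ComputeHash(data);

// force the version nibble to 4 (the 13th hex digit of the canonical string)...
hash[7] = (byte)((hash[7] & 0x0F) | 0x40);
// ...and set the RFC 4122 variant bits (the 17th hex digit becomes 8, 9, A or B)
hash[8] = (byte)((hash[8] & 0x3F) | 0x80);

Guid newID = new Guid(hash);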
Another issue: MD5 does not do a great job of distributing generated values across the range of possible outputs; in other words, some output values are more likely to occur than others. Or, as the Wikipedia article puts it, MD5 is not collision resistant.
Nevertheless, as you pointed out, the chance of a collision is probably negligible in practice.
I might be tempted to try to increase the entropy by repeating your combined value to create a much longer input to the MD5 function. In your example code, take that 32-octet value and use it repeatedly to create a value 10 or 1,000 times longer (320 octets, 32,000, or whatever).
In other words, working with hex strings here for my own convenience instead of the octets of your example, given these two UUIDs:
78BC2A6B-4F03-48D0-BB74-051A6A75CCA1
FCF1B8E4-5548-4C43-995A-8DA2555459C8
…instead of feeding this to the MD5 function:
78BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C8
…feed this:
78BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C8
…or something repeated even longer.
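If you wanted to try that, one way to build the repeated input from the example's 32-byte buffer (just a sketch; the repeat count is arbitrary):
// repeat the 32-byte combined value 1,000 times before hashing
byte[] longData = new byte[data.Length * 1000];
for (int i = 0; i < 1000; i++)
    data.CopyTo(longData, i * data.Length);
byte[] hash = MD5.Create().ComputeHash(longData);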

Related

Is it possible to get the original UUID1 from a timestamp?

Say, for example, I generate a time based UUID with the following program.
import uuid
uuid = uuid.uuid1()
print uuid
print uuid.time
I get the following:
47702997-155d-11ea-92d3-6030d48747ec
137946228962896279
Can I get back the original UUID, that is 47702997-155d-11ea-92d3-6030d48747ec, if I know the timestamp (137946228962896279)?
I am reading about UUID version 1 and found a few programs that kind of try to reverse it, but every time I get a different UUID.
The parts that keep changing are part of the timestamp (the last 4 digits of the first block, 47702997) and the clock_seq (92d3).
If it is possible to get back the original UUID, what would I need?
Any help/direction is greatly appreciated.
I also made a post in Security Stackexchange but later realized that this question should have been posted here.
The more I look into it, the more it seems this is not possible at all, since the timestamp carries no information about the clock_seq; please correct me if I am wrong.
A UUIDv1 contains two main pieces: a temporally unique part (aka timestamp) and a spatially unique part. By design, the two are completely independent, so if you throw one of the pieces away, there is no way to recover it later from the other one.
More generally, notice how the entire UUID is 32 hexadecimal digits, or 128 bits of information, but the timestamp is 18 decimal digits, or only about 60 bits of information. Even without knowing much about UUIDs you could guess that some of the bits are redundant or fixed (and a few are fixed, or at least guessable), but over half of them? Not likely, which means this translation is not reversible.
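To illustrate the one direction that does work, here is a small sketch that pulls the 60-bit timestamp back out of the example UUID; going the other way would also require the clock_seq and node, which the timestamp alone simply does not contain:
// UUIDv1 string layout: time_low (8 hex) - time_mid (4) - version+time_hi (4) - clock_seq (4) - node (12)
string s = "47702997-155d-11ea-92d3-6030d48747ec";
string[] parts = s.Split('-');

ulong timeLow = Convert.ToUInt64(parts[0], 16);
ulong timeMid = Convert.ToUInt64(parts[1], 16);
ulong timeHi  = Convert.ToUInt64(parts[2], 16) & 0x0FFF;   // strip the version nibble

ulong timestamp = (timeHi << 48) | (timeMid << 32) | timeLow;
Console.WriteLine(timestamp);   // 137946228962896279 (100 ns ticks since 1582-10-15)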

Best way to store redis keys

I am using Redis to store some information and detect changes in that information over time (for example, think users and locations). What is the value of using a longer or shorter key name? Using a longer key is clearer, but is there much memory or performance cost to longer key names?
Here are examples:
SET L:123456 "<name> <latitude> <longitude> ..."
HSET U:987654321 loc 123456 time <epoch>
or
SET loc:{123456} "<name> <latitude> <longitude> ..."
HSET user:{U987654321} loc 123456 time <epoch>
It all depends on how you are going to use it.
If every byte counts, for example when you have to pay for each kB transferred to a cloud service, you can calculate the costs. The maths is simple: a byte is a byte 'on the wire'. Inside Redis, for larger values it is equally simple; for smaller values, Redis does some memory optimization.
In your HSET example, you split out the members, which only makes sense if you need them separated from each other most of the time. A better approach might be: HSET user:data 987654321 '{"loc": "123456", "time": "2014-01-01T13:00:00"}'. Separate keys/members 'cost' a lot more than longer strings, performance-wise. You can even put a whole table or dataset in one member if it's only going to be used as one complete semi-static entity.
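A minimal sketch of that combined-value approach from C# (the StackExchange.Redis client and the key names are just illustrative choices):
// using StackExchange.Redis;
var redis = ConnectionMultiplexer.Connect("localhost");
var db = redis.GetDatabase();

// one hash holds all users; the field is the user id, the value is a small JSON blob
db.HashSet("user:data", "987654321",
           "{\"loc\": \"123456\", \"time\": \"2014-01-01T13:00:00\"}");

string json = db.HashGet("user:data", "987654321");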
Speed and Size: There is a notable difference between keys and values.
Keys:
Shorter is generally more memory efficient as well as speed efficient. If you use a Redis Sorted Set you can even use 'numbers' as keys (sorted set 'members' plus 'scores'). I say 'numbers' because a score is technically a float64, but to be used as an ID it has to be between -999999999999999 and 999999999999999 inclusive (that's 15 digits), without any fractional part. This can be really helpful, since Redis does fast and scalable O(log(n)) on-the-fly sorting of Sorted Sets (using skiplists, simplified).
Values:
The MsgPack format (uncompressed) takes up the least space, especially if you store the definitions once and the values many times. JSON is a bit less memory efficient, but is of course such a common IPC format that it should not be left out. Raw strings, character-separated, fixed length (ugh), whatever you desire, it's possible to use. You can always compress your data before storing it in Redis. So much for memory efficiency; when it comes to speed, it's less simple. If you want to use Lua server-side scripting (which you should), you can't do anything with compressed data. JSON and MsgPack can be deserialized, but only 'as a whole', which is fine in most scenarios. Most flexible is storing separate values (for example as members of an HSET), but this comes at a price as well (most of the time: too high a price). You can also combine all of these. What we use most: a prefix of two or three delimiter-separated values, followed by a MsgPack payload.
My general advice is: start with using only HSET's and ZSET's, don't split out data that belongs together, use descriptive PascalCased names for your keys between 10-25 chars, use ':' if you need delimiters in your keys (namespaces), serialize as JSON (for simplicity, but code for easy switching to MsgPack), use Lua scripting (even if you don't know Lua, the subset you use in Redis is tiny).
I wouldn't worry about it too much in the startup phase of your project, you can always change it later on and do some A/B comparisons as soon as you have some interpolatable data.
Hope this helps, TW
Now that Redis v3.2 is almost here, you should consider switching to the new geo hashing functionality: http://redis.io/commands/geoadd

Good hash function for cache memory?

I would like to implement a hash function for a cache memory. I have 20 bits of input and I need to hash this input down to 7 bits.
My cache is 128x4.
I have tried different hash functions, but the results were not very good (I get a 60% hit rate). I was thinking of using the MD5 algorithm, but maybe something else would be better. I read an MD5 implementation online, but I did not understand it.
It seems like a perfectly distributed hash could actually be undesirable here: it opens up the possibility of mapping nearby addresses into the same set.
Perhaps what you want to do is hash 17 bits down to 4, and map the three low-order bits straight through so as to guarantee a minimum distance between instances of the same set.
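A sketch of that idea; the XOR-fold used to reduce the upper 17 bits to 4 is just one simple choice of hash:
// addr is a 20-bit address; produce a 7-bit set index for a 128-set cache
static int CacheIndex(int addr)
{
    int low   = addr & 0x7;              // low 3 bits pass straight through
    int upper = (addr >> 3) & 0x1FFFF;   // remaining 17 bits

    // XOR-fold the 17 bits down to 4 (any well-mixed 17 -> 4 hash would do)
    int folded = upper;
    folded ^= folded >> 4;
    folded ^= folded >> 8;
    folded &= 0xF;

    return (folded << 3) | low;          // 4 hashed bits + 3 pass-through bits = 7-bit index
}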

hashing function guaranteed to be unique?

In our app we're going to be handed PNG images along with a ~200 character byte array. I want to save the image with a filename corresponding to that byte array, but not the byte array itself, as I don't want 200-character filenames. So what I thought was that I would save the byte array in the database and then MD5 it to get a short filename. When it comes time to display a particular image, I look up its byte array, MD5 it, then look for that file.
So far so good. The problem is that potentially two different byte arrays could hash down to the same MD5, and then one file would effectively overwrite another. Or could they? I guess my questions are:
Could two ~200 char bytearrays MD5-hash down to the same string?
If they could, is it a once-per-10-ages-of-the-universe sort of deal or something that could conceivably happen in my app?
Is there a hashing algorithm that will produce a (say) 32 char string that's guaranteed to be unique?
It's logically impossible to get a 32-byte code from a 200-byte source that is unique among all possible 200-byte sources, since you can store more information in 200 bytes than in 32 bytes.
The only exception would be if the information stored in those 200 bytes would also fit into 32 bytes, in which case your source data format would be extremely inefficient and space-wasting.
When hashing (as opposed to encrypting), you're reducing the information space of the data being hashed, so there's always a chance of a collision.
The best you can hope for in a hash function is that all hashes are evenly distributed in the hash space and your hash output is large enough to provide your "once-per-10-ages-of-the-universe sort of deal" as you put it!
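As a back-of-envelope check on how unlikely an accidental collision is (a rough birthday-bound estimate, with a made-up figure of a billion images):
// P(collision) ≈ n^2 / 2^(b+1) for n random items and a b-bit hash
double n = 1e9;                        // say, a billion images
double bits = 128;                     // MD5 digest size
double p = n * n / Math.Pow(2, bits + 1);
Console.WriteLine(p);                  // ≈ 1.5e-21, i.e. effectively never by accident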
So whether a hash is "good enough" for you depends on the consequences of a collision. You could always add a unique id to a checksum/hash to get the best of both worlds.
Why don't you use a unique ID from your database?
The probability that two hashes collide depends on the hash size. MD5 produces a 128-bit hash, so among 2^128 + 1 hashes there is guaranteed to be at least one collision.
That number is 2^160 + 1 for SHA-1 and 2^512 + 1 for SHA-512.
The general rule applies here: more output bits means more uniqueness but also more computation, so there is a trade-off. What you have to do is choose the optimal one for your case.
Could two ~200 char bytearrays MD5-hash down to the same string?
Considering that there are far more possible 200-byte strings than 128-bit MD5 digests, that is guaranteed to be the case.
All hash functions have that problem, but some are more robust than MD5. Try SHA-1; Git uses it for the same purpose.
It may happen that two MD5 hashes collide (are the same). In 1996, a flaw was found in the MD5 algorithm, and cryptanalysts advised switching to the SHA-1 hashing algorithm.
So I will advise you to switch to SHA-1 (40 hex characters). But do not worry: I doubt that your two pictures will get the same hash. I think you can accept this risk in your application.
As others have said before, a hash doesn't give you what you need unless you are fine with the risk of collision.
A database is helpful here.
You get a unique index for each 200-character string, with no collisions. Index the 200-character column; this uses extra memory, but it keeps the values sorted and makes lookups very fast. The unique ID can then be used directly for the filename.
I haven't worked much with hashing algorithms, but as I understand it there is always a chance of collision: two different objects may hash to the same value, though a given object is guaranteed to hash to the same value every time. There are other techniques that can be used to handle this, such as linear probing.

How to crack a weakened TEA block cipher?

At the moment I am trying to crack the TEA block cipher in C. It is an assignment, and the TEA cipher has been weakened so that the key is two 16-bit numbers.
We have been given the code to encode plaintext with the key and to decode the ciphertext with the key as well.
I have some plaintext examples:
plaintext(1234,5678) encoded (3e08,fbab)
plaintext(6789,dabc) encoded (6617,72b5)
Update
The encode method takes plaintext and a key, encode(plaintext, key1). The result is encoded again with another key, encode(ciphertext1, key2), which produces the encoded values above, (3e08,fbab) and (6617,72b5).
How would I go about cracking this cipher?
At the moment, I encode the known plaintext with every possible key; the key space runs up to the hex value ffffffff. I write the results to a file.
But now I am stuck and in need of direction.
How could I use TEA's equivalent-key weakness to lower the amount of time it would take to crack the cipher? I am also going to use a meet-in-the-middle attack:
I will encrypt the known plaintext with every possible key1, storing each ciphertext together with its key in a table.
Then I will decrypt the known ciphertext from my assignment with every possible value of key2, which leaves me with a table of values that have each been decrypted once.
I can then compare the two tables to see whether any encryption under key1 matches a decryption under key2. A rough sketch of what I mean is below.
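Rough sketch of that meet-in-the-middle idea (Encrypt/Decrypt, knownPlaintext and knownCiphertext are placeholders for the weakened TEA routines and test vectors we were given):
// forward table: encrypt the known plaintext under every possible key1
var table = new Dictionary<uint, ushort>();        // middle value -> key1
for (uint k1 = 0; k1 <= 0xFFFF; k1++)
    table[Encrypt(knownPlaintext, (ushort)k1)] = (ushort)k1;

// backward pass: decrypt the known ciphertext under every possible key2 and look for a match
for (uint k2 = 0; k2 <= 0xFFFF; k2++)
{
    uint middle = Decrypt(knownCiphertext, (ushort)k2);
    if (table.TryGetValue(middle, out ushort k1))
        Console.WriteLine($"candidate keys: key1={k1:x4}, key2={k2:x4}");
}
// verify candidates against the second plaintext/ciphertext pair to weed out false matches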
I would like to use the equivalent-key weakness as well; if someone could help me implement this in code, that would be great. Any ideas?
This is eerily similar to the Double Crypt problem from the IOI 2001 programming contest. The general solution is shown here; it won't give you the code, but it might point you in the right direction.
Don't write your results to a file -- just compare each ciphertext you produce to the known ciphertext, encoding the known plaintext with every possible key until one of them produces the right ciphertext. At that point, you've used the correct key. Verify that by encrypting the second known plaintext with the same key to check that it produces the correct output as well.
Edit: the encoding occurring twice is of little consequence. You still get something like this:
for (test_key = 0; test_key < max; test_key++)
    if (encrypt(plaintext, test_key) == ciphertext)
        std::cout << "Key = " << test_key << "\n";
The encryption occurring twice means your encrypt would look something like:
return TEA_encrypt(TEA_encrypt(plaintext, key), key);
Edit2: okay, based on the edited question, you apparently have to do the weakened TEA twice, each with its own 16-bit key. You could do that with a single loop like above, and split up the test_key into two independent 16-bit keys, or you could do a nested loop, something like:
for (test_key1 = 0; test_key1 <= 0xffff; test_key1++)
    for (test_key2 = 0; test_key2 <= 0xffff; test_key2++)
        if (encrypt(encrypt(plaintext, test_key1), test_key2) == ciphertext) {
            // we found the keys
        }
I am not sure whether this property holds for 16-bit keys, but 128-bit TEA keys have the property that every key belongs to a set of four equivalent keys, reducing your search space four-fold. I do not remember off the top of my head how to find equivalent keys, only that the key space is not as large as it appears. This also means TEA is susceptible to a related-key attack.
You tagged this as homework, so I am not sure if there are other requirements here, like not using brute force, which is what it appears you are attempting. If you were to go for a brute-force attack, you would probably need to know what the plaintext should look like (knowing that it is English, for example).
The equivalent keys are easy enough to understand, and they cut the key space by a factor of four. The key is split into four parts. Each cycle of TEA has two rounds: the first uses the first two parts of the key, while the second uses the third and fourth parts. Here is a diagram of a single cycle (two rounds) of TEA:
(Unregistered users are not allowed to include images, so here is a link.)
https://en.wikipedia.org/wiki/File:TEA_InfoBox_Diagram.png
Note: green boxes are addition; red circles are XOR.
TEA operates on blocks, which it splits into two halves. During each round, one half of the block is shifted left by 4 bits, left unshifted, or shifted right by 5 bits; a segment of the key or the round constant is added to each of these copies; and then the XOR of the resulting values is added to the other half of the block. Flipping the most significant bit of either key segment flips the same bit in the sum it is used in, and by extension in the XOR result, but has no other effect. Flipping the most significant bit of both key segments used in a round flips that bit in the XOR product twice, leaving it unchanged. Flipping those two bits together therefore does not change the block cipher's output, making the flipped key equivalent to the original. This can be done independently for the (first, second) and (third, fourth) key segments, reducing the effective number of keys by a factor of four.
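A small demonstration of that property, using the standard 128-bit-key TEA round function (a sketch; flipping the top bit of both k0 and k1, or of both k2 and k3, leaves the ciphertext unchanged):
static void TeaEncrypt(ref uint v0, ref uint v1, uint[] k)
{
    uint sum = 0, delta = 0x9E3779B9;
    for (int i = 0; i < 32; i++)
    {
        sum += delta;
        v0 += ((v1 << 4) + k[0]) ^ (v1 + sum) ^ ((v1 >> 5) + k[1]);
        v1 += ((v0 << 4) + k[2]) ^ (v0 + sum) ^ ((v0 >> 5) + k[3]);
    }
}

uint[] key  = { 0x01234567, 0x89ABCDEF, 0xDEADBEEF, 0xCAFEBABE };
uint[] key2 = { key[0] ^ 0x80000000, key[1] ^ 0x80000000, key[2], key[3] };  // MSBs of k0, k1 flipped

uint a0 = 0x12345678, a1 = 0x9ABCDEF0, b0 = a0, b1 = a1;
TeaEncrypt(ref a0, ref a1, key);
TeaEncrypt(ref b0, ref b1, key2);
Console.WriteLine(a0 == b0 && a1 == b1);   // True: the two keys encrypt identically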
Given the (modest) size of your encryption key, you can afford to build a pre-calculated table: use the same code given above and store the data in large chunks of memory. If you don't have enough RAM, dump the chunks to disk and keep an addressing scheme so you can look them up in the proper order.
Doing this lets you cover the whole domain, and finding a solution then becomes a single table lookup done in real time.
The same trick (key truncation) was used not so long ago in leading Office software. They now use non-random data to generate the encryption keys, which (at best) leads to the same result. In practice, the ability to know encryption keys before they are generated (because the so-called random generator is predictable) is even more desirable to an attacker than key truncation: it leads to the same result, but without the hurdle of having to build and store rainbow tables.
This is called the march of progress...
