How to crack a weakened TEA block cipher? - c

At the moment I am trying to crack the TEA block cipher in C. It is an assignment, and the TEA cipher has been weakened so that the key is two 16-bit numbers.
We have been given the code to encode plaintext with the key and to decode the ciphertext with the key as well.
I have some known-plaintext examples:
plaintext(1234,5678) encoded (3e08,fbab)
plaintext(6789,dabc) encoded (6617,72b5)
Update
The encode method takes in plaintext and a key: encode(plaintext, key1). This occurs again with another key to create the final encoded message: encode(ciphertext1, key2), which produces the encoded (3e08,fbab) or encoded (6617,72b5).
How would I go about cracking this cipher?
At the moment, I encode the known plaintext with every possible key, with the key running up to hex value ffffffff. I write this to a file.
But now I am stuck and in need of direction.
How could I use TEA's weakness of equivalent keys to lower the amount of time it would take to crack the cipher? Also, I am going to use a meet-in-the-middle attack.
When I encode the known plaintext with all possible values of key1, it will create all the intermediate encrypted texts with their associated keys, stored in a table.
Then I will decrypt the known ciphertext from my assignment with all the possible values of key2. This will leave me with a table of texts that have been decrypted only once.
I can then compare the two tables to see if any of the encryptions under key1 match the decryptions under key2.
I would like to use the equivalent-key weakness as well; if someone could help me with implementing this in code, that would be great. Any ideas?
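A sketch of the meet-in-the-middle table approach I have in mind. Note that stage_encrypt/stage_decrypt below are a hypothetical stand-in single-stage cipher invented for illustration, not the assignment's actual weakened TEA; the sort-one-table-and-probe-it structure is the part that carries over:

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical single-stage 32-bit block cipher with a 16-bit key,
   standing in for one pass of the assignment's weakened TEA. */
static uint32_t stage_encrypt(uint32_t p, uint16_t k) {
    uint32_t x = p ^ (((uint32_t)k << 16) | k);  /* mix key in */
    x = (x << 7) | (x >> 25);                    /* rotate left 7 */
    return x + (0x9E3779B9u + k);
}

static uint32_t stage_decrypt(uint32_t c, uint16_t k) {
    uint32_t x = c - (0x9E3779B9u + k);
    x = (x >> 7) | (x << 25);                    /* rotate right 7 */
    return x ^ (((uint32_t)k << 16) | k);
}

typedef struct { uint32_t ct; uint16_t key; } entry;

static int cmp_entry(const void *a, const void *b) {
    uint32_t x = ((const entry *)a)->ct, y = ((const entry *)b)->ct;
    return (x > y) - (x < y);
}

/* Meet-in-the-middle: roughly 2 * 2^16 cipher calls instead of 2^32.
   Uses a second known pair (p2, c2) to weed out false matches.
   Returns 1 and fills k1/k2 on success, 0 otherwise. */
int meet_in_middle(uint32_t p1, uint32_t c1, uint32_t p2, uint32_t c2,
                   uint16_t *k1, uint16_t *k2) {
    enum { KEYS = 0x10000 };
    entry *table = malloc(KEYS * sizeof *table);
    if (!table) return 0;
    for (uint32_t k = 0; k < KEYS; k++) {        /* forward half */
        table[k].ct = stage_encrypt(p1, (uint16_t)k);
        table[k].key = (uint16_t)k;
    }
    qsort(table, KEYS, sizeof *table, cmp_entry);
    for (uint32_t k = 0; k < KEYS; k++) {        /* backward half */
        entry probe = { stage_decrypt(c1, (uint16_t)k), 0 };
        entry *hit = bsearch(&probe, table, KEYS, sizeof *table, cmp_entry);
        if (!hit) continue;
        /* bsearch may land anywhere in a run of equal ciphertexts */
        while (hit > table && (hit - 1)->ct == probe.ct) hit--;
        for (entry *e = hit; e < table + KEYS && e->ct == probe.ct; e++) {
            if (stage_encrypt(stage_encrypt(p2, e->key), (uint16_t)k) == c2) {
                *k1 = e->key; *k2 = (uint16_t)k;
                free(table);
                return 1;
            }
        }
    }
    free(table);
    return 0;
}
```

With 16-bit stage keys this does about 2 * 2^16 cipher evaluations plus one sort, instead of the 2^32 double encryptions of a naive nested loop.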

This is eerily similar to the Double Crypt problem from the IOI 2001 programming contest. The general solution is shown here; it won't give you the code, but it might point you in the right direction.

Don't write your results to a file -- just compare each ciphertext you produce to the known ciphertext, encoding the known plain text with every possible key until one of them produces the right ciphertext. At that point, you've used the correct key. Verify that by encrypting the second known plaintext with the same key to check that it produces the correct output as well.
Edit: the encoding occurring twice is of little consequence. You still get something like this:
for (test_key = 0; test_key < max; test_key++)
    if (encrypt(plaintext, test_key) == ciphertext)
        std::cout << "Key = " << test_key << "\n";
The encryption occurring twice means your encrypt would look something like:
return TEA_encrypt(TEA_encrypt(plaintext, key), key);
Edit2: okay, based on the edited question, you apparently have to do the weakened TEA twice, each with its own 16-bit key. You could do that with a single loop like above, and split up the test_key into two independent 16-bit keys, or you could do a nested loop, something like:
for (test_key1 = 0; test_key1 <= 0xffff; test_key1++)
    for (test_key2 = 0; test_key2 <= 0xffff; test_key2++)
        if (encrypt(encrypt(plaintext, test_key1), test_key2) == ciphertext)
            // we found the keys.

I am not sure if this property holds for 16-bit keys, but 128-bit keys have the property that four keys are equivalent, reducing your search space by four-fold. I do not off the top of my head remember how to find equivalent keys, only that the key space is not as large as it appears. This means that it's susceptible to a related-key attack.
You tagged this as homework, so I am not sure if there are other requirements here, like not using brute force, which it appears you are attempting to do. If you were to go for a brute-force attack, you would probably need to know what the plaintext should look like (like knowing it is English, for example).

The equivalent keys are easy enough to understand and cut key space by a factor of four. The key is split into four parts. Each cycle of TEA has two rounds. The first uses the first two parts of the key while the second uses the 3rd and 4th parts. Here is a diagram of a single cycle (two rounds) of TEA:
(unregistered users are not allowed to include images so here's a link)
https://en.wikipedia.org/wiki/File:TEA_InfoBox_Diagram.png
Note: green boxes are addition, red circles are XOR.
TEA operates on blocks, which it splits into two halves. During each round, three copies are made of one half of the block: one shifted left by 4 bits, one unshifted, and one shifted right by 5 bits. Each copy has either a key segment or the round constant added to it, and the XOR of the resulting values is added to the other half of the block. Flipping the most significant bit of either key segment flips the same bit in the sum it is used for, and by extension in the XOR result, but has no other effect. Flipping the most significant bit of both key segments used in a round flips the same bit in the XOR product twice, leaving it unchanged. Flipping those two bits together therefore doesn't change the block cipher's result, making the flipped key equivalent to the original. This can be done for both the (first, second) and (third, fourth) key segments, reducing the effective number of keys by a factor of four.
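This cancellation is easy to check against a reference implementation of full TEA (sketch below; the weakened assignment cipher may behave differently): flipping the top bit of both key words used in the same round leaves every ciphertext unchanged.

```c
#include <stdint.h>

/* Reference TEA encryption: 64-bit block v[2], 128-bit key k[4]. */
void tea_encrypt(uint32_t v[2], const uint32_t k[4]) {
    uint32_t v0 = v[0], v1 = v[1], sum = 0;
    const uint32_t delta = 0x9E3779B9u;
    for (int i = 0; i < 32; i++) {       /* 32 cycles = 64 rounds */
        sum += delta;
        v0 += ((v1 << 4) + k[0]) ^ (v1 + sum) ^ ((v1 >> 5) + k[1]);
        v1 += ((v0 << 4) + k[2]) ^ (v0 + sum) ^ ((v0 >> 5) + k[3]);
    }
    v[0] = v0; v[1] = v1;
}
```

Adding 0x80000000 to a 32-bit word flips only its top bit, so XORing 0x80000000 into both k[0] and k[1] flips the top bit of the XOR product twice, and the ciphertext is identical; flipping it in only one key word changes the output.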

Given the (modest) size of your encryption key, you can afford to create a pre-calculated table (use the same code given above, and store the data in large chunks of memory; if you don't have enough RAM, dump the chunks to disk and keep an addressing scheme so you can look them up in the proper order).
Doing this will let you cover the whole domain, and finding a solution will then be done in real time (one single table lookup).
The same trick (key truncation) was used not long ago in leading office software. They now use non-random data to generate the encryption keys, which (at best) leads to the same result. In practice, the ability to know encryption keys before they are generated (because the so-called random generator is predictable) is even more desirable than key truncation: it leads to the same result, but without the hurdle of having to build and store rainbow tables.
This is called the march of progress...

Related

How can we turn a SHA-256 hash of a passphrase into an EC_KEY private key?

I have been practicing C and OpenSSL recently, and I notice this is the common way to create an EC_KEY:
EC_KEY *eckey = EC_KEY_new();
EC_GROUP *ecgroup= EC_GROUP_new_by_curve_name(NID_secp192k1);
int set_group_status = EC_KEY_set_group(eckey,ecgroup);
int gen_status = EC_KEY_generate_key(eckey);
This method generates the EC_KEY from a random integer. Is there any code by which we can take a SHA-256 hash of a passphrase and make it the private key of the EC_KEY we just created, since I read that an EC_KEY private key has the same format as a SHA-256 hash?
//Example
const char *exam = "somewhere over the rainbow";
unsigned char output[32];
SHA256((const unsigned char *)exam, strlen(exam), output);
Not directly for that curve.
An ECC private key is actually a random integer less than the order of the base point, or equivalently the order of the group generated by the base point.
Although it is not true for all ECC curves (groups), the X9/Certicom/NIST prime curves were generated so that the generated group order is equal to the curve order (formally, cofactor = 1), and the curve order is always close to the underlying field order, which for these curves was chosen very close to 2^N.
Thus a private key for a 256-bit prime curve, like P-256/secp256r1 (commonly used in TLS, and SSH, and some other applications) or secp256k1 (used in Bitcoin and some derivative coins), is almost a random 256-bit string -- close enough that in practice it will work.
Similarly for secp192k1 a random 192-bit string is close enough, and could be generated by taking the first 192 bits of a SHA-256 output (or last, or middle, if you prefer) as long as it was computed on input (your passphrase) having sufficient entropy to provide the desired security.
If by passphrase you mean a phrase chosen by a person, no. There is abundant evidence that people do not choose randomly even when they try to, and passwords and passphrases chosen by people, and not cryptographically 'strengthened' (which your method does not do), are regularly broken. As an example, this was tried in the Bitcoin community a few years ago under the name 'brain wallet', i.e. your private key, giving access to your bitcoins, is in your brain. Many of these keys were broken and the bitcoins stolen.
If you mean a series of words (not really a meaningful phrase) generated randomly by the computer to have sufficient entropy, or by some other process that actually is random like rolling fair dice, then yes. The current standard in Bitcoin for a 'seed phrase' is 12 words from a list of 2048 giving 128 bits of entropy plus 4 bits of redundancy; for your curve you only need 96 bits of entropy so 9 such words would work (although it isn't standard). Numerous other similar schemes have been developed and used over the years. In practice you will probably have to write this 'phrase' down and/or store it somewhere, and then secure that storage appropriately.

Caesar cipher crack using C language

I am writing a program to decrypt text using the Caesar cipher algorithm.
Till now my code is working and gets all possible decrypted results, but I have to show just the correct one. How can I do this?
Below is the code to get all the decrypted strings.
For my input, the answer should be "3 hello world".
#include <stdio.h>
#include <string.h>

int main(void)
{
    char input[] = "gourz#roohk";
    for (int key = 1; key < 26; key++)
    {
        printf("%i ", key);
        /* the ciphertext is stored reversed, so walk it backwards */
        for (int i = strlen(input) - 1; i >= 0; i--)
        {
            printf("%c", input[i] - key);
        }
        printf("\n");
    }
    return 0;
}
Recall that a Caesar Cipher has only 25 possible shifts. Also, for text of non-trivial length, it's highly likely that only one shift will make the input make sense. One possible approach, then, is to see if the result of the shift makes sense; if it does, then it's probably the correct shift (e.g. compare words against a dictionary to see if they're "real" words; not sure if you've done web services yet, but there are free dictionary APIs available).
Consider the following text: 3 uryyb jbeyq. Some possible shifts of this:
3 gdkkn vnqkc (12)
3 xubbe mehbt (3)
3 hello world (13)
3 jgnnq yqtnf (15)
Etc.
As you can see, only the shift of 13 makes this text contain "real" words, so the correct shift is probably 13.
Another possible solution (albeit more complicated) is through frequency analysis (i.e. see if the resulting text has the same - or similar - statistical characteristics as English). For example, in English the most frequent letter is "e," so the correct shift will likely have "e" as the most frequent letter. By way of example, the first paragraph of this answer contains 48 instances of the letter "e", but if you shift it by 15 letters, it only has 8:
Gtrpaa iwpi p Rpthpg Rxewtg wph dcan 25 edhhxqat hwxuih. Pahd, udg
itmi du cdc-igxkxpa atcviw, xi'h wxvwan axztan iwpi dcan dct hwxui
lxaa bpzt iwt xceji bpzt htcht. Dct edhhxqat peegdprw, iwtc, xh id htt
xu iwt gthjai du iwt hwxui bpzth htcht; xu xi sdth, iwtc xi'h egdqpqan
iwt rdggtri hwxui (t.v. rdbepgt ldgsh pvpxchi p sxrixdcpgn id htt xu
iwtn'gt "gtpa" ldgsh; cdi hjgt xu ndj'kt sdct ltq htgkxrth nti, qji
iwtgt pgt ugtt sxrixdcpgn PEXh pkpxapqat).
The key word here is "likely" - it's not at all statistically certain (especially for shorter texts) and it's possible to write text that's resistant to that technique to some degree (e.g. through deliberate misspellings, lipograms, etc.). Note that I actually have an example of an exception above - "3 xubbe mehbt" has more instances of the letter "e" than "3 hello world" even though the second one is clearly the correct shift - so you probably want to apply several statistical tests to increase your confidence (especially for shorter texts).
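Here is a rough sketch of the frequency-analysis approach in C. It scores each candidate shift with a chi-squared statistic against the full English letter distribution, rather than looking only at the single most frequent letter, which (as noted above) can mislead; the frequency table values are approximate published figures:

```c
#include <ctype.h>

/* Approximate English letter frequencies in percent, for 'a'..'z'. */
static const double english_freq[26] = {
    8.17, 1.49, 2.78, 4.25, 12.70, 2.23, 2.02, 6.09, 6.97, 0.15,
    0.77, 4.03, 2.41, 6.75, 7.51, 1.93, 0.10, 5.99, 6.33, 9.06,
    2.76, 0.98, 2.36, 0.15, 1.97, 0.07
};

/* Chi-squared distance between the letter counts of `text` decrypted
   with `shift` and the expected English distribution; lower is better. */
static double chi_squared(const char *text, int shift) {
    int counts[26] = {0}, total = 0;
    for (const char *p = text; *p; p++) {
        if (isalpha((unsigned char)*p)) {
            counts[(tolower((unsigned char)*p) - 'a' - shift + 26) % 26]++;
            total++;
        }
    }
    if (total == 0) return 1e30;
    double chi = 0.0;
    for (int i = 0; i < 26; i++) {
        double expected = english_freq[i] * total / 100.0;
        double diff = counts[i] - expected;
        chi += diff * diff / (expected + 1e-9);
    }
    return chi;
}

/* Try all 26 shifts; return the one whose decryption looks most English. */
int best_caesar_shift(const char *text) {
    int best = 0;
    double best_score = 1e30;
    for (int s = 0; s < 26; s++) {
        double score = chi_squared(text, s);
        if (score < best_score) { best_score = score; best = s; }
    }
    return best;
}
```

As cautioned above, this is probabilistic: for very short texts the chi-squared winner may not be the true shift, so it works best on a reasonable amount of ciphertext.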
To attack a Caesar cipher more quickly, use a frequency-analysis attack: count how often each letter appears in your text and compare the counts against the most common letters in English, listed at this link
( https://www3.nd.edu/~busiforc/handouts/cryptography/letterfrequencies.html )
Then by applying this table to the letters you can get the text back, or use this Python code for letter frequency on GitHub (https://github.com/tombusby/understanding-cryptography-exercises/blob/master/Chapter-01/ex1.2.py).
Brute force is the last resort because it is more costly than frequency analysis; note that for a Caesar cipher brute force means only 25 candidate shifts (the 26! figure applies to a general substitution cipher, where fixing one letter shrinks the remaining search space).
If you want to use your own code, you could keep a file of the most popular English words and search it after every trial decryption, but that has a high time cost, so letter frequency is better.

Combining two GUID/UUIDs with MD5, any reasons this is a bad idea?

I am faced with the need of deriving a single ID from N IDs. At first I had a complex table in my database with FirstID, SecondID, and a varbinary(MAX) with the remaining IDs, and while this technically works it is painful, slow, and centralized, so I came up with this:
simple version in C#:
Guid idA = Guid.NewGuid();
Guid idB = Guid.NewGuid();
byte[] data = new byte[32];
idA.ToByteArray().CopyTo(data, 0);
idB.ToByteArray().CopyTo(data, 16);
byte[] hash = MD5.Create().ComputeHash(data);
Guid newID = new Guid(hash);
Now, a proper version will sort the IDs, support more than two, and probably reuse the MD5 object, but this should be faster to understand.
Security is not a factor in this; none of the IDs are secret. I only mention it because everyone I talk to reacts badly when you say MD5. MD5 is particularly useful for this as it outputs 128 bits and thus can be converted directly to a new Guid.
Now, it seems to me that this should be just dandy: while I may increase the odds of a collision of Guids, it still seems like I could do this till the sun burns out and be nowhere near running into a practical issue.
However, I have no clue how MD5 is actually implemented and may have overlooked something significant, so my question is this: is there any reason this should cause problems? (Assume sub-trillion records, and ideally the output IDs should be just as global/universal as the other IDs.)
My first thought is that you would not be generating a true UUID. You would end up with an arbitrary set of 128 bits. But a UUID is not an arbitrary set of bits. See the 'M' and 'N' callouts in the Wikipedia page. I don't know if this is a concern in practice or not. Perhaps you could manipulate a few bits (the 13th and 17th hex digits) inside your MD5 output to transform the hash output into a true UUID, as mentioned in this description of Version 4 UUIDs.
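As a sketch, forcing the version and variant bits on a 16-byte hash could look like this in C (hash_to_uuid_v4 is a hypothetical helper name; the positions assume RFC 4122's big-endian byte layout, whereas .NET's Guid(byte[]) constructor stores the leading fields in little-endian order, so the affected array indices differ there):

```c
/* Force an arbitrary 16-byte value into RFC 4122 version-4,
   variant-1 layout (big-endian byte view of the UUID). */
void hash_to_uuid_v4(unsigned char b[16]) {
    b[6] = (unsigned char)((b[6] & 0x0F) | 0x40); /* version nibble -> 4 */
    b[8] = (unsigned char)((b[8] & 0x3F) | 0x80); /* variant bits -> 10 */
}
```

Only 6 bits are overwritten, so the remaining 122 bits of the hash are preserved.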
Another issue… MD5 does not do a great job of distributing generated values across the range of possible outputs; some values are more likely to be generated than others. And as the Wikipedia article puts it, MD5 is not collision resistant.
Nevertheless, as you pointed out, probably the chance of a collision is unrealistic.
I might be tempted to try to increase the entropy by repeating your combined value to create a much longer input to the MD5 function. In your example code, take that 32-octet value and use it repeatedly to create a value 10 or 1,000 times longer (320 octets, 32,000, or whatever).
In other words, if working with hex strings for my own convenience here instead of the octets of your example, given these two UUIDs:
78BC2A6B-4F03-48D0-BB74-051A6A75CCA1
FCF1B8E4-5548-4C43-995A-8DA2555459C8
…instead of feeding this to the MD5 function:
78BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C8
…feed this:
78BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C878BC2A6B-4F03-48D0-BB74-051A6A75CCA1FCF1B8E4-5548-4C43-995A-8DA2555459C8
…or something repeated even longer.

Are standard hash functions like MD5 or SHA1 guaranteed to be unique for small input (4 bytes)?

Scenario:
I'm writing a web service that will act as an identity provider for a 3rd-party application. I have to send this 3rd-party application some unique identifier of our user. In our database, the unique user identifier is an integer (4 bytes, 32 bits). Per our security rules I can't send it in plain form, so sending it out hashed (through a function like MD5 or SHA1) was my first idea.
Problem:
The result of MD5 is 16 bytes, the result of SHA1 is 40 bytes. I know they can't be unique for larger input sets, but given the fact that my input set is only 4 bytes long (smaller than the hashed results), are they guaranteed to be unique, or am I doomed to some poor-man's hash function (like XORing the integer input with some number, shifting bits, adding predefined bits, etc.)?
For what you're trying to achieve (preventing a 3rd party from determining your user identifier), a straight MD5 or SHA1 hash is insufficient. 32 bits is about 4 billion values, so it would take less than 2 hours for the 3rd party to brute-force every value (at 1M hashes/sec). I'd really suggest using HMAC-SHA1 instead.
As for collisions, this question has an extremely good answer on their likelihood. tl;dr: for 32 bits of input, the chance of a collision is vanishingly small.
If your user identifiers aren't random (they increment by 1 or there is a known algorithm for creating them), then there's no reason you can't generate every hash to make sure that no collision will occur.
This will check the first 10,000,000 integers for a collision with HMAC-SHA1 (will take about 2 minutes to run):
public static bool checkCollisionHmacSha1(byte[] key){
    HMACSHA1 mac = new HMACSHA1(key);
    // HashSet<byte[]> would compare array references, not contents,
    // so store a string encoding of each hash instead
    HashSet<string> values = new HashSet<string>();
    bool collision = false;
    for (int i = 0; i < 10000000 && !collision; i++){
        byte[] value = BitConverter.GetBytes(i);
        collision = !values.Add(Convert.ToBase64String(mac.ComputeHash(value)));
    }
    return collision;
}
First, SHA1 is 20 bytes not 40 bytes.
Second, although the input is very small, there still may be a collision. It is best to test this, but I do not know a feasible way to do that.
In order to prevent any potential collision:
1 - Hash your input and produce the 16/20 bytes of hash
2 - Spray your actual integer onto this hash, e.g. put a byte of your int every 4/5 bytes.
This will guarantee uniqueness by using the input itself.
Also, take a look at the Collision Column part.

hashing function guaranteed to be unique?

In our app we're going to be handed PNG images along with a ~200-character byte array. I want to save the image with a filename corresponding to that byte array, but not the byte array itself, as I don't want 200-character filenames. So what I thought was that I would save the byte array into the database and then MD5 it to get a short filename. When it comes time to display a particular image, I look up its byte array, MD5 it, then look for that file.
So far so good. The problem is that potentially two different byte arrays could hash down to the same MD5. Then one file would effectively overwrite another. Or could they? I guess my questions are:
Could two ~200 char bytearrays MD5-hash down to the same string?
If they could, is it a once-per-10-ages-of-the-universe sort of deal or something that could conceivably happen in my app?
Is there a hashing algorithm that will produce a (say) 32 char string that's guaranteed to be unique?
It's logically impossible to get a 32-byte code from a 200-byte source that is unique among all possible 200-byte sources, since you can store more information in 200 bytes than in 32 bytes.
The only exception would be if the information stored in those 200 bytes would also fit into 32 bytes, in which case your source data format would be extremely inefficient and space-wasting.
When hashing (as opposed to encrypting), you're reducing the information space of the data being hashed, so there's always a chance of a collision.
The best you can hope for in a hash function is that all hashes are evenly distributed in the hash space and your hash output is large enough to provide your "once-per-10-ages-of-the-universe sort of deal" as you put it!
So whether a hash is "good enough" for you depends on the consequences of a collision. You could always add a unique id to a checksum/hash to get the best of both worlds.
Why don't you use a unique ID from your database?
The probability that two hashes collide depends on the hash size. MD5 produces a 128-bit hash, so among 2^128 + 1 hashes there is guaranteed to be at least one collision.
This number is 2^160 + 1 for SHA1 and 2^512 + 1 for SHA512.
The more output bits, the more uniqueness, and the more computation; so there is a trade-off, and what you have to do is choose an optimal one.
Could two ~200 char bytearrays MD5-hash down to the same string?
Considering that there are more 200-byte strings than 16-byte MD5 digests, that is guaranteed to be the case.
All hash functions have that problem, but some are more robust than MD5. Try SHA-1; Git uses it for the same purpose.
It may happen that two MD5 hashes collide (are the same). In 1996, a flaw was found in the MD5 algorithm, and cryptanalysts advised switching to the SHA-1 hashing algorithm.
So I will advise you to switch to SHA-1 (40 characters). But do not worry: I doubt that your two pictures will get the same hash. I think you can accept this risk in your application.
As others said before, a hash doesn't give you what you need unless you are fine with the risk of collision.
A database is helpful here.
You get a unique index for each 200-character string, with no collisions. You need to set your 200-character names to be indexed; that way it will use extra memory, but it will sort them for you, making searches very fast. You get a unique id which can easily be used for filenames.
I haven't worked much on hashing algorithms, but as per my understanding there is always a chance of collision in a hashing algorithm, i.e. two different objects may be hashed to the same hash value, though it is guaranteed that an object will always be hashed to the same hash value. Collisions can be handled with other techniques, like linear probing.
