OpenSSL EVP_aes_128_cbc decryption gives unexpected result - C

I encrypted a string with the EVP_aes_128_cbc cipher, then changed the 1st byte of the ciphertext and decrypted the modified ciphertext. Unexpectedly, the decryption neither failed nor produced a fully wrong result: only the first 16 bytes came out wrong, and the rest of the string was unchanged. Here is the encrypt function:
int do_crypt1(unsigned char *in, unsigned char *outbuf, int inlen, int do_encrypt)
{
    unsigned char inbuf[1024];
    int outlen;
    EVP_CIPHER_CTX *ctx;
    /*
     * Bogus key and IV: we'd normally set these from
     * another source.
     */
    unsigned char key[32] = "test";
    unsigned char iv[] = "1234567812345678";
    /* Don't set key or IV right away; we want to check lengths */
    ctx = EVP_CIPHER_CTX_new();
    EVP_CipherInit_ex(ctx, EVP_aes_128_cbc(), NULL, NULL, NULL, do_encrypt);
    OPENSSL_assert(EVP_CIPHER_CTX_key_length(ctx) == 16);
    OPENSSL_assert(EVP_CIPHER_CTX_iv_length(ctx) == 16);
    /* Now we can set key and IV */
    EVP_CipherInit_ex(ctx, NULL, NULL, key, iv, do_encrypt);
    int update_len = 0;
    for (;;) {
        memcpy(inbuf, in, inlen);
        if (inlen <= 0)
            break;
        if (!EVP_CipherUpdate(ctx, outbuf, &outlen, inbuf, inlen)) {
            /* Error */
            EVP_CIPHER_CTX_free(ctx);
            return 0;
        }
        update_len += outlen;
        if (inlen <= 1024)
            break;
    }
    if (!EVP_CipherFinal_ex(ctx, outbuf + outlen, &outlen)) {
        /* Error */
        EVP_CIPHER_CTX_free(ctx);
        return 0;
    }
    EVP_CIPHER_CTX_free(ctx);
    update_len += outlen;
    return update_len;
}
and in the main function:
unsigned char* b = (unsigned char*)malloc(1024);
unsigned char* hmac_code = (unsigned char*)malloc(1024);
int len = do_crypt1(value, hmac_code, strlen(value), AES_ENCRYPT);
printf("%s\n", value);
for (i = 0; i < len; i++)
    printf("%x", *(hmac_code + i));
printf("\n");
printf("%d\n", len);
// printf("%s\n", hmac_code);
*hmac_code = 0x23;
len = do_crypt1(hmac_code, b, len, AES_DECRYPT);
printf("%d\n", len);
printf("%s\n", b);
free(hmac_code);
free(b);
(screenshot of the program output omitted)
Could anyone explain the reason and how to resolve this?

Actually, as long as your plaintext is at least 17 bytes, the first 17 decrypted bytes should be wrong -- but since you're displaying the invalid decryption as text, some of the bytes may be invisible: in your example, the output is 2 chars shorter, so clearly 2 chars of the first 17 aren't visible. That's the expected result for damage in (at the beginning of) the first block of a 2-block (or more) CBC ciphertext for a 16-byte-block cipher like AES; see the diagram on Wikipedia to understand why.
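In short: CBC decryption computes each plaintext block as P[i] = Dec_K(C[i]) XOR C[i-1], with C[0] = IV. Corrupting one byte of the first ciphertext block therefore garbles all 16 bytes of the first plaintext block (that block goes through the block cipher) and flips exactly the corresponding byte of the second plaintext block (where it is only XORed in), so exactly the first 17 bytes are affected.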
If you want detection of damage to the ciphertext, don't use CBC mode, or at least don't use it alone. Common/standard solutions nowadays are:
add authentication, such as HMAC. (Intriguingly your code already uses the variable name hmac_code even though you don't do anything even remotely related to HMAC.)
use an authenticated-encryption mode, like GCM, which effectively combines encryption and (some type of) MAC internally (a sketch of GCM with the EVP API follows this list). Most authenticated modes today, including GCM, also support 'additional' or 'associated' data that is authenticated but not encrypted, and as a result are called AEAD, but you may or may not care about that.
use an error-propagating mode. These were popular decades ago, around the time of presidents Nixon, Ford, Carter, and Reagan, but are now mostly considered obsolete and are not directly supported by OpenSSL.
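For illustration, here is a minimal sketch of the second option, AES-128-GCM encryption with the OpenSSL EVP API (the function name and the 16-byte tag buffer are illustrative, not from the question; a real application must use a fresh, never-reused 12-byte IV):

#include <openssl/evp.h>

/* Hedged sketch: AES-128-GCM encryption. Assumes key is 16 bytes and iv is
 * the default 12-byte GCM nonce. Returns ciphertext length, or -1 on error. */
int gcm_encrypt(const unsigned char *pt, int ptlen,
                const unsigned char *key, const unsigned char *iv,
                unsigned char *ct, unsigned char tag[16])
{
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    int len = 0, ctlen = -1;

    if (ctx != NULL
        && EVP_EncryptInit_ex(ctx, EVP_aes_128_gcm(), NULL, key, iv)
        && EVP_EncryptUpdate(ctx, ct, &len, pt, ptlen)) {
        ctlen = len;
        /* Finalize and fetch the 16-byte authentication tag; store/send the
         * tag with the ciphertext so the receiver can verify it. */
        if (EVP_EncryptFinal_ex(ctx, ct + len, &len)
            && EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_GET_TAG, 16, tag))
            ctlen += len;
        else
            ctlen = -1;
    }
    EVP_CIPHER_CTX_free(ctx);
    return ctlen;
}

On decryption you set the expected tag with EVP_CTRL_GCM_SET_TAG before EVP_DecryptFinal_ex; that final call then fails on any tampering, which is exactly the detection the plain-CBC code lacks.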
Also, BTW, your code is completely pants for values longer than 1024 bytes, which your test clearly doesn't exercise. First of all, you don't need to break such a value into chunks at all, but if for some reason you want to, the method you implemented is wrong.
Plus, the IV for CBC should be different and unpredictable every time; using a hardcoded value like this exposes you to two different classes of attacks. In general, the advice you will get on the Stack Exchange sites actually related to security is 'don't roll your own'. Cryptographic code written by people who don't know what they're doing, even if/when it produces the correct output, is usually insecure. This is the major difference between crypto/security software and other software: you can easily see whether your editor or spreadsheet or database produces correct or incorrect output, and as long as the output is correct that's usually all you need, but you can't tell by looking at the output of an encryption/decryption program whether it is secure or not.

Related

Different encryption results between platforms, using OpenSSL

I'm working on a piece of cross-platform (Windows and Mac OS X) code in C that needs to encrypt / decrypt blobs using AES-256 with CBC and a blocksize of 128 bits. Among various libraries and APIs I've chosen OpenSSL.
This piece of code will then upload the blob using a multipart-form PUT to a server which then decrypts it using the same settings in .NET's crypto framework (Aes, CryptoStream, etc...).
The problem I'm facing is that the server decryption works fine when the local encryption is done on Windows but it fails when the encryption is done on Mac OS X - the server throws a "Padding is invalid and cannot be removed exception".
I've looked at this from many perspectives:
I verified that the transportation is correct - the byte array received on the server's decrypt method is exactly the same that is sent from Mac OS X and Windows
The actual content of the encrypted blob, for the same key, is different between Windows and Mac OS X. I tested this using a hardcoded key and run this patch on Windows and Mac OS X for the same blob
I'm sure the padding is correct, since it is taken care of by OpenSSL and since the same code works on Windows. Even so, I tried implementing the padding scheme as it is in Microsoft's reference source for .NET, but still no go
I verified that the IV is the same for Windows and Mac OS X (I thought maybe there was a problem with some of the special characters such as ETB that appear in the IV, but there wasn't)
I've tried LibreSSL and mbedtls, with no positive results. In mbedtls I also had to implement padding because, as far as I know, padding is the responsibility of the API's user
I've been at this problem for almost two weeks now and I'm starting to pull my (ever scarce) hair out
As a frame of reference, I'll post the C client's code for encrypting and the server's C# code for decrypting. Some minor details on the server side will be omitted (they don't interfere with the crypto code).
Client:
/*++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++*/
void
__setup_aes(EVP_CIPHER_CTX *ctx, const char *key, qvr_bool encrypt)
{
    static const char *iv = ""; /* for security reasons, the actual IV is omitted... */
    if (encrypt)
        EVP_EncryptInit(ctx, EVP_aes_256_cbc(), key, iv);
    else
        EVP_DecryptInit(ctx, EVP_aes_256_cbc(), key, iv);
}
/*++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++*/
void
__encrypt(void *buf,
          size_t buflen,
          const char *key,
          unsigned char **outbuf,
          size_t *outlen)
{
    EVP_CIPHER_CTX ctx;
    int blocklen = 0;
    int finallen = 0;
    int remainder = 0;
    __setup_aes(&ctx, key, QVR_TRUE);
    EVP_CIPHER *c = ctx.cipher;
    blocklen = EVP_CIPHER_CTX_block_size(&ctx);
    //*outbuf = (unsigned char *) malloc((buflen + blocklen - 1) / blocklen * blocklen);
    remainder = buflen % blocklen;
    *outlen = remainder == 0 ? buflen : buflen + blocklen - remainder;
    *outbuf = (unsigned char *) calloc(*outlen, sizeof(unsigned char));
    EVP_EncryptUpdate(&ctx, *outbuf, outlen, buf, buflen);
    EVP_EncryptFinal_ex(&ctx, *outbuf + *outlen, &finallen);
    EVP_CIPHER_CTX_cleanup(&ctx);
    //*outlen += finallen;
}
Server:
static Byte[] Decrypt(byte[] input, byte[] key, byte[] iv)
{
    try
    {
        // Check arguments.
        if (input == null || input.Length <= 0)
            throw new ArgumentNullException("input");
        if (key == null || key.Length <= 0)
            throw new ArgumentNullException("key");
        if (iv == null || iv.Length <= 0)
            throw new ArgumentNullException("iv");
        byte[] unprotected;
        using (var encryptor = Aes.Create())
        {
            encryptor.Key = key;
            encryptor.IV = iv;
            using (var msInput = new MemoryStream(input))
            {
                msInput.Position = 0;
                using (var cs = new CryptoStream(msInput, encryptor.CreateDecryptor(),
                                                 CryptoStreamMode.Read))
                using (var data = new BinaryReader(cs))
                using (var outStream = new MemoryStream())
                {
                    byte[] buf = new byte[2048];
                    int bytes = 0;
                    while ((bytes = data.Read(buf, 0, buf.Length)) != 0)
                        outStream.Write(buf, 0, bytes);
                    return outStream.ToArray();
                }
            }
        }
    }
    catch (Exception ex)
    {
        throw ex;
    }
}
Does anyone have any clue as to why this could possibly be happening? For reference, this is the .NET method from Microsoft's reference source .sln that (I think) does the decryption: https://gist.github.com/Metaluim/fcf9a4f1012fdeb2a44f#file-rijndaelmanagedtransform-cs
OpenSSL version differences are messy. First, I suggest you explicitly force and verify the key lengths, keys, IVs and encryption modes on both sides; I don't see that in the code. Then I would suggest you decrypt on the server side without padding. This will always succeed, and you can then inspect the last block to see whether it is what you expect.
Do this with the Windows-encrypted and the Mac OS X-encrypted variant and you will see a difference, most likely in the padding.
The outlen buffer-size computation in the C code looks odd. Encrypting a 16-byte plaintext results in 32 bytes of ciphertext, but you only provide a 16-byte buffer. This will not work; you will write out of bounds. Maybe it works just by chance on Windows because of a more generous memory layout and fails on Mac OS X.
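For reference, a minimal sketch of the sizing the EVP documentation calls for, reusing the ctx/buf/buflen names from the question (everything else is illustrative): allocate the input length plus one block, and sum the two output lengths:

int block = EVP_CIPHER_block_size(EVP_aes_256_cbc());   /* 16 for AES */
unsigned char *out = malloc(buflen + block);            /* worst case: input + one padding block */
int len1 = 0, len2 = 0;
EVP_EncryptUpdate(&ctx, out, &len1, (const unsigned char *)buf, (int)buflen);
EVP_EncryptFinal_ex(&ctx, out + len1, &len2);           /* writes the final, padded block */
size_t total = (size_t)len1 + (size_t)len2;             /* the actual ciphertext length */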
The AES padding scheme changed between OpenSSL versions 0.9.8* and 1.0.1* (at least between 0.9.8r and 1.0.1j). If your two modules use these different versions of OpenSSL, this could be the reason for your problem. To verify this, first check the OpenSSL versions; if you hit the described case, consider aligning the padding scheme on both sides.

aes-gcm using libgcrypt api in C

I'm playing with libgcrypt (v1.6.1 on Gentoo x64) and I've already implemented (and tested through the AES test vectors) aes256-cbc and aes256-ctr. Now I am looking at aes256-gcm, but I have some doubts about the workflow. Below is the skeleton of a simple encryption program:
int main(void) {
    unsigned char TEST_KEY[] = {0x60,0x3d,0xeb,0x10,0x15,0xca,0x71,0xbe,0x2b,0x73,0xae,0xf0,0x85,0x7d,0x77,0x81,0x1f,0x35,0x2c,0x07,0x3b,0x61,0x08,0xd7,0x2d,0x98,0x10,0xa3,0x09,0x14,0xdf,0xf4};
    unsigned char TEST_IV[] = {0x00,0x01,0x02,0x03,0x04,0x05,0x06,0x07,0x08,0x09,0x0a,0x0b,0x0c,0x0d,0x0e,0x0f};
    unsigned char TEST_PLAINTEXT_1[] = {0x6b,0xc1,0xbe,0xe2,0x2e,0x40,0x9f,0x96,0xe9,0x3d,0x7e,0x11,0x73,0x93,0x17,0x2a};
    unsigned char cipher[16] = {0};
    int algo = -1, i;
    const char *name = "aes256";
    algo = gcry_cipher_map_name(name);
    gcry_cipher_hd_t hd;
    gcry_cipher_open(&hd, algo, GCRY_CIPHER_MODE_GCM, 0);
    gcry_cipher_setkey(hd, TEST_KEY, 32);
    gcry_cipher_setiv(hd, TEST_IV, 16);
    gcry_cipher_encrypt(hd, cipher, 16, TEST_PLAINTEXT_1, 16);
    char out[33];
    for (i = 0; i < 16; i++) {
        sprintf(out + (i * 2), "%02x", cipher[i]);
    }
    out[32] = '\0';
    printf("%s\n", out);
    gcry_cipher_close(hd);
    return 0;
}
GCM mode additionally involves these functions:
gcry_error_t gcry_cipher_authenticate (gcry_cipher_hd_t h, const void *abuf, size_t abuflen)
gcry_error_t gcry_cipher_gettag (gcry_cipher_hd_t h, void *tag, size_t taglen)
So the correct workflow of the encryption program is:
gcry_cipher_authenticate
gcry_cipher_encrypt
gcry_cipher_gettag
But what I haven't understood is:
Is abuf like a salt? (So do I have to generate it using gcry_create_nonce or similar?)
If I want to encrypt a file, is void *tag what I have to write to the output file?
1) gcry_cipher_authenticate is for supporting authenticated encryption with associated data. abuf is data that you need to authenticate but do not need to encrypt. For example, if you are sending a packet, you might want to encrypt the body, but you must send the header unencrypted for the packet to be delivered. The tag generated by the cipher will provide integrity for both the encrypted data and the data sent in plain.
2) The tag is used after decryption to make sure that the data has not been tampered with. You append the tag to the encrypted text. Note, that it is computed on encrypted data and associated (unencrypted) data, so you will need both when decrypting.
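As a concrete illustration of that call order, here is a hedged fragment in the style of the question's code (aad and aad_len stand for whatever header data you want authenticated in the clear; error checking omitted):

gcry_cipher_authenticate(hd, aad, aad_len);                 /* 1. associated data, authenticated but not encrypted */
gcry_cipher_encrypt(hd, cipher, 16, TEST_PLAINTEXT_1, 16);  /* 2. encrypt the payload */
unsigned char tag[16];
gcry_cipher_gettag(hd, tag, sizeof tag);                    /* 3. fetch the tag; write it after the ciphertext */

On decryption you redo steps 1-2 (with gcry_cipher_decrypt) and then call gcry_cipher_checktag with the tag read back from the file; a mismatch means the data was tampered with.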
You can check these documents for more information on GCM:
http://csrc.nist.gov/groups/ST/toolkit/BCM/documents/proposedmodes/gcm/gcm-spec.pdf
http://csrc.nist.gov/publications/nistpubs/800-38D/SP-800-38D.pdf
Also, you can probably get faster answers to cryptography questions like this on http://crypto.stackexchange.com.

How to encrypt/decrypt long input messages with RSA? [Openssl, C]

I wrote a simple test program that encrypts/decrypts a message.
I have a keylength:
int keylength = 1024; // it can also be 2048, 4096
and max input length:
int maxlen = (keylength/8)-11;
and I know that my input size should be smaller than maxlen, something like this:
if (insize >= maxlen)
    printf("cannot encrypt/decrypt!\n");
My question is simple - is it possible (and if so, how can I do it) to encrypt/decrypt messages LONGER than maxlen with RSA?
The main code is also very simple, but works only when insize < maxlen:
if ((encBytes = RSA_public_encrypt(strlen(buff1)+1, buff1, buff2, keypair, RSA_PKCS1_PADDING)) == -1)
{
    printf("error\n");
}
if ((decBytes = RSA_private_decrypt(encBytes, buff2, buff3, keypair, RSA_PKCS1_PADDING)) == -1)
{
    printf("error\n");
}
Encrypting long messages requires a combined scheme - the RSA algorithm encrypts a session key (i.e. an AES key), and the data itself is encrypted with that key.
I would recommend not reinventing the wheel, and instead using a well-established scheme, i.e. PKCS#7/CMS or OpenPGP, depending on your needs.
You would be able to encrypt long messages with RSA the same way as it is done with block ciphers. That is, encrypt the messages in blocks and bind the blocks with an appropriate chaining mode. However, this is not the usual way to do it and you won't find support for it (RSA chaining) in the libraries you use.
Since RSA is quite slow, the usual way to encrypt large messages is hybrid encryption. In hybrid encryption you use a fast symmetric encryption algorithm (like AES) to encrypt the data with a random key. The random key is then encrypted with RSA and sent along with the symmetric-key-encrypted data.
EDIT:
As for your implementation, you have insize = 1300 and keylength = 1024, which gives maxlen = 117. To encrypt the full message you thus need 12 encryptions, each producing 128 bytes, giving an encrypted size of 1536 bytes. In your code you only allocate buffers of 1416 bytes. Also, you don't seem to allow for 128 bytes of output, as you only increment by 117 in:
RSA_public_encrypt(maxlen, buff1+i, buff2+i, keypair, RSA_PKCS1_PADDING)
and
i += maxlen;
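For what it's worth, OpenSSL's envelope API implements exactly this hybrid scheme. A minimal sketch, assuming pub is an already-loaded RSA EVP_PKEY, ct has room for ptlen plus one AES block, ek has room for EVP_PKEY_size(pub) bytes, and iv has room for EVP_MAX_IV_LENGTH bytes (names and error handling are mine, not from the question):

#include <openssl/evp.h>

/* EVP_SealInit generates a random AES session key, RSA-encrypts it into ek,
 * and picks a random IV; the data itself is then encrypted with AES.
 * Send ek, iv and the ciphertext to the receiver, which uses EVP_Open*. */
int seal_sketch(EVP_PKEY *pub, unsigned char *pt, int ptlen,
                unsigned char *ct, unsigned char *ek, int *eklen,
                unsigned char *iv)
{
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    int len1 = 0, len2 = 0;
    if (!ctx
        || !EVP_SealInit(ctx, EVP_aes_256_cbc(), &ek, eklen, iv, &pub, 1)
        || !EVP_SealUpdate(ctx, ct, &len1, pt, ptlen)
        || !EVP_SealFinal(ctx, ct + len1, &len2)) {
        EVP_CIPHER_CTX_free(ctx);
        return -1;
    }
    EVP_CIPHER_CTX_free(ctx);
    return len1 + len2;   /* ciphertext length */
}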
You can use RSA as a block cipher in that case, i.e. break the message into several blocks smaller than maxlen.
Otherwise it is impossible.
If you want to run RSA in a "block cipher" kind of mode, you would need to run it in a loop.
Like most of the other commenters, I'd like to point out that this is a bad use of RSA - You should just encrypt a AES key with RSA then use AES for the longer message.
However, I'm not one to let practicality get in the way of learning, so here's how you'd do it. This code isn't tested, since I don't know what libraries you are using. It's also a little overly-verbose, for readability.
int inLength = strlen(buff1) + 1;
int numBlocks = (inLength / maxlen) + 1;
for (int i = 0; i < numBlocks; i++) {
    int bytesDone = i * maxlen;
    int remainingLen = inLength - bytesDone;
    int thisLen; // The length of this block
    if (remainingLen > maxlen) {
        thisLen = maxlen;
    } else {
        thisLen = remainingLen;
    }
    if ((encBytes = RSA_public_encrypt(thisLen, buff1 + bytesDone, buff2 + bytesDone, keypair, RSA_PKCS1_PADDING)) == -1)
    {
        printf("error\n");
    }
    // Okay, IDK what the first parameter to this should be. It depends on the library. You can figure it out, hopefully.
    if ((decBytes = RSA_private_decrypt(idk, buff2 + bytesDone, buff3 + bytesDone, keypair, RSA_PKCS1_PADDING)) == -1)
    {
        printf("error\n");
    }
}
maxlen actually depends on the key length and the padding mode. Note that the newer OAEP padding scheme, e.g. in the Java Encryption Engine, takes an additional 42 bytes instead of 11. Known libraries are not designed for using RSA in a block-cipher mode.
For that purpose, beyond the fragmentation shown above, security considerations require further modification of the padding scheme; see e.g. https://crypto.stackexchange.com/a/97974/98888

Different ciphertexts when I used OpenSSL AES command line tools and OpenSSL AES APIs?

Why do I get different ciphertexts when I use the openssl AES command-line tool and the OpenSSL AES APIs?
I have used three types of encryption:
Type a) openssl command line tool
Type b) classes in javax.cryto
Type c) OpenSSL C api.
Using types (a) and (b), I got the same ciphertext. But I got a different ciphertext when using (c).
I want to get the same ciphertext from method (c) as from methods (a)/(b).
I think there's something wrong in type (c), but I can't find it. Note that I used the same KEY/IV pair in all three methods.
Type a:
openssl enc -aes-128-cbc -e -a -in pt.txt -out ct.txt -K 01010101010101010101010101010101 -iv 01010101010101010101010101010101 -p
Type b:
Java code using javax.crypto. I won't paste the code, because this way I got the same ciphertext with Type a.
Type c:
C code using OpenSSL API:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <openssl/aes.h>
int main(int argc, char** argv) {
    AES_KEY aes;
    unsigned char key[AES_BLOCK_SIZE];  // AES_BLOCK_SIZE = 16
    unsigned char iv[AES_BLOCK_SIZE];   // init vector
    unsigned char* input_string;
    unsigned char* encrypt_string;
    unsigned char* decrypt_string;
    unsigned int len;  // encrypt length (in multiple of AES_BLOCK_SIZE)
    unsigned int i;
    // check usage
    if (argc != 2) {
        fprintf(stderr, "%s <plain text>\n", argv[0]);
        exit(-1);
    }
    // set the encryption length
    len = 0;
    if (strlen(argv[1]) >= AES_BLOCK_SIZE ||
        (strlen(argv[1]) + 1) % AES_BLOCK_SIZE == 0) {
        len = strlen(argv[1]) + 1;
    } else {
        len = ((strlen(argv[1]) + 1) / AES_BLOCK_SIZE + 1) * AES_BLOCK_SIZE;
    }
    // set the input string
    input_string = (unsigned char*)calloc(len, sizeof(unsigned char));
    if (input_string == NULL) {
        fprintf(stderr, "Unable to allocate memory for input_string\n");
        exit(-1);
    }
    strncpy((char*)input_string, argv[1], strlen(argv[1]));
    // Generate AES 128-bit key
    memset(key, 0x01, AES_BLOCK_SIZE);
    // Set encryption key
    memset(iv, 0x01, AES_BLOCK_SIZE);
    if (AES_set_encrypt_key(key, 128, &aes) < 0) {
        fprintf(stderr, "Unable to set encryption key in AES\n");
        exit(-1);
    }
    // alloc encrypt_string
    encrypt_string = (unsigned char*)calloc(len, sizeof(unsigned char));
    if (encrypt_string == NULL) {
        fprintf(stderr, "Unable to allocate memory for encrypt_string\n");
        exit(-1);
    }
    // encrypt (iv will change)
    AES_cbc_encrypt(input_string, encrypt_string, len, &aes, iv, AES_ENCRYPT);
    // alloc decrypt_string
    decrypt_string = (unsigned char*)calloc(len, sizeof(unsigned char));
    if (decrypt_string == NULL) {
        fprintf(stderr, "Unable to allocate memory for decrypt_string\n");
        exit(-1);
    }
    // Set decryption key
    memset(iv, 0x01, AES_BLOCK_SIZE);
    if (AES_set_decrypt_key(key, 128, &aes) < 0) {
        fprintf(stderr, "Unable to set decryption key in AES\n");
        exit(-1);
    }
    // decrypt
    AES_cbc_encrypt(encrypt_string, decrypt_string, len, &aes, iv, AES_DECRYPT);
    // print
    printf("input_string =%s\n", input_string);
    printf("encrypted string =");
    for (i = 0; i < len; ++i) {
        printf("%u ", encrypt_string[i]);
    }
    printf("\n");
    printf("decrypted string =%s\n", decrypt_string);
    return 0;
}
What could be the reason for different outputs?
In your C code, you are essentially using zero-padding: You allocate a memory area filled by zeros (by calloc), and then copy the plain text into this area, leaving the zeros at the end intact.
The openssl enc command uses different padding than your C code. The documentation for openssl enc says (emphasis by me):
All the block ciphers normally use PKCS#5 padding also known as standard block padding: this allows a rudimentary integrity or password check to be performed. However since the chance of random data passing the test is better than 1 in 256 it isn't a very good test.
In addition, the openssl enc command normally uses a salt by default, which randomizes the ciphertext; the salt serves a similar purpose as a per-message Initialization Vector (IV). But since you supply a raw key with -K and an explicit -iv, no salt is involved here, so it is not what makes your ciphertexts differ.
The documentation for javax.crypto.Cipher (which I suppose you used) says:
A transformation is of the form:
"algorithm/mode/padding" or
"algorithm"
(in the latter case, provider-specific default values for the mode and padding
scheme are used). For example, the following is a valid transformation:
Cipher c = Cipher.getInstance("DES/CBC/PKCS5Padding");
So if you simply use AES or AES/CBC without indicating the padding mode, it uses whatever it finds fitting, which in your case happened to be the same as what OpenSSL used (i.e. PKCS#5 padding).
To change your C program, you'll have to do the same padding yourself: fill the last block with N padding bytes, each of value N, where N is the number of bytes needed to reach the block size, and append a whole block of bytes with value 16 when the message already ends on a block boundary. Alternatively, use the higher-level EVP functions, which let you specify the padding mode of the cipher.
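A hedged sketch of that padding step for a 16-byte block (the function name is mine; out must have room for len + 16 bytes):

#include <string.h>

/* PKCS#5/#7 padding: append pad bytes, each holding the pad count;
 * a message that already fills the last block gets a whole extra block of 16s. */
size_t pkcs7_pad(unsigned char *out, const unsigned char *in, size_t len)
{
    size_t pad = 16 - (len % 16);   /* always in 1..16 */
    memcpy(out, in, len);
    memset(out + len, (int)pad, pad);
    return len + pad;               /* padded length, a multiple of 16 */
}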

C Libmcrypt cannot encrypt/decrypt successfully

I am working with libmcrypt in C and attempting to implement a simple test of encryption and decryption using rijndael-256 as the algorithm of choice. I have mirrored this test implementation pretty closely on the man page examples, with rijndael in place of their chosen algorithms. When compiled with gcc -o encryption_test main.c -lmcrypt, the following source code produces output similar to:
The encrypted message buffer contains j��A��8 �qj��%`��jh���=ZЁ�j
The original string was ��m"�C��D�����Y�G�v6��s��zh�
Obviously, the decryption part is failing, but as it is just a single function call, it leads me to believe the encryption scheme is not behaving correctly either. I have several questions for the libmcrypt gurus out there, if you could point me in the right direction.
First, what is causing this code to produce this broken output?
Second, when dealing with mandatory fixed sizes such as the key size and block size: for a 256-bit key, for example, does the function expect 32 bytes of key plus a trailing null byte, 31 bytes of key plus a trailing null byte, or 32 bytes of key with any 33rd byte being irrelevant? The same question holds for the block size as well.
Lastly, one of the examples I noted used mhash to generate a hash of the key text to supply to the encryption call. This is of course preferable, but it was commented out, and linking in mhash seems to fail. What is the accepted way of handling this type of key conversion when working with libmcrypt? I have chosen to leave any such complexities out so as to avoid further complicating already broken code, but I would like to incorporate this into the final design. Below is the source code in question:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <time.h>    /* for srand(time(0)) */
#include <mcrypt.h>

int main(int argc, char *argv[])
{
    MCRYPT mfd;
    char *key;
    char *IV;
    unsigned char *message, *buffered_message;
    int i, blocks, key_size = 32, block_size = 32;
    message = (unsigned char *)"Test Message";
    /** Buffer message for encryption */
    blocks = (int)(strlen((const char *)message) / block_size) + 1;
    buffered_message = calloc(1, (blocks * block_size));
    key = calloc(1, key_size);
    strcpy(key, "&*GHLKPK7G1SD4CF%6HJ0(IV#X6f0(PK");
    mfd = mcrypt_module_open(MCRYPT_RIJNDAEL_256, NULL, "cbc", NULL);
    if (mfd == MCRYPT_FAILED)
    {
        printf("Mcrypt module open failed.\n");
        return 1;
    }
    /** Generate random IV */
    srand(time(0));
    IV = malloc(mcrypt_enc_get_iv_size(mfd));
    for (i = 0; i < mcrypt_enc_get_iv_size(mfd); i++)
    {
        IV[i] = rand();
    }
    /** Initialize cipher with key and IV */
    i = mcrypt_generic_init(mfd, key, key_size, IV);
    if (i < 0)
    {
        mcrypt_perror(i);
        return 1;
    }
    strncpy((char *)buffered_message, (const char *)message, strlen((const char *)message));
    mcrypt_generic(mfd, buffered_message, block_size);
    printf("The encrypted message buffer contains %s\n", buffered_message);
    mdecrypt_generic(mfd, buffered_message, block_size);
    printf("The original string was %s\n", buffered_message);
    mcrypt_generic_deinit(mfd);
    mcrypt_module_close(mfd);
    return 0;
}
You need to re-initialize the descriptor mfd for decryption; you cannot use the same descriptor for both encryption and decryption.
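A minimal sketch of the fix, reusing the variables from the question (hedged: re-opening the module also works; the key point is that the handle must be reset with the same key and the original IV before decrypting):

/* ... encryption as before ... */
mcrypt_generic(mfd, buffered_message, block_size);
/* Reset the descriptor: deinit, then init again with the same key and the ORIGINAL IV */
mcrypt_generic_deinit(mfd);
mcrypt_generic_init(mfd, key, key_size, IV);
mdecrypt_generic(mfd, buffered_message, block_size);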
