AES-GCM using the libgcrypt API in C

I'm playing with libgcrypt (v1.6.1 on Gentoo x64) and I've already implemented (and tested against the AES test vectors) AES-256-CBC and AES-256-CTR. Now I am looking at AES-256-GCM, but I have some doubts about the workflow. Below is a skeleton of a simple encryption program:
#include <stdio.h>
#include <gcrypt.h>

int main(void)
{
    unsigned char TEST_KEY[] = {0x60,0x3d,0xeb,0x10,0x15,0xca,0x71,0xbe,0x2b,0x73,0xae,0xf0,0x85,0x7d,0x77,0x81,0x1f,0x35,0x2c,0x07,0x3b,0x61,0x08,0xd7,0x2d,0x98,0x10,0xa3,0x09,0x14,0xdf,0xf4};
    unsigned char TEST_IV[] = {0x00,0x01,0x02,0x03,0x04,0x05,0x06,0x07,0x08,0x09,0x0a,0x0b,0x0c,0x0d,0x0e,0x0f};
    unsigned char TEST_PLAINTEXT_1[] = {0x6b,0xc1,0xbe,0xe2,0x2e,0x40,0x9f,0x96,0xe9,0x3d,0x7e,0x11,0x73,0x93,0x17,0x2a};
    unsigned char cipher[16] = {0};
    int algo = -1, i;
    const char *name = "aes256";

    gcry_check_version(NULL); /* initialize libgcrypt before first use */
    algo = gcry_cipher_map_name(name);

    gcry_cipher_hd_t hd;
    gcry_cipher_open(&hd, algo, GCRY_CIPHER_MODE_GCM, 0);
    gcry_cipher_setkey(hd, TEST_KEY, 32);
    gcry_cipher_setiv(hd, TEST_IV, 16);
    gcry_cipher_encrypt(hd, cipher, 16, TEST_PLAINTEXT_1, 16);

    /* print the ciphertext as hex */
    char out[33];
    for (i = 0; i < 16; i++) {
        sprintf(out + (i * 2), "%02x", cipher[i]);
    }
    out[32] = '\0';
    printf("%s\n", out);

    gcry_cipher_close(hd);
    return 0;
}
In GCM mode, these additional functions also come into play:
gcry_error_t gcry_cipher_authenticate (gcry_cipher_hd_t h, const void *abuf, size_t abuflen)
gcry_error_t gcry_cipher_gettag (gcry_cipher_hd_t h, void *tag, size_t taglen)
So the correct workflow of the encryption program is:
gcry_cipher_authenticate
gcry_cipher_encrypt
gcry_cipher_gettag
But what I haven't understood is:
Is abuf something like a salt? (If so, do I have to generate it using gcry_create_nonce or similar?)
If I want to encrypt a file, is void *tag what I have to write to the output file?

1) gcry_cipher_authenticate is for supporting authenticated encryption with associated data. abuf is data that you need to authenticate but do not need to encrypt. For example, if you are sending a packet, you might want to encrypt the body, but you must send the header unencrypted for the packet to be delivered. The tag generated by the cipher will provide integrity for both the encrypted data and the data sent in plain.
2) The tag is used after decryption to make sure that the data has not been tampered with. You append the tag to the encrypted text. Note that it is computed over both the encrypted data and the associated (unencrypted) data, so you will need both when decrypting.
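To make the workflow concrete, here is a minimal sketch of the encrypt side (error checking omitted; the 12-byte IV and 16-byte tag are the conventional GCM sizes, and aad/aad_len stand in for whatever associated data you have, possibly none):

#include <gcrypt.h>

static void gcm_encrypt_sketch(const unsigned char *key,    /* 32 bytes */
                               const unsigned char *iv,     /* 12 bytes: recommended GCM IV size */
                               const unsigned char *aad, size_t aad_len,
                               const unsigned char *in, unsigned char *out, size_t len,
                               unsigned char tag[16])
{
    gcry_cipher_hd_t hd;
    gcry_cipher_open(&hd, GCRY_CIPHER_AES256, GCRY_CIPHER_MODE_GCM, 0);
    gcry_cipher_setkey(hd, key, 32);
    gcry_cipher_setiv(hd, iv, 12);
    gcry_cipher_authenticate(hd, aad, aad_len); /* authenticated but NOT encrypted */
    gcry_cipher_encrypt(hd, out, len, in, len);
    gcry_cipher_gettag(hd, tag, 16);            /* write iv + ciphertext + tag to the file */
    gcry_cipher_close(hd);
}

On the decrypt side you run the same sequence with gcry_cipher_decrypt, but instead of gcry_cipher_gettag you call gcry_cipher_checktag with the tag read back from the file; it returns an error if either the ciphertext or the associated data was tampered with.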
You can check these documents for more information on GCM:
http://csrc.nist.gov/groups/ST/toolkit/BCM/documents/proposedmodes/gcm/gcm-spec.pdf
http://csrc.nist.gov/publications/nistpubs/800-38D/SP-800-38D.pdf
Also, you can probably get faster answers to cryptography questions like this on http://crypto.stackexchange.com.

Related

OpenSSL in Windows Kernel Mode

Is there a way I can use the OpenSSL library in Windows kernel mode? I want to make a Windows filter driver that intercepts read and write operations and encrypts and decrypts the buffers. I have already made a driver that can replace the buffers with some arbitrary content, but now I need to encrypt the original content. I tried to include the OpenSSL DLLs in the project, and I can compile and install the driver, but when I try to start it I get this error:
System error 2 has occurred.
The system cannot find the file specified.
This is the code that does the encryption. I know it's not safe to use a static key, but it's just for testing.
void encrypt_aes_ctr(unsigned char *key, unsigned char *iv, unsigned char *data,
                     unsigned char *out_data, int in_len, int *out_len)
{
    int len;
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    EVP_EncryptInit_ex(ctx, EVP_aes_256_ctr(), NULL, key, iv);
    EVP_EncryptUpdate(ctx, out_data, out_len, data, in_len);
    EVP_EncryptFinal_ex(ctx, out_data + *out_len, &len);
    *out_len += len;
    EVP_CIPHER_CTX_free(ctx); /* free the context to avoid a leak */
}
And this is the call I make in the SwapPreWriteBuffers function
unsigned char key[32] = { 1,2,3,4,5,6,7,8,9,10,11,12,1,2,3,4,5,6,7,8,9,10,11,12,1,2,3,4,5,6,7,8 };
unsigned char iv[16] = { 1,2,3,4,5,6,7,8,9,10,11,12,1,2,3,4 };
int len;
encrypt_aes_ctr(key, iv, origBuf, newBuf, writeLen, &len);
You should use the CNG API, which is Microsoft's standard API for crypto and is supported in kernel mode (the BCrypt* family of functions).
For encryption, for example, there is BCryptEncrypt.
There is also a Microsoft code example that uses BCrypt in the kernel (a random-number sample).
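For illustration only, a rough sketch of AES via CNG in a driver might look like the following. This is an untested outline: the buffers are assumed to come from non-paged pool, and since CNG exposes CBC/ECB/CFB/CCM/GCM but no plain CTR mode, it uses CBC instead of the CTR mode from the question:

#include <ntddk.h>
#include <bcrypt.h>
/* link against cng.lib */

NTSTATUS encrypt_aes_cbc(UCHAR *key, ULONG key_len, UCHAR *iv,
                         UCHAR *data, ULONG data_len,
                         UCHAR *out, ULONG out_size, ULONG *out_len)
{
    BCRYPT_ALG_HANDLE alg = NULL;
    BCRYPT_KEY_HANDLE hkey = NULL;
    NTSTATUS status;

    /* BCRYPT_PROV_DISPATCH requests a provider usable at DISPATCH_LEVEL */
    status = BCryptOpenAlgorithmProvider(&alg, BCRYPT_AES_ALGORITHM, NULL,
                                         BCRYPT_PROV_DISPATCH);
    if (!NT_SUCCESS(status))
        return status;

    status = BCryptSetProperty(alg, BCRYPT_CHAINING_MODE,
                               (PUCHAR)BCRYPT_CHAIN_MODE_CBC,
                               sizeof(BCRYPT_CHAIN_MODE_CBC), 0);
    if (NT_SUCCESS(status))
        status = BCryptGenerateSymmetricKey(alg, &hkey, NULL, 0, key, key_len, 0);
    if (NT_SUCCESS(status)) {
        /* note: BCryptEncrypt overwrites the IV buffer as it runs */
        status = BCryptEncrypt(hkey, data, data_len, NULL, iv, 16,
                               out, out_size, out_len, BCRYPT_BLOCK_PADDING);
        BCryptDestroyKey(hkey);
    }
    BCryptCloseAlgorithmProvider(alg, 0);
    return status;
}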

OpenSSL EVP_aes_128_cbc decryption got unexpected result

I encrypted a string with the EVP_aes_128_cbc cipher, then changed the first byte of the ciphertext, and decrypted the changed ciphertext. Unexpectedly, it neither reported a decryption error nor produced a fully wrong result: the first 16 bytes were wrong, but the rest of the string decrypted correctly. Here is the encrypt function:
int do_crypt1(unsigned char *in, unsigned char *outbuf, int inlen, int do_encrypt)
{
unsigned char inbuf[1024];
int outlen;
EVP_CIPHER_CTX *ctx;
/*
* Bogus key and IV: we'd normally set these from
* another source.
*/
unsigned char key[32] = "test";
unsigned char iv[] = "1234567812345678";
/* Don't set key or IV right away; we want to check lengths */
ctx = EVP_CIPHER_CTX_new();
EVP_CipherInit_ex(ctx, EVP_aes_128_cbc(), NULL, NULL, NULL,
do_encrypt);
OPENSSL_assert(EVP_CIPHER_CTX_key_length(ctx) == 16);
OPENSSL_assert(EVP_CIPHER_CTX_iv_length(ctx) == 16);
/* Now we can set key and IV */
EVP_CipherInit_ex(ctx, NULL, NULL, key, iv, do_encrypt);
int update_len = 0;
for (;;) {
memcpy(inbuf, in, inlen);
if (inlen <= 0)
break;
if (!EVP_CipherUpdate(ctx, outbuf, &outlen, inbuf, inlen)) {
/* Error */
EVP_CIPHER_CTX_free(ctx);
return 0;
}
update_len += outlen;
if(inlen <= 1024)
break;
}
if (!EVP_CipherFinal_ex(ctx, outbuf+outlen, &outlen)) {
/* Error */
EVP_CIPHER_CTX_free(ctx);
return 0;
}
EVP_CIPHER_CTX_free(ctx);
update_len += outlen;
return update_len;
}
and in the main:
unsigned char* b = (unsigned char*)malloc(1024);
unsigned char* hmac_code = (unsigned char*)malloc(1024);
int len = do_crypt1(value, hmac_code, strlen(value), AES_ENCRYPT);
printf("%s\n", value);
for (i = 0; i < len; i++)
printf("%x", *(hmac_code + i));
printf("\n");
printf("%d\n", len);
// printf("%s\n", hmac_code);
*hmac_code = 0x23;
len = do_crypt1(hmac_code, b, len, AES_DECRYPT);
printf("%d\n", len);
printf("%s\n", b);
free(hmac_code);
free(b);
(result output was posted as a screenshot, omitted here)
Could anyone give me the reason and how to resolve this?
Actually, as long as your plaintext is at least 17 bytes, the first 17 decrypted bytes should be wrong. That's how CBC decryption works: each plaintext block is recovered as P[i] = D_K(C[i]) XOR C[i-1], so corrupting one byte of the first ciphertext block garbles all 16 bytes of the first plaintext block and flips the corresponding single byte of the second block. But since you're displaying the invalid decryption as text, some of the bytes may be invisible: in your example, the output is 2 chars shorter, so clearly 2 chars of the first 17 aren't visible. That's the expected result for damage in (at the beginning of) the first block of a 2-block (or more) CBC ciphertext for a 16-byte-block cipher like AES; see the CBC diagram on Wikipedia to understand why.
If you want detection of damage to the ciphertext, don't use CBC mode, or at least don't use it alone. Common/standard solutions nowadays are:
add authentication, such as HMAC. (Intriguingly your code already uses the variable name hmac_code even though you don't do anything even remotely related to HMAC.)
use an authenticated-encryption mode, like GCM, which effectively combines encryption and (some type of) MAC internally (see the sketch after this list). Most authenticated modes today, including GCM, also support 'additional' or 'associated' data that is authenticated but not encrypted, and as a result are called AEAD, but you may or may not care about that.
use an error-propagating mode. These were popular decades ago, around the time of presidents Nixon, Ford, Carter, and Reagan, but are now mostly considered obsolete and are not directly supported by OpenSSL.
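As an illustration of the GCM route with the same library the question uses, a minimal OpenSSL EVP sketch might look like this (error checking omitted; the 12-byte IV and 16-byte tag sizes are the conventional GCM choices, not the only legal ones):

#include <openssl/evp.h>

/* Sketch: AES-128-GCM encryption. Returns the ciphertext length. */
int gcm_encrypt(const unsigned char *key, const unsigned char *iv /* 12 bytes */,
                const unsigned char *aad, int aad_len,
                const unsigned char *pt, int pt_len,
                unsigned char *ct, unsigned char tag[16])
{
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    int len, ct_len;

    EVP_EncryptInit_ex(ctx, EVP_aes_128_gcm(), NULL, NULL, NULL);
    EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_SET_IVLEN, 12, NULL);
    EVP_EncryptInit_ex(ctx, NULL, NULL, key, iv);
    EVP_EncryptUpdate(ctx, NULL, &len, aad, aad_len);  /* AAD: authenticated, not encrypted */
    EVP_EncryptUpdate(ctx, ct, &len, pt, pt_len);
    ct_len = len;
    EVP_EncryptFinal_ex(ctx, ct + len, &len);          /* GCM adds no padding */
    ct_len += len;
    EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_GET_TAG, 16, tag);
    EVP_CIPHER_CTX_free(ctx);
    return ct_len;
}

The decrypt side sets the expected tag with EVP_CTRL_GCM_SET_TAG before EVP_DecryptFinal_ex, which then fails if the ciphertext or AAD was modified -- exactly the tamper detection missing from plain CBC.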
Also, BTW, your code is completely pants for values longer than 1024 bytes, which your test clearly doesn't exercise. First of all you don't need to break up such a value into chunks at all, but if for some reason you want to, the method you implemented is wrong.
Plus, the IV for CBC should be different and unpredictable every time; using a hardcoded value like this exposes you to two different classes of attacks. In general, the advice you will get on the Stack sites actually related to security is 'don't roll your own'. Cryptographic code written by people who don't know what they're doing, even if/when it produces the correct output, is usually insecure. This is the major difference between crypto/security software and other software: you can easily see whether your editor or spreadsheet or database produces correct or incorrect output, and as long as the output is correct that's usually all you need; but you can't tell by looking at the output of an encryption/decryption program whether it is secure or not.

Encrypt bytes in a C struct, using OpenSSL RC4

TL;DR...
I need to encrypt an in-RAM C struct, byte for byte, using the OpenSSL/EVP RC4 stream cipher. How do I use EVP (e.g. EVP_CipherUpdate) to accomplish the actual encryption of the bytes in the struct?
The Details...
I have an ISAM/BTree database that needs its records/data encrypted. Don't worry about that; just know that each "record" is a C struct with many members (fields). This has been working for something like 15 years (don't ask; the codebase is from the K&R C days, I think). The ISAM overhead simply takes a bytestream (the struct with data) as an argument when writing the record... specifically, the ISAM's "write" function accepts a pointer to the data/structure.
Anyway, I'm trying to implement an OpenSSL stream cipher (RC4), via the EVP framework, that can just sit between my program and the ISAM, so as to simply hand the ISAM my encrypted bytes without it knowing or caring. I might add that I think the ISAM doesn't care about the structure of the data, or even that it's a struct... it just gets raw data, I believe.
My struct is like this simplified example (in reality there are many more varied members):
typedef struct {
    UCHAR flag;
    char field2[30];
    int qty;
} DB_REC;
How would I (if it's even possible) go about encrypting the entire structure, in place, byte for byte? I've even tried testing with simple strings, but I can't get that to work either.
I have another file called crypto.c (and .h) where I'm building my functions to encrypt and decrypt whatever I "pass" to them (might be a string, a struct, whatever - that's why my argument is a void pointer). For example:
void encrypt_db_rawData(void *data_to_encrypt, size_t data_size)
{
unsigned char buf_out[data_size];
int buf_out_byteCount = 0;
buf_out_byteCount = sizeof(buf_out);
EVP_CIPHER_CTX ctx; //declare an EVP context
int keyBytes = 16; //size of RC4 key in bytes
/* ... my_keystr is acquired from a file elsewhere ... */
/* ... my_iv is a global defined elsewhere ... */
EVP_CIPHER_CTX_init(&ctx);
EVP_CipherInit_ex(&ctx, EVP_rc4(), NULL, NULL, NULL, 1);
EVP_CIPHER_CTX_set_key_length(&ctx, keyBytes);
EVP_CipherInit_ex(&ctx, NULL, NULL, my_keystr, my_iv, 1);
int byte_i;
for( byte_i = 0; byte_i < sizeof(data_to_encrypt); i++ )
{
EVP_CipherUpdate(&ctx, buf_out, &buf_out_byteCount, data_to_encrypt[byte_i], 1);
}
*data_to_encrypt = buf_out; //overwrite the original data with its encrypted version
}
C is not my first language, especially C89 (or older, perhaps), and I'm in way over my head on this one. It got thrown into my lap due to some restructuring, so I appreciate any constructive help anyone can offer. I'm in pointer-HELL!
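For what it's worth, here is a minimal sketch of a working in-place version (my reconstruction, not the poster's code). RC4 is a stream cipher, so it takes no IV and produces exactly as many output bytes as input bytes, which means one EVP_CipherUpdate over the raw bytes of the struct is enough. (RC4 is considered broken these days; it's shown here only because it's what the existing database uses.)

#include <openssl/evp.h>

/* Encrypts or decrypts (RC4 is symmetric) data_size bytes in place.
   Returns 1 on success, 0 on failure. */
int rc4_crypt_inplace(void *data, size_t data_size,
                      const unsigned char *key, int key_len)
{
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    unsigned char *bytes = (unsigned char *)data;
    int outlen = 0, ok = 0;

    if (ctx != NULL
        && EVP_CipherInit_ex(ctx, EVP_rc4(), NULL, NULL, NULL, 1)
        && EVP_CIPHER_CTX_set_key_length(ctx, key_len)
        && EVP_CipherInit_ex(ctx, NULL, NULL, key, NULL, 1)   /* no IV for RC4 */
        /* stream ciphers allow in == out, so this overwrites the struct */
        && EVP_CipherUpdate(ctx, bytes, &outlen, bytes, (int)data_size))
        ok = (outlen == (int)data_size);

    EVP_CIPHER_CTX_free(ctx);
    return ok;
}

Usage would be something like rc4_crypt_inplace(&rec, sizeof(DB_REC), my_keystr, 16); calling it a second time with the same key decrypts.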

Different encryption results between platforms, using OpenSSL

I'm working on a piece of cross-platform (Windows and Mac OS X) code in C that needs to encrypt / decrypt blobs using AES-256 with CBC and a blocksize of 128 bits. Among various libraries and APIs I've chosen OpenSSL.
This piece of code will then upload the blob using a multipart-form PUT to a server which then decrypts it using the same settings in .NET's crypto framework (Aes, CryptoStream, etc...).
The problem I'm facing is that the server decryption works fine when the local encryption is done on Windows, but it fails when the encryption is done on Mac OS X - the server throws a "Padding is invalid and cannot be removed" exception.
I've looked at this from many perspectives:
I verified that the transport is correct - the byte array received by the server's decrypt method is exactly the same as the one sent, from both Mac OS X and Windows
The actual content of the encrypted blob, for the same key, is different between Windows and Mac OS X. I tested this using a hardcoded key and ran this patch on Windows and Mac OS X for the same blob
I'm sure the padding is correct, since it is taken care of by OpenSSL and since the same code works on Windows. Even so, I tried implementing the padding scheme as it is in Microsoft's reference source for .NET, but still no go
I verified that the IV is the same for Windows and Mac OS X (I thought maybe there was a problem with some of the special characters such as ETB that appear in the IV, but there wasn't)
I've tried LibreSSL and mbedtls, with no positive results. In mbedtls I also had to implement padding because, as far as I know, padding is the responsibility of the API's user
I've been at this problem for almost two weeks now and I'm starting to pull my (ever scarce) hair out
As a frame of reference, I'll post the C client's code for encrypting and the server's C# code for decrypting. Some minor details on the server side will be omitted (they don't interfere with the crypto code).
Client:
/*++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++*/
void
__setup_aes(EVP_CIPHER_CTX *ctx, const char *key, qvr_bool encrypt)
{
static const char *iv = ""; /* for security reasons, the actual IV is omitted... */
if (encrypt)
EVP_EncryptInit(ctx, EVP_aes_256_cbc(), key, iv);
else
EVP_DecryptInit(ctx, EVP_aes_256_cbc(), key, iv);
}
/*++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++*/
void
__encrypt(void *buf,
size_t buflen,
const char *key,
unsigned char **outbuf,
size_t *outlen)
{
EVP_CIPHER_CTX ctx;
int blocklen = 0;
int finallen = 0;
int remainder = 0;
__setup_aes(&ctx, key, QVR_TRUE);
EVP_CIPHER *c = ctx.cipher;
blocklen = EVP_CIPHER_CTX_block_size(&ctx);
//*outbuf = (unsigned char *) malloc((buflen + blocklen - 1) / blocklen * blocklen);
remainder = buflen % blocklen;
*outlen = remainder == 0 ? buflen : buflen + blocklen - remainder;
*outbuf = (unsigned char *) calloc(*outlen, sizeof(unsigned char));
EVP_EncryptUpdate(&ctx, *outbuf, outlen, buf, buflen);
EVP_EncryptFinal_ex(&ctx, *outbuf + *outlen, &finallen);
EVP_CIPHER_CTX_cleanup(&ctx);
//*outlen += finallen;
}
Server:
static Byte[] Decrypt(byte[] input, byte[] key, byte[] iv)
{
try
{
// Check arguments.
if (input == null || input.Length <= 0)
throw new ArgumentNullException("input");
if (key == null || key.Length <= 0)
throw new ArgumentNullException("key");
if (iv == null || iv.Length <= 0)
throw new ArgumentNullException("iv");
byte[] unprotected;
using (var encryptor = Aes.Create())
{
encryptor.Key = key;
encryptor.IV = iv;
using (var msInput = new MemoryStream(input))
{
msInput.Position = 0;
using (
var cs = new CryptoStream(msInput, encryptor.CreateDecryptor(),
CryptoStreamMode.Read))
using (var data = new BinaryReader(cs))
using (var outStream = new MemoryStream())
{
byte[] buf = new byte[2048];
int bytes = 0;
while ((bytes = data.Read(buf, 0, buf.Length)) != 0)
outStream.Write(buf, 0, bytes);
return outStream.ToArray();
}
}
}
}
catch (Exception ex)
{
throw ex;
}
}
Does anyone have any clue as to why this could possibly be happening? For reference, this is the .NET method from Microsoft's reference source .sln that (I think) does the decryption: https://gist.github.com/Metaluim/fcf9a4f1012fdeb2a44f#file-rijndaelmanagedtransform-cs
OpenSSL version differences are messy. First, I suggest you explicitly force and verify the key lengths, keys, IVs, and encryption modes on both sides; I don't see that in the code. Then I would suggest you decrypt on the server side without padding. This will always succeed, and then you can inspect the last block to see whether it is what you expect.
Do this with both the Windows-encrypted and the Mac OS X-encrypted variants and you will see a difference, most likely in the padding.
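If you do that no-padding check locally with OpenSSL (in .NET the equivalent is setting Aes.Padding to PaddingMode.None), a sketch might be:

#include <openssl/evp.h>

/* Decrypt CBC ciphertext WITHOUT stripping padding, so the raw final
   block, including the padding bytes, can be inspected. */
int decrypt_raw(const unsigned char *key, const unsigned char *iv,
                const unsigned char *in, int inlen, unsigned char *out)
{
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    int outlen = 0, finallen = 0;

    EVP_DecryptInit_ex(ctx, EVP_aes_256_cbc(), NULL, key, iv);
    EVP_CIPHER_CTX_set_padding(ctx, 0);  /* keep the padding bytes in the output */
    EVP_DecryptUpdate(ctx, out, &outlen, in, inlen);
    EVP_DecryptFinal_ex(ctx, out + outlen, &finallen);
    EVP_CIPHER_CTX_free(ctx);
    return outlen + finallen;            /* equals inlen for well-formed input */
}

A valid PKCS#7-padded plaintext must end in N bytes each of value N (1 <= N <= 16); whichever side produces a final block that doesn't match this tells you where the mismatch is.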
The outlen handling in the C code looks odd. Encrypting a 16-byte plaintext with PKCS#7 padding produces 32 bytes of ciphertext, but you only allocate a 16-byte buffer. This will not work; EVP_EncryptFinal_ex will write out of bounds. Maybe it works just by chance on Windows because of a more generous memory layout and fails on Mac OS X.
The AES padding scheme changed between OpenSSL versions 0.9.8* and 1.0.1* (at least between 0.9.8r and 1.0.1j). If two of your modules use these different versions of OpenSSL, this could be the reason for your problem. To verify, first check the OpenSSL versions; if you hit the described case, consider aligning the padding scheme on both sides.

C Libmcrypt cannot encrypt/decrypt successfully

I am working with libmcrypt in C, attempting to implement a simple test of encryption and decryption using rijndael-256 as the algorithm of choice. I have mirrored this test implementation pretty closely on the man page examples, with rijndael in place of their chosen algorithms. When compiled with gcc -o encryption_test main.c -lmcrypt, the following source code produces output similar to:
The encrypted message buffer contains j��A��8 �qj��%`��jh���=ZЁ�j
The original string was ��m"�C��D�����Y�G�v6��s��zh�
Obviously the decryption part is failing, but as it is just a single function call, it leads me to believe the encryption is not behaving correctly either. I have several questions for the libmcrypt gurus out there, if you could point me in the right direction.
First, what is causing this code to produce this broken output?
Second, when dealing with mandatory fixed sizes such as the key size and block size: for example, with a 256-bit key, does the function expect 32 bytes of key plus a trailing null byte, 31 bytes of key plus a trailing null byte, or 32 bytes of key with the 33rd byte being irrelevant? The same question holds for the block size.
Lastly, one of the examples I noted used mhash to generate a hash of the key text to supply to the encryption call. This is of course preferable, but it was commented out, and linking in mhash seems to fail. What is the accepted way of handling this type of key conversion when working with libmcrypt? I have chosen to leave any such complexity out so as to avoid further complicating already-broken code, but I would like to incorporate it into the final design. Below is the source code in question:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <time.h>
#include <mcrypt.h>
int main(int argc, char *argv[])
{
MCRYPT mfd;
char *key;
char *plaintext;
char *IV;
unsigned char *message, *buffered_message, *ptr;
int i, blocks, key_size = 32, block_size = 32;
message = "Test Message";
/** Buffer message for encryption */
blocks = (int) (strlen(message) / block_size) + 1;
buffered_message = calloc(1, (blocks * block_size));
key = calloc(1, key_size);
strcpy(key, "&*GHLKPK7G1SD4CF%6HJ0(IV#X6f0(PK");
mfd = mcrypt_module_open(MCRYPT_RIJNDAEL_256, NULL, "cbc", NULL);
if(mfd == MCRYPT_FAILED)
{
printf("Mcrypt module open failed.\n");
return 1;
}
/** Generate random IV */
srand(time(0));
IV = malloc(mcrypt_enc_get_iv_size(mfd));
for(i = 0; i < mcrypt_enc_get_iv_size(mfd); i++)
{
IV[i] = rand();
}
/** Initialize cipher with key and IV */
i = mcrypt_generic_init(mfd, key, key_size, IV);
if(i < 0)
{
mcrypt_perror(i);
return 1;
}
strncpy(buffered_message, message, strlen(message));
mcrypt_generic(mfd, buffered_message, block_size);
printf("The encrypted message buffer contains %s\n", buffered_message);
mdecrypt_generic(mfd, buffered_message, block_size);
printf("The original string was %s\n", buffered_message);
mcrypt_generic_deinit(mfd);
mcrypt_module_close(mfd);
return 0;
}
You need to re-initialize the descriptor mfd for decryption; you cannot use the same descriptor for both encryption and decryption.
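In terms of the code above, that means deinitializing and re-initializing between the encrypt and decrypt calls, with the same key and IV (CBC decryption needs the IV that was used for encryption):

mcrypt_generic(mfd, buffered_message, block_size);
printf("The encrypted message buffer contains %s\n", buffered_message);

/* reset the cipher state before decrypting */
mcrypt_generic_deinit(mfd);
mcrypt_generic_init(mfd, key, key_size, IV);

mdecrypt_generic(mfd, buffered_message, block_size);
printf("The original string was %s\n", buffered_message);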
