Using a SHA1 hash with Microsoft CAPI - C

I have a SHA1 hash and I need to sign it. The CryptSignHash() function requires an HCRYPTHASH handle for signing. I create one and, since I already have the actual hash value, set it directly:
CryptCreateHash(cryptoProvider, CALG_SHA1, 0, 0, &hash);
CryptSetHashParam(hash, HP_HASHVAL, hashBytes, 0);
hashBytes is an array of 20 bytes.
However, the problem is that the signature produced from this HCRYPTHASH handle is incorrect. I traced the problem down to the fact that CAPI doesn't actually use all 20 bytes from my hashBytes array. For some reason it thinks that SHA1 is only 4 bytes.
To verify this I wrote this small program:
HCRYPTPROV cryptoProvider;
CryptAcquireContext(&cryptoProvider, NULL, NULL, PROV_RSA_FULL, 0);
HCRYPTHASH hash;
HCRYPTKEY keyForHash;
CryptCreateHash(cryptoProvider, CALG_SHA1, keyForHash, 0, &hash);
DWORD hashLength;
CryptGetHashParam(hash, HP_HASHSIZE, NULL, &hashLength, 0);
printf("hashLength: %d\n", hashLength);
And this prints out hashLength: 4 !
Can anyone explain what I am doing wrong, or why Microsoft CAPI thinks that SHA1 is 4 bytes (32 bits) instead of 20 bytes (160 bits)?

There is a small error in your code. Instead of
DWORD hashLength;
CryptGetHashParam(hash, HP_HASHSIZE, NULL, &hashLength, 0);
printf("hashLength: %d\n", hashLength);
you should use
DWORD hashLength, hashSize;
hashLength = sizeof(DWORD);
CryptGetHashParam(hash, HP_HASHSIZE, (PBYTE)&hashSize, &hashLength, 0);
printf("hashSize: %d\n", hashSize);
then you will receive 20 as expected.
Using CryptSignHash after CryptSetHashParam must also work. See the remark at the end of the description of the CryptSetHashParam function at http://msdn.microsoft.com/en-us/library/aa380270(VS.85).aspx. I suppose you just made the same kind of error as with CryptGetHashParam(..., HP_HASHSIZE, ...) when retrieving the result of signing. Compare your code with the code in the description of the CryptSignHash function at http://msdn.microsoft.com/en-us/library/aa380280(VS.85).aspx.
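For reference, a minimal sketch of that two-call pattern, reusing cryptoProvider and the 20-byte hashBytes from the question; AT_SIGNATURE is an assumption here (your key container may use AT_KEYEXCHANGE), and error checking is omitted:
HCRYPTHASH hash;
DWORD sigLen = 0;
BYTE *signature;

CryptCreateHash(cryptoProvider, CALG_SHA1, 0, 0, &hash);
CryptSetHashParam(hash, HP_HASHVAL, hashBytes, 0);

/* first call with a NULL buffer only reports the required signature length,
   the second call writes the signature itself */
CryptSignHash(hash, AT_SIGNATURE, NULL, 0, NULL, &sigLen);
signature = (BYTE*)malloc(sigLen);
CryptSignHash(hash, AT_SIGNATURE, NULL, 0, signature, &sigLen);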

I don't think you can use CryptCreateHash in that way. From MSDN:
"The CryptCreateHash function
initiates the hashing of a stream of
data."
In other words, it looks like you can't instantiate a hash context in any way other than empty (and then by having it hash your input data).
How do you have the hash at present - a byte array? If so you probably just want to sign that array; I'd look into CryptSignMessage or CryptSignMessageWithKey as likely to do the job.
(I'm guessing, but what you're seeing may be explained by the output hash length not being set up until after the hash context has done some work.)


How to limit Berkeley DB's buffer?

I need to limit the maximum size of Berkeley DB's buffer. I've tried using the code below, but the buffer keeps growing.
DB_ENV *db_env;
u_int32_t env_flags;
char *DBNOMENV = "";
db_env_create(&db_env, 0);
db_env->set_cache_max(db_env, 0.1, 0);
env_flags = DB_CREATE | DB_INIT_MPOOL;
db_env->open(db_env, DBNOMENV, env_flags, 0);
DB *BDB_database;
db_create(&BDB_database, db_env, 0);
You want DB_ENV->set_cachesize (see set_cachesize in the Oracle docs).
What's the purpose of set_cache_max, when all it appears to do is limit the number you can specify with some other function call? You got me. There's probably some nuance here, but in practice, set_cache_max is only there to add to the confusion.
Note that both of these functions only accept integer arguments for the sizing. You'll need set_cachesize(db_env, 0, 100*1024*1024, 1); to do what you were trying to do with that 0.1.
"Number of caches" should be 1.

Problem with encryption & decryption of text with EVP_des_ofb(), OpenSSL, C

I need to encrypt and decrypt a text file with DES-OFB (libcrypto) using the OpenSSL library; the key and initialization vector are given in one binary file (key + IV). But after decryption via EVP_DecryptUpdate(), the decrypted text and the plaintext are not similar at all.
So I read the plain.txt file (8 bytes) and the 'keyandIV.bin' file. Then I took the first 8 bytes from the keyandIV buffer as the KEY for DES and the rest as the IV. So I have an 8-byte key and an 8-byte IV, and added '\0' at the end of both (do I need the '\0' here? Must the key length be 64 or 56 bits?).
This is my code for encryption with DES-OFB:
printf("ENCRYPTION:\n");
int howmany = 0, final1;
const EVP_CIPHER *CIPHER_TYPE = EVP_des_ofb();
EVP_CIPHER_CTX *ctx_encrypt = EVP_CIPHER_CTX_new();
EVP_CIPHER_CTX_init(ctx_encrypt);
EVP_EncryptInit(ctx_encrypt, CIPHER_TYPE, keybuf1, ivbuf1);
if(!EVP_EncryptUpdate(ctx_encrypt, ciphertextbuf1, &howmany, plaintextbuf1, plainlength1))return -1;
EVP_EncryptFinal_ex(ctx_encrypt, ciphertextbuf1 + howmany , &final1);
EVP_CIPHER_CTX_cleanup(ctx_encrypt);
Then I took the encrypted buffer and decrypted it like so:
printf("DECRYPTION:\n");
int final2;
EVP_CIPHER_CTX *ctx_decrypt = EVP_CIPHER_CTX_new();
EVP_CIPHER_CTX_init(ctx_decrypt);
EVP_DecryptInit(ctx_decrypt, CIPHER_TYPE, keybuf1, ivbuf1);
if(!EVP_DecryptUpdate(ctx_decrypt, decryptedtext, &howmany, ciphertextbuf1, strlen(ciphertextbuf1))) return -1;
if(!EVP_EncryptFinal_ex(ctx_decrypt, decryptedtext + howmany, &final2)) return -1;
EVP_CIPHER_CTX_cleanup(ctx_decrypt);
I definitely have an understanding problem with DES. Maybe I did something wrong by creating the key and IV from one file. I have seen plenty of examples but I still don't understand what I did wrong in my program.
The decryption sequence is EVP_DecryptInit_ex(), EVP_DecryptUpdate() and EVP_DecryptFinal_ex(), mirroring EVP_EncryptInit_ex(), EVP_EncryptUpdate() and EVP_EncryptFinal_ex(). In your code you are calling EVP_EncryptFinal_ex() to decrypt, so obviously that's not going to work. Also, if something went wrong during an operation, an error code should have been printed out to stderr.
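As a rough sketch, the corrected decryption half could look like this (reusing the question's buffers; cipherlength1 is a stand-in for the real ciphertext byte count, since strlen() must not be used on binary data):
printf("DECRYPTION:\n");
int howmany = 0, final2;
EVP_CIPHER_CTX *ctx_decrypt = EVP_CIPHER_CTX_new();
EVP_DecryptInit_ex(ctx_decrypt, CIPHER_TYPE, NULL, keybuf1, ivbuf1);
/* pass the actual ciphertext length, not strlen() of binary data */
if(!EVP_DecryptUpdate(ctx_decrypt, decryptedtext, &howmany, ciphertextbuf1, cipherlength1)) return -1;
/* DecryptFinal, not EncryptFinal, and write after the bytes already produced */
if(!EVP_DecryptFinal_ex(ctx_decrypt, decryptedtext + howmany, &final2)) return -1;
EVP_CIPHER_CTX_free(ctx_decrypt);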

CryptoApi to CommonCrypto

I have cryptographic code used on the Windows platform, which uses the Crypto API functions, and I need to convert this to using Common Crypto on OS X.
Essentially the original code is this, with error checking removed for brevity: -
CryptAcquireContext(&m_hProv, NULL, NULL, PROV_RSA_FULL, CRYPT_VERIFYCONTEXT);
CryptCreateHash(m_hProv, CALG_MD5 ,0, 0, &hHash);
CryptHashData(hHash,(LPBYTE)pszInputData, lstrlen(pszInputData)*sizeof(TCHAR), 0);
CryptDeriveKey(m_hProv, CALG_RC4, hHash, CRYPT_EXPORTABLE | 0x00280000, &m_hKey);
CryptDecrypt(m_hKey, 0, bFinal, 0, pData, pdwDataSize);
As far as I understand, this is what's happening: -
CryptAcquireContext - Get an object to handle the cryptography
CryptCreateHash - Create an MD5 hashing object
CryptHashData - Hash the input data with MD5
CryptDeriveKey, CryptDecrypt - Decode pData with RC4, using the key m_hKey
The size of pszInputData is 12 bytes and the output array of the MD5 hashed object is the same on both platforms.
To decode with RC4, I'm doing the following with Common Crypto: -
CCCryptorRef cryptor = NULL;
CCCryptorCreate(kCCDecrypt, kCCAlgorithmRC4, 0,
(void*)m_hKey.data(), m_hKey.length(), NULL, &cryptor);
char outBuffer[12];
size_t outBytes;
CCCryptorUpdate(cryptor, (void*)pData, *pdwDataSize, outBuffer, 12, &outBytes);
Testing the output (outBuffer array) from Common Crypto with an online RC4 decoder matches, so this is decoding correctly.
However, the final output from the Windows code in pData doesn't match the RC4 decoded in Common Crypto.
Are there some steps I'm missing or not understanding with the Windows Crypto API calls here; why do the outputs differ?
(Please note, I'm not looking for comments on the security or flaws in using RC4)
The problem turns out to be understanding how CryptDeriveKey uses the number of specified bits with the RC4 Decryption.
CryptDeriveKey(m_hProv, CALG_RC4, hHash, CRYPT_EXPORTABLE | 0x00280000, &m_hKey);
Here it's stating that we want a 40 bit key (0x00280000 = 40 << 16).
However, when calling CryptDecrypt, Windows actually uses 16 bytes for the key, taking the first 40 bits and setting the rest of the array to zero.
So, passing a 16 byte key, with just the first 40 bits set, to the CCCryptorCreate function generates the matching output to Windows.
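In code form, the idea is roughly this (derivedKeyBits is a placeholder for the 5 bytes of actual key material produced by CryptDeriveKey on the Windows side):
unsigned char rc4Key[16] = {0};        /* 128-bit key buffer, zero-filled */
memcpy(rc4Key, derivedKeyBits, 5);     /* only the first 40 bits carry key material */

CCCryptorRef cryptor = NULL;
CCCryptorCreate(kCCDecrypt, kCCAlgorithmRC4, 0,
                rc4Key, sizeof(rc4Key),   /* pass all 16 bytes, not just 5 */
                NULL, &cryptor);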
Try using the API described in OpenSSL (EVP_BytesToKey - a password-based key derivation routine).
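For what it's worth, the basic call looks like this (MD5, a single round, no salt, fed with the 12-byte input from the question; note this alone does not reproduce the 40-bit key layout described in the answer above):
unsigned char key[EVP_MAX_KEY_LENGTH], iv[EVP_MAX_IV_LENGTH];
/* derive RC4 key material from the same input data with OpenSSL */
EVP_BytesToKey(EVP_rc4(), EVP_md5(), NULL,
               (const unsigned char *)pszInputData, 12, 1, key, iv);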

8 bytes missing on EVP_DecryptFinal

this is my first question so please tell me if I do something wrong :).
My problem is I use
EVP_DecryptInit(&ctx1, EVP_des_ecb(), tmpkey, NULL);
EVP_DecryptUpdate(&ctx1, keysigout, &outlu ,keysigin, keysigfilelength);
EVP_DecryptFinal(&ctx1, keysigout, &outlf);
printf("DECLEN:%i",outlu + outlf);
to decrypt a binary file. The file is 248 bytes long but the printf only tells me EVP decrypted 240 bytes. keysigfilelength is 248 and should tell the update that 248 bytes need to be decrypted.
I don't understand why this doesn't work and would be happy if you can enlighten me.
Edit:
I just encrypted a file manually with the command
openssl enc -e -des-ecb -in test.txt -out test.bin -K 00a82b209cbeaf00
and it grew by 8 bytes :O. I still don't know where they come from but I don't think the general error I have in my program is caused by this.
The context of this whole problem is an information security course at my university. We got similar tasks with different algorithms, but even someone who had done his program successfully couldn't figure out where the problem in my program is.
Is it ok to post my whole program for you?
I hope it's fine to answer my own question.
EVP_DecryptUpdate(&ctx1, keysigout, &outlu ,keysigin, keysigfilelength);
EVP_DecryptFinal(&ctx1, keysigout + outlu, &outlf);
The problem was the missing outlu: without it, DecryptFinal wrote its output back at the start of the buffer instead of after the already-decrypted data. When I added outlu I got 7 bytes in outlf, and it worked.
For future reference I add the whole function below.
It expects the key and iv to be one block of data.
int decrypt(const EVP_CIPHER *cipher, unsigned char *key, unsigned char *encryptedData, int encryptedLength, unsigned int *length, unsigned char **decryptedData)
{
    int decryptedLength = 0, lastDecryptLength = 0, ret;
    unsigned char *iv = NULL;
    EVP_CIPHER_CTX *cryptCtx = EVP_CIPHER_CTX_new();

    *decryptedData = malloc(encryptedLength * sizeof(char));

    /* key and IV are stored back to back, so the IV starts right after the key */
    if (EVP_CIPHER_iv_length(cipher) != 0)
        iv = key + EVP_CIPHER_key_length(cipher);

    EVP_DecryptInit_ex(cryptCtx, cipher, NULL, key, iv);
    EVP_DecryptUpdate(cryptCtx, *decryptedData, &decryptedLength, encryptedData, encryptedLength);
    /* note the + decryptedLength offset: the final block goes after the bytes already written */
    ret = EVP_DecryptFinal_ex(cryptCtx, *decryptedData + decryptedLength, &lastDecryptLength);

    *length = decryptedLength + lastDecryptLength;
    EVP_CIPHER_CTX_free(cryptCtx);
    return ret;
}
Because block ciphers only really want to work on an input that is a multiple of their block size, the input is normally padded to meet this requirement.
The default for many programs (including openssl enc) is to use PKCS #5 padding.
If the plaintext is not a multiple of 8 bytes then padding bytes are added so that it is. If it is already a multiple of 8 bytes then 8 bytes of padding are added. Thus it is entirely normal for the encrypted data to be longer than the plaintext.
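In other words, for DES's 8-byte block the ciphertext is always the next multiple of 8 strictly greater than the plaintext length, something like this small sketch:
/* PKCS #5 padded length for an 8-byte block cipher: 1..8 padding bytes are always added,
   e.g. 247 -> 248 and 248 -> 256 */
size_t padded_length(size_t plaintext_len)
{
    return (plaintext_len / 8 + 1) * 8;
}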

Hot Patching A Function

I'm trying to hot patch an exe in memory; the source is available, but I'm doing this for learning purposes (so please no comments suggesting I modify the original source or use Detours or any other libs).
Below are the functions I am having problems with.
vm_t* VM_Create( const char *module, intptr_t (*systemCalls)(intptr_t *), vmInterpret_t interpret )
{
    MessageBox(NULL, L"Oh snap! We hooked VM_Create!", L"Success!", MB_OK);
    return NULL;
}

void Hook_VM_Create(void)
{
    DWORD dwBackup;
    VirtualProtect((void*)0x00477C3E, 7, PAGE_EXECUTE_READWRITE, &dwBackup);

    //Patch the original VM_Create to jump to our detoured one.
    BYTE *jmp = (BYTE*)malloc(5);
    uint32_t offset = 0x00477C3E - (uint32_t)&VM_Create; //find the offset of the original function from our own
    memset((void*)jmp, 0xE9, 1);
    memcpy((void*)(jmp+1), &offset, sizeof(offset));
    memcpy((void*)0x00477C3E, jmp, 5);
    free(jmp);
}
I have a function VM_Create that I want to be called instead of the original function. I have not yet written a trampoline so it crashes (as expected). However, the message box telling me that I have detoured the original VM_Create to my own does not pop up. I believe the problem is the way I'm overwriting the original instructions.
I can see a few issues.
I assume that 0x00477C3E is the address of the original VM_Create function. You really should not hard code this. Use &VM_Create instead. Of course this will mean that you need to use a different name for your replacement function.
The offset is calculated incorrectly. You have the sign wrong. What's more the offset is applied to the instruction pointer at the end of the instruction and not the beginning. So you need to shift it by 5 (the size of the instruction). The offset should be a signed integer also.
Ideally, if you take into account my first point the code would look like this:
int32_t offset = (int32_t)&New_VM_Create - ((int32_t)&VM_Create+5);
Thanks to Hans Passant for fixing my own silly sign error in the original version!
If you are working on a 64 bit machine you need to do your arithmetic in 64 bits and, once you have calculated the offset, truncate it to a 32 bit offset.
Another nuance is that you should reset the memory to being read-only after having written the new JMP instruction, and call FlushInstructionCache.
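Putting those points together, a corrected 32-bit version might look roughly like this (a sketch, not a drop-in fix: New_VM_Create is the renamed replacement, VM_Create the original function being patched, and the usual <windows.h>/<stdint.h> headers are assumed):
void Hook_VM_Create(void)
{
    BYTE patch[5];
    DWORD oldProtect;
    /* the rel32 offset is measured from the end of the 5-byte JMP instruction */
    int32_t offset = (int32_t)&New_VM_Create - ((int32_t)&VM_Create + 5);

    patch[0] = 0xE9;                              /* JMP rel32 opcode */
    memcpy(patch + 1, &offset, sizeof(offset));

    VirtualProtect((void*)&VM_Create, sizeof(patch), PAGE_EXECUTE_READWRITE, &oldProtect);
    memcpy((void*)&VM_Create, patch, sizeof(patch));
    VirtualProtect((void*)&VM_Create, sizeof(patch), oldProtect, &oldProtect);
    FlushInstructionCache(GetCurrentProcess(), (void*)&VM_Create, sizeof(patch));
}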
