CryptoAPI to CommonCrypto (C)

I have cryptographic code used in the Windows platform, which uses the Crypto API functions and need to convert this to using Common Crypto on OS X.
Essentially the original code is this, with error checking removed for brevity: -
CryptAcquireContext(&m_hProv, NULL, NULL, PROV_RSA_FULL, CRYPT_VERIFYCONTEXT);
CryptCreateHash(m_hProv, CALG_MD5 ,0, 0, &hHash);
CryptHashData(hHash,(LPBYTE)pszInputData, lstrlen(pszInputData)*sizeof(TCHAR), 0);
CryptDeriveKey(m_hProv, CALG_RC4, hHash, CRYPT_EXPORTABLE | 0x00280000, &m_hKey);
CryptDecrypt(m_hKey, 0, bFinal, 0, pData, pdwDataSize);
As far as I understand, this is what's happening: -
CryptAcquireContext - Get an object to handle the cryptography
CryptCreateHash - Create an MD5 hashing object
CryptHashData - Hash the input data with MD5
CryptDeriveKey, CryptDecrypt - Decode pData with RC4, using the key m_hKey
The size of pszInputData is 12 bytes and the output array of the MD5 hashed object is the same on both platforms.
To decode with RC4, I'm doing the following with Common Crypto: -
CCCryptorRef cryptor = NULL;
CCCryptorCreate(kCCDecrypt, kCCAlgorithmRC4, 0,
(void*)m_hKey.data(), m_hKey.length(), NULL, &cryptor);
char outBuffer[12];
size_t outBytes;
CCCryptorUpdate(cryptor, (void*)pData, *pdwDataSize, outBuffer, 12, &outBytes);
Testing the output (outBuffer array) from Common Crypto with an online RC4 decoder matches, so this is decoding correctly.
However, the final output from the Windows code in pData doesn't match the RC4 decoded in Common Crypto.
Are there some steps I'm missing or not understanding with the Windows Crypto API calls here; why do the outputs differ?
(Please note, I'm not looking for comments on the security or flaws in using RC4)

The problem turns out to be understanding how CryptDeriveKey uses the number of specified bits with the RC4 Decryption.
CryptDeriveKey(m_hProv, CALG_RC4, hHash, CRYPT_EXPORTABLE | 0x00280000, &m_hKey);
Here it's stating that we want a 40 bit key (0x00280000 = 40 << 16).
However, when calling CryptDecrypt, Windows actually uses 16 bytes for the key, taking the first 40 bits and setting the rest of the array to zero.
So, passing a 16 byte key, with just the first 40 bits set, to the CCCryptorCreate function generates the matching output to Windows.
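For illustration, here is a minimal CommonCrypto sketch of that fix (a sketch only, reusing pszInputData, pData and pdwDataSize from the question, and assuming the MD5 is taken over the same 12 input bytes as on Windows):
#include <CommonCrypto/CommonDigest.h>
#include <CommonCrypto/CommonCryptor.h>
#include <string.h>
unsigned char md5[CC_MD5_DIGEST_LENGTH];
CC_MD5(pszInputData, (CC_LONG)strlen(pszInputData), md5);
unsigned char rc4Key[16] = {0};   /* 16-byte key buffer, zero-filled */
memcpy(rc4Key, md5, 5);           /* only the first 40 bits (5 bytes) are significant */
CCCryptorRef cryptor = NULL;
CCCryptorCreate(kCCDecrypt, kCCAlgorithmRC4, 0, rc4Key, sizeof(rc4Key), NULL, &cryptor);
char outBuffer[12];
size_t outBytes = 0;
CCCryptorUpdate(cryptor, pData, *pdwDataSize, outBuffer, sizeof(outBuffer), &outBytes);
CCCryptorRelease(cryptor);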

Try using the EVP_BytesToKey API from OpenSSL (a password-based key derivation routine).
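As a rough sketch of that suggestion (not the approach used in the answer above): EVP_BytesToKey() with MD5, no salt and a single iteration yields MD5(password) as the key material, which is close to what the CryptDeriveKey call produces before the 40-bit truncation:
#include <openssl/evp.h>
unsigned char key[EVP_MAX_KEY_LENGTH], iv[EVP_MAX_IV_LENGTH];
int keyLen = EVP_BytesToKey(EVP_rc4(), EVP_md5(), NULL,
                            (const unsigned char *)pszInputData, 12,
                            1, key, iv);
/* keyLen is the RC4 key length OpenSSL would use (16 bytes by default). */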

Related

Problem with Encryption & Decryption of text with EVP_des_ofb(), OpenSSL, C

I need to encrypt and decrypt a txt file with DES-OFB (libcrypto) using the OpenSSL library; the key and init vector are given in one bin file (key+IV). But after decryption via EVP_DecryptUpdate(), the decrypted text and the plain text are not similar at all.
So I read the plain.txt file (8 bytes) and the 'keyandIV.bin' file. Then I took the first 8 bytes from the keyandIV buffer as the KEY for DES and the rest as the IV. So I have an 8-byte key and an 8-byte IV, and added '\0' at the end of both (do I need '\0' here? Must the key length be 64 or 56 bits?).
This is my code for encryption with DES-OFB:
printf("ENCRYPTION:\n");
int howmany = 0, final1;
const EVP_CIPHER *CIPHER_TYPE = EVP_des_ofb();
EVP_CIPHER_CTX *ctx_encrypt = EVP_CIPHER_CTX_new();
EVP_CIPHER_CTX_init(ctx_encrypt);
EVP_EncryptInit(ctx_encrypt, CIPHER_TYPE, keybuf1, ivbuf1);
if(!EVP_EncryptUpdate(ctx_encrypt, ciphertextbuf1, &howmany, plaintextbuf1, plainlength1)) return -1;
EVP_EncryptFinal_ex(ctx_encrypt, ciphertextbuf1 + howmany , &final1);
EVP_CIPHER_CTX_cleanup(ctx_encrypt);
Then I took the encrypted buffer and decrypted it like this:
printf("DECRYPTION:\n");
int final2;
EVP_CIPHER_CTX *ctx_decrypt = EVP_CIPHER_CTX_new();
EVP_CIPHER_CTX_init(ctx_decrypt);
EVP_DecryptInit(ctx_decrypt, CIPHER_TYPE, keybuf1, ivbuf1);
if(!EVP_DecryptUpdate(ctx_decrypt, decryptedtext, &howmany, ciphertextbuf1, strlen(ciphertextbuf1))) return -1;
if(!EVP_EncryptFinal_ex(ctx_decrypt, decryptedtext + howmany, &final2)) return -1;
EVP_CIPHER_CTX_cleanup(ctx_decrypt);
I definitely have an understanding problem with DES. Maybe I did something wrong by creating the key and IV from one file. I have seen plenty of examples but I still don't understand what I did wrong in my program.
The decryption sequence is EVP_DecryptInit_ex(), EVP_DecryptUpdate() and EVP_DecryptFinal_ex(). This mirrors the encryption sequence EVP_EncryptInit_ex(), EVP_EncryptUpdate() and EVP_EncryptFinal_ex(). In your code you are calling EVP_EncryptFinal_ex() to decrypt, so obviously that's not going to work. Also, if something had gone wrong during an operation, an error code should have been printed to stderr.
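A minimal sketch of that sequence, reusing the buffer names from the question and assuming a separate ciphertext length variable cipherlen1 (strlen() must not be used on binary ciphertext):
int howmany = 0, final2 = 0;
EVP_CIPHER_CTX *ctx_decrypt = EVP_CIPHER_CTX_new();
if (!EVP_DecryptInit_ex(ctx_decrypt, EVP_des_ofb(), NULL, keybuf1, ivbuf1)) return -1;
if (!EVP_DecryptUpdate(ctx_decrypt, decryptedtext, &howmany, ciphertextbuf1, cipherlen1)) return -1;
if (!EVP_DecryptFinal_ex(ctx_decrypt, decryptedtext + howmany, &final2)) return -1;
EVP_CIPHER_CTX_free(ctx_decrypt);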

AES Encryption - Key Generation with OpenSSL

As a reference and as continuation to the post:
how to use OpenSSL to decrypt Java AES-encrypted data?
I have the following questions.
I am using OpenSSL libs and programming in C for encrypting data in aes-cbc-128.
I am given any input binary data and I have to encrypt this.
I learned that Java has a CipherParameters interface to set the IV and KeyParameters too.
Is there a way to generate an IV and a key using OpenSSL? In short, how could one call OpenSSL's random generator from a C program for these purposes? Can any of you provide some docs/examples/links on this?
Thanks
An AES key, and an IV for symmetric encryption, are just bunches of random bytes. So any cryptographically strong random number generator will do the trick. OpenSSL provides such a random number generator (which itself feeds on whatever the operating system provides, e.g. CryptGenRandom() on Windows or /dev/random and /dev/urandom on Linux). The function is RAND_bytes(). So the code would look like this:
#include <openssl/rand.h>
/* ... */
unsigned char key[16], iv[16];
if (!RAND_bytes(key, sizeof key)) {
    /* OpenSSL reports a failure, act accordingly */
}
if (!RAND_bytes(iv, sizeof iv)) {
    /* OpenSSL reports a failure, act accordingly */
}
Assuming AES-128:
unsigned char key[16];
RAND_bytes(key, sizeof(key));
unsigned char iv[16];
RAND_bytes(iv, sizeof(iv));
The random generator needs to be seeded before making either of those calls.
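As a small, hedged illustration of that point (on most platforms OpenSSL seeds itself from the operating system, so this is more of a defensive check than a required step):
#include <openssl/rand.h>
if (!RAND_status()) {   /* returns 1 if the PRNG already has enough entropy */
    RAND_poll();        /* ask OpenSSL to gather seed material from the OS  */
}
unsigned char key[16], iv[16];
if (!RAND_bytes(key, sizeof key) || !RAND_bytes(iv, sizeof iv)) {
    /* handle the error, e.g. via ERR_get_error() */
}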

How to calculate X.509 certificate's SHA-1 fingerprint?

I'm trying to implement an X.509 certificate generator from scratch (I know about the existing ones, but I need yet another one). What I cannot understand is how to calculate the SHA-1 (or any other) fingerprint of the certificate.
The RFC5280 says that the input to the signature function is the DER-encoded tbsCertificate field. Unfortunately, the hash that I calculate differs from the one produced by OpenSSL. Here's a step-by-step example.
Generate a certificate using OpenSSL's x509 tool (in a binary DER form, not the ASCII PEM)
Calculate its SHA-1 hash using openssl x509 -fingerprint
Extract the TBS field using dd (or anything else) and store it in a separate file; calculate its hash using the sha1sum utility
Now, the hashes I get at steps 2 and 3 are different. Can someone please give me a hint what I may be doing wrong?
Ok, so it turned out that the fingerprint calculated by OpenSSL is simply a hash over the whole certificate (in its DER binary encoding, not the ASCII PEM one!), not only the TBS part, as I thought.
For anyone who cares about calculating the digest that gets signed, it is done in a different way: the hash is calculated over the DER-encoded (again, not the PEM string) TBS part only, including its ASN.1 header (the tag 0x30 == ASN1_SEQUENCE | ASN1_CONSTRUCTED and the length field). Please note that the outer certificate's ASN.1 header is not taken into account.
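For illustration, a minimal OpenSSL sketch of the fingerprint calculation described above, assuming cert is an already-loaded X509 *; X509_digest() hashes the complete DER encoding of the certificate:
#include <stdio.h>
#include <openssl/x509.h>
#include <openssl/evp.h>
unsigned char md[EVP_MAX_MD_SIZE];
unsigned int md_len = 0;
if (X509_digest(cert, EVP_sha1(), md, &md_len)) {
    for (unsigned int i = 0; i < md_len; i++)
        printf("%02X%s", md[i], i + 1 < md_len ? ":" : "\n");
}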
The fingerprint is similar to the term "Thumbprint" in .NET. The code snippet below should help you compute the fingerprint:
public String generateFingerPrint(X509Certificate cert) throws CertificateEncodingException, NoSuchAlgorithmException {
    MessageDigest digest = MessageDigest.getInstance("SHA-1");
    byte[] hash = digest.digest(cert.getEncoded());
    final char delimiter = ':';
    // Calculate the number of characters in our fingerprint
    // ('# of bytes' * 2) chars + ('# of bytes' - 1) chars for delimiters
    final int len = hash.length * 2 + hash.length - 1;
    // Typically the SHA-1 algorithm produces 20 bytes, i.e. len should be 59
    StringBuilder fingerprint = new StringBuilder(len);
    for (int i = 0; i < hash.length; i++) {
        // Step 1: unsigned byte
        hash[i] &= 0xff;
        // Steps 2 & 3: byte to hex in two chars
        // Lower cased 'x' at '%02x' enforces lower cased char for hex value!
        fingerprint.append(String.format("%02x", hash[i]));
        // Step 4: put delimiter
        if (i < hash.length - 1) {
            fingerprint.append(delimiter);
        }
    }
    return fingerprint.toString();
}

Using an SHA1 with Microsoft CAPI

I have an SHA1 hash and I need to sign it. The CryptSignHash() function requires an HCRYPTHASH handle for signing. I create one and, since I already have the actual hash value, set it:
CryptCreateHash(cryptoProvider, CALG_SHA1, 0, 0, &hash);
CryptSetHashParam(hash, HP_HASHVAL, hashBytes, 0);
The hashBytes is an array of 20 bytes.
However the problem is that the signature produced from this HCRYPTHASH handle is incorrect. I traced the problem down to the fact that CAPI actually doesn't use all 20 bytes from my hashBytes array. For some reason it thinks that SHA1 is only 4 bytes.
To verify this I wrote this small program:
HCRYPTPROV cryptoProvider;
CryptAcquireContext(&cryptoProvider, NULL, NULL, PROV_RSA_FULL, 0);
HCRYPTHASH hash;
HCRYPTKEY keyForHash;
CryptCreateHash(cryptoProvider, CALG_SHA1, keyForHash, 0, &hash);
DWORD hashLength;
CryptGetHashParam(hash, HP_HASHSIZE, NULL, &hashLength, 0);
printf("hashLength: %d\n", hashLength);
And this prints out hashLength: 4 !
Can anyone explain what I am doing wrong or why Microsoft CAPI thinks that SHA1 is 4 bytes (32 bits) instead of 20 bytes (160 bits).
There is a small error in your code. Instead of
DWORD hashLength;
CryptGetHashParam(hash, HP_HASHSIZE, NULL, &hashLength, 0);
printf("hashLength: %d\n", hashLength);
you should use
DWORD hashLength, hashSize;
hashLength = sizeof(DWORD);
CryptGetHashParam(hash, HP_HASHSIZE, (PBYTE)&hashSize, &hashLength, 0);
printf("hashSize: %d\n", hashSize);
then you will receive 20 as expected.
Using CryptSignHash after CryptSetHashParam should also work. See the remark at the end of the description of the CryptSetHashParam function at http://msdn.microsoft.com/en-us/library/aa380270(VS.85).aspx. I suppose you just made the same error as with CryptGetHashParam(..., HP_HASHSIZE, ...) when retrieving the result of the signing. Compare your code with the code in the description of the CryptSignHash function at http://msdn.microsoft.com/en-us/library/aa380280(VS.85).aspx.
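For reference, a hedged sketch of that flow (assuming the container holds an AT_SIGNATURE key pair and with error handling omitted; hashBytes is the 20-byte digest from the question):
#include <windows.h>
#include <wincrypt.h>
#include <stdlib.h>
HCRYPTPROV prov = 0;
HCRYPTHASH hash = 0;
CryptAcquireContext(&prov, NULL, NULL, PROV_RSA_FULL, 0);
CryptCreateHash(prov, CALG_SHA1, 0, 0, &hash);
CryptSetHashParam(hash, HP_HASHVAL, hashBytes, 0);
DWORD sigLen = 0;
CryptSignHash(hash, AT_SIGNATURE, NULL, 0, NULL, &sigLen);  /* query the required signature length */
BYTE *sig = (BYTE *)malloc(sigLen);
CryptSignHash(hash, AT_SIGNATURE, NULL, 0, sig, &sigLen);   /* produce the signature */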
I don't think you can use CryptCreateHash in that way. From MSDN:
"The CryptCreateHash function
initiates the hashing of a stream of
data."
In other words, it looks like you can't instantiate a hash context in any way other than empty (and then by having it hash your input data).
How do you have the hash at present - a byte array? If so you probably just want to sign that array; I'd look into CryptSignMessage or CryptSignMessageWithKey as likely to do the job.
(I'm guessing, but what you're seeing may be explained by the output hash length not being set up until after the hash context has done some work.)

How do I convert a G.726 ADPCM signal into a PCM signal?

I usually look to SoX or Window's built in audio libraries for this stuff, but it appears that neither have G.726 codecs.
So I have a sequence of bytes that I know are encoded as G.726 although the bit-rate and whether it is mu-law or A-law is not known at this time (experimentation will determine those parameters), and I need to decode them into a normal PCM signal.
So I downloaded the reference implementation from the ITU-T (ITU-T Recommendation G.191), but I'm kind of confused about how to use the G726_decode function. According to the documentation, inp_buf and out_buf need to have the same length, smpno, and both are 16-bit buffers. This seems to me like a step is missing; otherwise no compression is accomplished by using G.726. According to the Wikipedia page on G.726, the sample size depends on the bit rate (from 2 to 5 bits). Am I supposed to do the unpacking into samples myself? So if I assume maximum compression (2-bit samples), then each byte will produce 4 samples.
Example:
char b = /* read the code from input */
short inp[4], output[4];
inp[0] = b & 0x0003;
inp[1] = (b & 0x000C) >> 2;
inp[2] = (b & 0x0030) >> 4;
inp[3] = (b & 0x00C0) >> 6;
G726_state state;
memset(&state, 0, sizeof(G726_state));
G726_decode(inp, output, 4, "u", 2, 1, &state);
/* output now contains 4 PCM samples */
Or am I missing something completely?
Looks like ffmpeg actually isn't able to do this, as I thought it surely would be able to... however, while I was googling I did find this post to the ffmpeg mailing list which offers a solution.
Basically, there is a separate program called g72x++ which seems to be able to decode the audio to raw PCM for you.
