How to get the size of the entire X509 certificate - c

I am reading a .crt certificate which I generated using OpenSSL. I have the certificate in my C program in an X509 structure. I would like to know the size of the whole certificate that I have just read. How can this be done? Is there a specific function that returns the size of the certificate?

For sending a certificate over the network, I recommend using the DER format, because PEM is just Base64-encoded DER plus some additional text (a prefix and a suffix).
To determine the size, you actually need to encode the certificate. For DER:
size_t get_length(X509 *cer)
{
    /* With a NULL output pointer, i2d_X509 only computes the DER length. */
    int len = i2d_X509(cer, NULL);
    return len > 0 ? (size_t)len : 0;
}
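If you also need the encoded bytes themselves (not just the length), i2d_X509 can allocate the buffer for you when the output pointer is initialized to NULL; a minimal sketch:

unsigned char *der = NULL;
int len = i2d_X509(cer, &der);   /* OpenSSL allocates the buffer */
if (len > 0) {
    /* ... send len bytes starting at der over the network ... */
    OPENSSL_free(der);
}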
For PEM it is trickier:
long len;
unsigned char *data;
BIO *bio = BIO_new(BIO_s_mem());
PEM_write_bio_X509(bio, cer);
len = BIO_get_mem_data(bio, &data);
/* here data points at the PEM-encoded bytes and len is their length */
BIO_free(bio); /* free only _after_ you no longer need data */
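If you want the PEM length as a single call, the snippet can be wrapped into a small helper; a minimal sketch with error checking added (the name get_pem_length is mine, not an OpenSSL API):

size_t get_pem_length(X509 *cer)
{
    BIO *bio = BIO_new(BIO_s_mem());
    unsigned char *data = NULL;
    long len = 0;

    if (bio == NULL)
        return 0;
    if (PEM_write_bio_X509(bio, cer) == 1)   /* returns 1 on success */
        len = BIO_get_mem_data(bio, &data);
    BIO_free(bio);                           /* also invalidates data */
    return len > 0 ? (size_t)len : 0;
}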

Related

OpenSSL 1.1.1d signature verification fails, RSASSA-PSS signature scheme

I am trying to generate an RSASSA-PSS signature with openssl 1.1.1d.
The code below does generate a signature, different each time I run it, as expected for the PSS scheme.
However, whenever I try to verify the signature online, it fails (I am using this site to do the verification: https://8gwifi.org/RSAFunctionality?rsasignverifyfunctions=rsasignverifyfunctions&keysize=2048).
Could any expert here check whether anything is missing in this source code?
int main()
{
    EVP_MD_CTX *mdctx = NULL;
    int ret = 0;
    unsigned char *sig;
    EVP_PKEY *key = NULL;
    EVP_PKEY_CTX *ctx = NULL;
    char *msg = "This is a nice message to be signed.";
    int len;
    size_t siglen;
    FILE *fp;
    char sig64[512];
    unsigned char md[SHA256_DIGEST_LENGTH];
    SHA256_CTX sha256;

    SHA256_Init(&sha256);
    SHA256_Update(&sha256, msg, strlen(msg));
    SHA256_Final(md, &sha256);

    sig = NULL;
    len = strlen(msg);
    fp = fopen("MYPRIVATEKEY.KEY", "r");
    key = EVP_PKEY_new();
    PEM_read_PrivateKey(fp, &key, NULL, NULL);
    ctx = EVP_PKEY_CTX_new(key, NULL);
    if (EVP_PKEY_sign_init(ctx) <= 0)
        goto err;
    if (EVP_PKEY_CTX_set_rsa_padding(ctx, RSA_PKCS1_PSS_PADDING) <= 0)
        goto err;
    if (EVP_PKEY_CTX_set_signature_md(ctx, EVP_sha256()) <= 0)
        goto err;
    /* Determine buffer length */
    if (EVP_PKEY_sign(ctx, NULL, &siglen, md, SHA256_DIGEST_LENGTH) <= 0)
        goto err;
    sig = OPENSSL_malloc(siglen);
    if (!sig)
        goto err;
    if (EVP_PKEY_sign(ctx, sig, &siglen, md, SHA256_DIGEST_LENGTH) <= 0)
        goto err;
    /* Signature is siglen bytes written to buffer sig */
    /* Success */
    ret = 1;
    Encode_B64(sig, siglen, sig64);
    printf("Signature: %s\n", sig64);
err:
    if (ret != 1) {
        /* Do some error handling */
    }
    /* Clean up */
    if (mdctx) EVP_MD_CTX_free(mdctx);
    EVP_PKEY_free(key);
    EVP_PKEY_CTX_free(ctx);
    if (sig && !ret) OPENSSL_free(sig);
    return 0;
}
The PSS parameters must match on both sides for successful verification:

- The PSS digest: this is set to SHA256 in the C code with EVP_PKEY_CTX_set_signature_md(ctx, EVP_sha256()).
- The MGF1 digest: this defaults to the PSS digest, so it is also SHA256 here. A different MGF1 digest can be set with EVP_PKEY_CTX_set_rsa_mgf1_md().
- The salt length: by default OpenSSL uses the maximum salt length (modulus length - digest output length - 2, in bytes), which therefore applies here as well. The salt length can be set explicitly with EVP_PKEY_CTX_set_rsa_pss_saltlen(), e.g. with the values RSA_PSS_SALTLEN_MAX (maximum salt length, the default) or RSA_PSS_SALTLEN_DIGEST (output length of the PSS digest).

For completeness: more generally, the mask generation function itself must be specified, but by default MGF1 is applied, so only the digest used by that function has to be set (see the second point). In addition there is the trailer field number, which is not critical here, since both sides use the usual value of 1 by default. A sketch of setting all of these parameters explicitly follows below.
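A minimal sketch of pinning the PSS parameters explicitly on the signing context (the setter functions are real OpenSSL APIs; choosing RSA_PSS_SALTLEN_DIGEST here is an example, not what the question's code does):

/* Pin down all PSS parameters instead of relying on defaults. */
if (EVP_PKEY_CTX_set_rsa_padding(ctx, RSA_PKCS1_PSS_PADDING) <= 0)
    goto err;
if (EVP_PKEY_CTX_set_signature_md(ctx, EVP_sha256()) <= 0)    /* PSS digest */
    goto err;
if (EVP_PKEY_CTX_set_rsa_mgf1_md(ctx, EVP_sha256()) <= 0)     /* MGF1 digest */
    goto err;
if (EVP_PKEY_CTX_set_rsa_pss_saltlen(ctx, RSA_PSS_SALTLEN_DIGEST) <= 0)
    goto err;                             /* salt length = digest length */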
The website, in contrast, uses different values for the salt length and the two digests:

- RSASSA-PSS and SHA1WithRSA/PSS: SHA1 for the PSS and the MGF1 digest, salt length = output length of the digest (20 bytes)
- SHA224WithRSA/PSS: SHA224 for the PSS and the MGF1 digest, salt length = output length of the digest (28 bytes)
- SHA384WithRSA/PSS: SHA384 for the PSS and the MGF1 digest, salt length = output length of the digest (48 bytes)

None of these settings uses SHA256 or the maximum salt length. Because of the mismatched PSS parameters, the verification fails.
However, verification is possible with the OpenSSL command line tool, e.g. with openssl pkeyutl:
openssl pkeyutl -verify -in <message hash file> -inkey <public key file> -sigfile <signature file> -pubin -pkeyopt rsa_padding_mode:pss -pkeyopt digest:sha256
The options -pkeyopt rsa_mgf1_md:sha256 and -pkeyopt rsa_pss_saltlen:max correspond to the defaults and are therefore implicit in the command above.
The key file contains the PEM-encoded X.509/SPKI key; the other two files contain the raw (i.e. not Base64-encoded) bytes of the message hash and the signature.
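The same check can be done in C; a hedged sketch of the verify side (assumes pubkey is an EVP_PKEY * holding the public key, md the 32-byte SHA-256 hash, and sig/siglen the raw signature; these variable names are mine):

EVP_PKEY_CTX *vctx = EVP_PKEY_CTX_new(pubkey, NULL);
int ok = 0;
if (vctx
        && EVP_PKEY_verify_init(vctx) > 0
        && EVP_PKEY_CTX_set_rsa_padding(vctx, RSA_PKCS1_PSS_PADDING) > 0
        && EVP_PKEY_CTX_set_signature_md(vctx, EVP_sha256()) > 0)
    ok = EVP_PKEY_verify(vctx, sig, siglen, md, SHA256_DIGEST_LENGTH) == 1;
EVP_PKEY_CTX_free(vctx);
/* ok == 1 means the signature verified under the default PSS parameters */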

OpenSSL in Windows Kernel Mode

Is there a way I can use the OpenSSL library in Windows kernel mode? I want to make a Windows filter driver that intercepts read and write operations and encrypts and decrypts the buffers. I already made a driver that can replace the buffers with some arbitrary content, but now I need to encrypt the original content. I tried to include the OpenSSL DLLs in the project, and I can compile and install the driver, but when I try to start it I get this error:
System error 2 has occurred.
The system cannot find the file specified.
This is the code that does the encryption. I know it's not safe to use a static key, but it's just for testing.
void encrypt_aes_ctr(unsigned char *key, unsigned char *iv,
                     unsigned char *data, unsigned char *out_data,
                     int in_len, int *out_len)
{
    int len;
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();

    EVP_EncryptInit_ex(ctx, EVP_aes_256_ctr(), NULL, key, iv);
    EVP_EncryptUpdate(ctx, out_data, out_len, data, in_len);
    EVP_EncryptFinal_ex(ctx, out_data + *out_len, &len);
    *out_len += len;
    EVP_CIPHER_CTX_free(ctx); /* was missing: the context leaked on every call */
}
And this is the call I make in the SwapPreWriteBuffers function
unsigned char key[32] = { 1,2,3,4,5,6,7,8,9,10,11,12,1,2,3,4,5,6,7,8,9,10,11,12,1,2,3,4,5,6,7,8 };
unsigned char iv[16] = { 1,2,3,4,5,6,7,8,9,10,11,12,1,2,3,4 };
int len;
encrypt_aes_ctr(key, iv, origBuf, newBuf, writeLen, &len);
You should use the CNG API, which is Microsoft's standard crypto API and is available in kernel mode; see for example BCryptEncrypt. There are also code examples that use BCrypt in the kernel (for example, for random number generation).
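As a rough, hedged illustration of what kernel-mode AES with BCrypt can look like (CBC mode here, since CNG has no dedicated CTR chaining mode; BCRYPT_PROV_DISPATCH is needed if the code may run at DISPATCH_LEVEL; the function name is mine and error handling is abbreviated):

#include <ntddk.h>
#include <bcrypt.h>
/* Kernel-mode CNG: link against cng.lib (BCrypt* is exported by ksecdd.sys). */

NTSTATUS encrypt_aes_cbc_kernel(UCHAR *key, ULONG keyLen,
                                UCHAR *iv,  ULONG ivLen,
                                UCHAR *in,  ULONG inLen,
                                UCHAR *out, ULONG outCap, ULONG *outLen)
{
    BCRYPT_ALG_HANDLE hAlg = NULL;
    BCRYPT_KEY_HANDLE hKey = NULL;
    NTSTATUS status;

    /* BCRYPT_PROV_DISPATCH allows use at IRQL == DISPATCH_LEVEL. */
    status = BCryptOpenAlgorithmProvider(&hAlg, BCRYPT_AES_ALGORITHM,
                                         NULL, BCRYPT_PROV_DISPATCH);
    if (!NT_SUCCESS(status))
        return status;

    status = BCryptSetProperty(hAlg, BCRYPT_CHAINING_MODE,
                               (PUCHAR)BCRYPT_CHAIN_MODE_CBC,
                               sizeof(BCRYPT_CHAIN_MODE_CBC), 0);
    if (NT_SUCCESS(status))
        /* NULL/0 key object buffer lets CNG manage it (Windows 7+). */
        status = BCryptGenerateSymmetricKey(hAlg, &hKey, NULL, 0,
                                            key, keyLen, 0);
    if (NT_SUCCESS(status))
        /* Note: BCryptEncrypt updates the IV buffer in place. */
        status = BCryptEncrypt(hKey, in, inLen, NULL, iv, ivLen,
                               out, outCap, outLen, BCRYPT_BLOCK_PADDING);

    if (hKey) BCryptDestroyKey(hKey);
    BCryptCloseAlgorithmProvider(hAlg, 0);
    return status;
}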

Different encryption results between platforms, using OpenSSL

I'm working on a piece of cross-platform (Windows and Mac OS X) code in C that needs to encrypt / decrypt blobs using AES-256 with CBC and a blocksize of 128 bits. Among various libraries and APIs I've chosen OpenSSL.
This piece of code will then upload the blob using a multipart-form PUT to a server which then decrypts it using the same settings in .NET's crypto framework (Aes, CryptoStream, etc...).
The problem I'm facing is that the server decryption works fine when the local encryption is done on Windows but it fails when the encryption is done on Mac OS X - the server throws a "Padding is invalid and cannot be removed exception".
I've looked at this from many perspectives:

- I verified that the transport is correct: the byte array received by the server's decrypt method is exactly the same one sent from Mac OS X and Windows.
- The actual content of the encrypted blob, for the same key, differs between Windows and Mac OS X. I tested this using a hardcoded key and ran this code on Windows and Mac OS X with the same blob.
- I'm sure the padding is correct, since it is taken care of by OpenSSL and the same code works on Windows. Even so, I tried implementing the padding scheme as it is in Microsoft's reference source for .NET, but still no go.
- I verified that the IV is the same on Windows and Mac OS X (I thought maybe there was a problem with some of the special characters, such as ETB, that appear in the IV, but there wasn't).
- I've tried LibreSSL and mbedtls, with no positive results. In mbedtls I also had to implement padding because, as far as I know, padding is the responsibility of the API's user.

I've been at this problem for almost two weeks now and I'm starting to pull my (ever scarce) hair out.
As a frame of reference, I'll post the C client's code for encrypting and the server's C# code for decrypting. Some minor details on the server side will be omitted (they don't interfere with the crypto code).
Client:
/*++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++*/
void
__setup_aes(EVP_CIPHER_CTX *ctx, const char *key, qvr_bool encrypt)
{
    static const char *iv = ""; /* for security reasons, the actual IV is omitted... */

    if (encrypt)
        EVP_EncryptInit(ctx, EVP_aes_256_cbc(), key, iv);
    else
        EVP_DecryptInit(ctx, EVP_aes_256_cbc(), key, iv);
}

/*++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++*/
void
__encrypt(void *buf,
          size_t buflen,
          const char *key,
          unsigned char **outbuf,
          size_t *outlen)
{
    EVP_CIPHER_CTX ctx;
    int blocklen = 0;
    int finallen = 0;
    int remainder = 0;

    __setup_aes(&ctx, key, QVR_TRUE);

    EVP_CIPHER *c = ctx.cipher;
    blocklen = EVP_CIPHER_CTX_block_size(&ctx);
    //*outbuf = (unsigned char *) malloc((buflen + blocklen - 1) / blocklen * blocklen);
    remainder = buflen % blocklen;
    *outlen = remainder == 0 ? buflen : buflen + blocklen - remainder;
    *outbuf = (unsigned char *) calloc(*outlen, sizeof(unsigned char));

    EVP_EncryptUpdate(&ctx, *outbuf, outlen, buf, buflen);
    EVP_EncryptFinal_ex(&ctx, *outbuf + *outlen, &finallen);
    EVP_CIPHER_CTX_cleanup(&ctx);
    //*outlen += finallen;
}
Server:
static Byte[] Decrypt(byte[] input, byte[] key, byte[] iv)
{
    try
    {
        // Check arguments.
        if (input == null || input.Length <= 0)
            throw new ArgumentNullException("input");
        if (key == null || key.Length <= 0)
            throw new ArgumentNullException("key");
        if (iv == null || iv.Length <= 0)
            throw new ArgumentNullException("iv");

        byte[] unprotected;
        using (var encryptor = Aes.Create())
        {
            encryptor.Key = key;
            encryptor.IV = iv;
            using (var msInput = new MemoryStream(input))
            {
                msInput.Position = 0;
                using (
                    var cs = new CryptoStream(msInput, encryptor.CreateDecryptor(),
                                              CryptoStreamMode.Read))
                using (var data = new BinaryReader(cs))
                using (var outStream = new MemoryStream())
                {
                    byte[] buf = new byte[2048];
                    int bytes = 0;
                    while ((bytes = data.Read(buf, 0, buf.Length)) != 0)
                        outStream.Write(buf, 0, bytes);
                    return outStream.ToArray();
                }
            }
        }
    }
    catch (Exception ex)
    {
        throw ex;
    }
}
Does anyone have any clue as to why this could possibly be happening? For reference, this is the .NET method from Microsoft's reference source .sln that (I think) does the decryption: https://gist.github.com/Metaluim/fcf9a4f1012fdeb2a44f#file-rijndaelmanagedtransform-cs
OpenSSL version differences are messy. First, I suggest you explicitly force and verify the key lengths, keys, IVs and encryption modes on both sides; I don't see that in the code. Then I would suggest you decrypt on the server side without padding. That will always succeed, and you can then inspect the last block and check whether it is what you expect.
Do this with the Windows-encrypted and the Mac OS X-encrypted variant and you will see a difference, most likely in the padding.
The outlen handling in the C code looks odd. Encrypting a 16-byte plaintext with PKCS#7 padding results in 32 bytes of ciphertext, but you only provide a 16-byte buffer. This will not work; you will write out of bounds. Maybe it works just by chance on Windows because of a more generous memory layout, and fails on Mac OS X.
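To illustrate, a hedged sketch of how the buffer and length handling could look instead (using the heap-allocated context API of OpenSSL 1.1+; PKCS#7 padding always adds between 1 and blocklen bytes, so the output needs room for one extra block; __encrypt_fixed is a hypothetical name):

#include <stdlib.h>
#include <openssl/evp.h>

void
__encrypt_fixed(const void *buf, size_t buflen, const unsigned char *key,
                const unsigned char *iv, unsigned char **outbuf, size_t *outlen)
{
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    int updatelen = 0, finallen = 0;
    int blocklen;

    EVP_EncryptInit_ex(ctx, EVP_aes_256_cbc(), NULL, key, iv);
    blocklen = EVP_CIPHER_CTX_block_size(ctx);

    /* Padding adds 1..blocklen bytes, so reserve a full extra block.
     * Also note: the length outputs of EVP_EncryptUpdate/Final are int,
     * not size_t, so separate int variables are used here. */
    *outbuf = malloc(buflen + blocklen);

    EVP_EncryptUpdate(ctx, *outbuf, &updatelen, buf, buflen);
    EVP_EncryptFinal_ex(ctx, *outbuf + updatelen, &finallen);
    *outlen = (size_t)updatelen + (size_t)finallen;

    EVP_CIPHER_CTX_free(ctx);
}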
The AES padding scheme changed between OpenSSL versions 0.9.8* and 1.0.1* (at least between 0.9.8r and 1.0.1j). If two of your modules use these different versions of OpenSSL, this could be the reason for your problem. To verify this, first check the OpenSSL versions. If you hit the described case, consider aligning the padding scheme so it is the same on both sides.

AES-GCM using the libgcrypt API in C

I'm playing with libgcrypt (v1.6.1 on Gentoo x64) and I've already implemented (and tested against the AES test vectors) aes256-cbc and aes256-ctr. Now I am looking at aes256-gcm, but I have some doubts about the workflow. Below is a skeleton of a simple encryption program:
int main(void)
{
    unsigned char TEST_KEY[] = {0x60,0x3d,0xeb,0x10,0x15,0xca,0x71,0xbe,0x2b,0x73,0xae,0xf0,0x85,0x7d,0x77,0x81,0x1f,0x35,0x2c,0x07,0x3b,0x61,0x08,0xd7,0x2d,0x98,0x10,0xa3,0x09,0x14,0xdf,0xf4};
    unsigned char TEST_IV[] = {0x00,0x01,0x02,0x03,0x04,0x05,0x06,0x07,0x08,0x09,0x0a,0x0b,0x0c,0x0d,0x0e,0x0f};
    unsigned char TEST_PLAINTEXT_1[] = {0x6b,0xc1,0xbe,0xe2,0x2e,0x40,0x9f,0x96,0xe9,0x3d,0x7e,0x11,0x73,0x93,0x17,0x2a};
    unsigned char cipher[16] = {0};
    int algo = -1, i;
    const char *name = "aes256";

    algo = gcry_cipher_map_name(name);

    gcry_cipher_hd_t hd;
    gcry_cipher_open(&hd, algo, GCRY_CIPHER_MODE_GCM, 0);
    gcry_cipher_setkey(hd, TEST_KEY, 32);
    gcry_cipher_setiv(hd, TEST_IV, 16);
    gcry_cipher_encrypt(hd, cipher, 16, TEST_PLAINTEXT_1, 16);

    char out[33];
    for (i = 0; i < 16; i++) {
        sprintf(out + (i * 2), "%02x", cipher[i]);
    }
    out[32] = '\0';
    printf("%s\n", out);

    gcry_cipher_close(hd);
    return 0;
}
In GCM mode these functions are also needed:
gcry_error_t gcry_cipher_authenticate (gcry_cipher_hd_t h, const void *abuf, size_t abuflen)
gcry_error_t gcry_cipher_gettag (gcry_cipher_hd_t h, void *tag, size_t taglen)
So the correct workflow of the encryption program is:
gcry_cipher_authenticate
gcry_cipher_encrypt
gcry_cipher_gettag
But what I haven't understood is:
Is abuf like a salt? (So do I have to generate it using gcry_create_nonce or similar?)
If I want to encrypt a file, is void *tag what I have to write to the outfile?
1) gcry_cipher_authenticate is for supporting authenticated encryption with associated data. abuf is data that you need to authenticate but do not need to encrypt. For example, if you are sending a packet, you might want to encrypt the body, but you must send the header unencrypted for the packet to be delivered. The tag generated by the cipher will provide integrity for both the encrypted data and the data sent in plain.
2) The tag is used after decryption to make sure that the data has not been tampered with. You append the tag to the encrypted text. Note, that it is computed on encrypted data and associated (unencrypted) data, so you will need both when decrypting.
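Putting both points together, a minimal hedged sketch of the encrypt side (assuming libgcrypt >= 1.6; the function name gcm_encrypt, the 16-byte tag and the parameter layout are my choices, not a fixed API):

#include <gcrypt.h>

/* AES-256-GCM: encrypt buf in place, authenticate aad, output a 16-byte tag.
 * A real program should also call gcry_check_version() once at startup. */
int gcm_encrypt(const unsigned char key[32],
                const unsigned char *iv, size_t ivlen,
                const unsigned char *aad, size_t aadlen,
                unsigned char *buf, size_t buflen,
                unsigned char tag[16])
{
    gcry_cipher_hd_t hd;
    gcry_error_t err;

    err = gcry_cipher_open(&hd, GCRY_CIPHER_AES256, GCRY_CIPHER_MODE_GCM, 0);
    if (err)
        return -1;
    if (!err) err = gcry_cipher_setkey(hd, key, 32);
    if (!err) err = gcry_cipher_setiv(hd, iv, ivlen);
    /* Associated data must be fed in before encrypting. */
    if (!err && aadlen) err = gcry_cipher_authenticate(hd, aad, aadlen);
    /* Passing NULL/0 as input encrypts buf in place. */
    if (!err) err = gcry_cipher_encrypt(hd, buf, buflen, NULL, 0);
    /* The tag covers ciphertext and associated data; append it to the output. */
    if (!err) err = gcry_cipher_gettag(hd, tag, 16);

    gcry_cipher_close(hd);
    return err ? -1 : 0;
}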
You can check these documents for more information on GCM:
http://csrc.nist.gov/groups/ST/toolkit/BCM/documents/proposedmodes/gcm/gcm-spec.pdf
http://csrc.nist.gov/publications/nistpubs/800-38D/SP-800-38D.pdf
Also, you can probably get faster answers to cryptography questions like this on http://crypto.stackexchange.com.

OpenSSL RSA encryption with no padding fails to encrypt correctly

We're trying to perform RSA encryption using the RSA_public_encrypt() method (OpenSSL on Symbian), but we're not quite succeeding.
The encryption itself succeeds, but the encrypted text (which we try to match to a hash) isn't what it should be
(we checked this on other platforms where we know it works correctly, and we get different values here).
We think this is probably due to the input not being provided in the correct format to the RSA_public_encrypt() method.
The code:
#define RSA_E 0x10001L
void Authenticator::Test(TDesC8& aSignature, TDesC8& aMessage) {
RSA *rsa;
char *plainkey;
char *cipherkey;
int buffSize;
// create the public key
BIGNUM * bn_mod = BN_new();
BIGNUM * bn_exp = BN_new();
const char* modulus2 = "137...859"; // 309 digits
// Convert to BIGNUM
int len = BN_dec2bn(&bn_mod, modulus2); // with 309 digits -> gives maxSize 128 (OK!)
BN_set_word(bn_exp, RSA_E);
rsa = RSA_new(); // Create a new RSA key
rsa->n = bn_mod; // Assign in the values
rsa->e = bn_exp;
rsa->d = NULL;
rsa->p = NULL;
rsa->q = NULL;
int maxSize = RSA_size(rsa); // Find the length of the cipher text
// maxSize is 128 bytes (1024 bits)
// session key received from server side, what format should this be in ???
plainkey = "105...205"; // 309 digits
cipherkey = (char *)malloc(maxSize);
memset(cipherkey, 0, maxSize);
if (rsa) {
buffSize = RSA_public_encrypt(maxSize, (unsigned char *) plainkey, (unsigned char *) cipherkey, rsa, RSA_NO_PADDING);
unsigned long err = ERR_get_error();
free(cipherkey);
free(plainkey);
}
/* Free */
if (rsa)
RSA_free(rsa);
}
We have a plain key which is 309 characters, but maxSize is 128 bytes (with RSA_NO_PADDING the input has to equal the modulus size), so only the first 128 characters are encrypted (we are not really sure this is even correct?), which is probably why our encrypted text isn't what it should be. Or is there something else we aren't doing correctly?
What is then the correct way of transforming our plain text so that all data of 'plainkey' is encrypted?
We also tried this with a hexadecimal string '9603cab...' which is 256 characters long, but didn't succeed. If correctly converted into bytes (128), could this be used, and if yes, how would that conversion work?
Any help is greatly appreciated, thanks!
Take a look at this file, maybe it helps: https://gist.github.com/850935
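Regarding the conversion question: a hedged sketch of turning the 256-character hexadecimal string into the 128 raw bytes that RSA_NO_PADDING expects, using OpenSSL's BIGNUM helpers (BN_hex2bn, BN_num_bytes and BN_bn2bin are real OpenSSL functions; the helper name and the left-zero-padding policy are my own choices):

#include <openssl/bn.h>
#include <string.h>

/* Convert a hex string into a left-zero-padded block of exactly blocklen
 * bytes, suitable as input for RSA_public_encrypt with RSA_NO_PADDING. */
int hex_to_block(const char *hex, unsigned char *out, int blocklen)
{
    BIGNUM *bn = NULL;
    int nbytes;

    if (BN_hex2bn(&bn, hex) == 0)       /* returns 0 on parse error */
        return -1;
    nbytes = BN_num_bytes(bn);
    if (nbytes > blocklen) {            /* value too large for the modulus */
        BN_free(bn);
        return -1;
    }
    memset(out, 0, blocklen - nbytes);  /* left-pad with zero bytes */
    BN_bn2bin(bn, out + (blocklen - nbytes));
    BN_free(bn);
    return 0;
}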
