OpenSSL & C - Hash Passwords w/ SHA256 or SHA512

I've tried my best reading over the docs, but they seem very sparse on information (maybe I'm looking in the wrong place?).
I'm trying to create a password hasher in C using the OpenSSL library, where the program can be called with arguments such as the final length of the hashed password, the salt length, and the HMAC digest to use (SHA256 or SHA512). There just isn't a lot of info on how to use the API to do this.
The biggest problem I see is that there is a function called PKCS5_PBKDF2_HMAC_SHA1, but I can't find an equivalent for SHA256 or SHA512. Is only SHA1 available via the OpenSSL API?
Any guidance is much appreciated.

You can use PKCS5_PBKDF2_HMAC, which allows you to target a specific digest algorithm.
int PKCS5_PBKDF2_HMAC(const char *pass, int passlen,
                      const unsigned char *salt, int saltlen,
                      int iter, const EVP_MD *digest, // <<==== HERE
                      int keylen, unsigned char *out);
A simple example appears below; it generates a random salt, then derives a key from "password", the generated salt, and EVP_sha256():
#include <openssl/evp.h>
#include <openssl/rand.h>
#include <openssl/bio.h>

int main(int argc, char *argv[])
{
    int iter = 1007;

    unsigned char salt[32] = {0};
    RAND_bytes(salt, sizeof(salt));

    unsigned char key[32] = {0};
    PKCS5_PBKDF2_HMAC("password", 8,
                      salt, sizeof(salt),
                      iter, EVP_sha256(),
                      sizeof(key), key);

    BIO *bio = BIO_new_fp(stdout, BIO_NOCLOSE);
    BIO_dump(bio, (const char*)salt, sizeof(salt));
    BIO_dump(bio, (const char*)key, sizeof(key));
    BIO_free(bio);
}
Output (varies)
0000 - a7 ca ac f4 43 b0 2d 48-2b f6 d5 67 7e d2 5c b4 ....C.-H+..g~.\.
0010 - c5 82 1d 4d b1 00 cd 1e-85 91 77 4c 32 3e f3 c8 ...M......wL2>..
0000 - 48 8f be 5a e9 1c 9e 11-d8 95 cb ed 6d 6f 36 a2 H..Z........mo6.
0010 - 38 e6 db 95 e1 d7 a6 c0-8a 2f 3a f6 e1 74 e9 b9 8......../:..t..

Related

CTR-AES256 Encrypt does not match OpenSSL -aes-256-ctr

My problem is that I cannot get the AES 256 CTR output from the C code below to match the output from the OpenSSL command below.
The C code produces this:
5f b7 18 d1 28 62 7f 50 35 ba e9 67 a7 17 ab 22
f9 e4 09 ce 23 26 7b 93 82 02 d3 87 eb 01 26 ac
96 2c 01 8c c8 af f3 de a4 18 7f 29 46 00 2e 00
The OpenSSL command line produces this:
5f b7 18 d1 28 62 7f 50 35 ba e9 67 a7 17 ab 22
3c 01 11 bd 39 14 74 76 31 57 a6 53 f9 00 09 b4
6f a9 49 bc 6d 00 77 24 2d ef b9 c4
Notice the first 16 bytes are the same because the nonceIV was the same; however, when the nonceIV is updated on the next iteration and XOR'd with the plaintext, the next 16 bytes differ, and so on.
I cannot understand why that happens. Does anyone know why the hex codes are different after the first 16-byte chunk?
Disclaimer: I'm no C expert.
Thanks!!
fox.txt
The quick brown fox jumped over the lazy dog
Then run the following OpenSSL command to create foxy.exe
openssl enc -aes-256-ctr -in fox.txt -out foxy.exe -K 603deb1015ca71be2b73aef0857d77811f352c073b6108d72d9810a30914dff4 -iv f0f1f2f3f4f5f6f7f8f9fafbfcfdfeff -nosalt -nopad -p
Here's what foxy.exe contains:
5f b7 18 d1 28 62 7f 50 35 ba e9 67 a7 17 ab 22
3c 01 11 bd 39 14 74 76 31 57 a6 53 f9 00 09 b4
6f a9 49 bc 6d 00 77 24 2d ef b9 c4
Here's the code.
#include <Windows.h>
// What is AES CTR
//
// AES - CTR (counter) mode is another popular symmetric encryption algorithm.
//
// It is advantageous because of a few features :
// 1. The data size does not have to be multiple of 16 bytes.
// 2. The encryption or decryption for all blocks of the data can happen in parallel, allowing faster implementation.
// 3. Encryption and decryption use identical implementation.
//
// Very important note : the choice of initial counter is critical to the security of CTR mode.
// The requirement is that the same counter and AES key combination can never be used to encrypt more than one 16-byte block.
// Notes
// -----
// * CTR mode does not require padding to block boundaries.
//
// * The IV size of AES is 16 bytes.
//
// * CTR mode doesn't need separate encrypt and decrypt method. Encryption key can be set once.
//
// * AES is a block cipher : it takes as input a 16 byte plaintext block,
// a secret key (16, 24 or 32 bytes) and outputs another 16 byte ciphertext block.
//
// References
// ----------
// https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#Counter_.28CTR.29
// https://www.cryptopp.com/wiki/CTR_Mode#Counter_Increment
// https://modexp.wordpress.com/2016/03/10/windows-ctr-mode-with-crypto-api/
// https://msdn.microsoft.com/en-us/library/windows/desktop/jj650836(v=vs.85).aspx
// http://www.cryptogrium.com/aes-ctr.html
// http://www.bierkandt.org/encryption/symmetric_encryption.php
#define IV_SIZE 16
#define AES_BLOCK_SIZE 16
typedef struct _key_hdr_t {
    PUBLICKEYSTRUC hdr; // Indicates the type of BLOB and the algorithm that the key uses.
    DWORD len;          // The size, in bytes, of the key material.
    char key[32];       // The key material.
} key_hdr;
// NIST specifies two types of counters.
//
// First is a counter which is made up of a nonce and counter.
// The nonce is random, and the remaining bytes are counter bytes (which are incremented).
// For example, a 16 byte block cipher might use the high 8 bytes as a nonce, and the low 8 bytes as a counter.
//
// Second is a counter block, where all bytes are counter bytes and can be incremented as carries are generated.
// For example, in a 16 byte block cipher, all 16 bytes are counter bytes.
//
// This uses the second method, which means the entire byte block is treated as counter bytes.
void IncrementCounterByOne(char *inout)
{
    int i;
    for (i = 16 - 1; i >= 0; i--) {
        inout[i]++;
        if (inout[i]) {
            break;
        }
    }
}

void XOR(char *plaintext, char *ciphertext, int plaintext_len)
{
    int i;
    for (i = 0; i < plaintext_len; i++)
    {
        plaintext[i] ^= ciphertext[i];
    }
}
unsigned int GetAlgorithmIdentifier(unsigned int aeskeylenbits)
{
    switch (aeskeylenbits)
    {
    case 128:
        return CALG_AES_128;
    case 192:
        return CALG_AES_192;
    case 256:
        return CALG_AES_256;
    default:
        return 0;
    }
}

unsigned int GetKeyLengthBytes(unsigned int aeskeylenbits)
{
    return aeskeylenbits / 8;
}

void SetKeyData(key_hdr *key, unsigned int aeskeylenbits, char *pKey)
{
    key->hdr.bType = PLAINTEXTKEYBLOB;
    key->hdr.bVersion = CUR_BLOB_VERSION;
    key->hdr.reserved = 0;
    key->hdr.aiKeyAlg = GetAlgorithmIdentifier(aeskeylenbits);
    key->len = GetKeyLengthBytes(aeskeylenbits);
    memmove(key->key, pKey, key->len);
}
// point = pointer to the start of the plaintext, extent is the size (44 bytes)
void __stdcall AESCTR(char *point, int extent, char *pKey, char *pIV, unsigned int aeskeylenbits, char *bufOut)
{
    HCRYPTPROV hProv;
    HCRYPTKEY hSession;
    key_hdr key;
    DWORD IV_len;
    div_t aesblocks;
    char nonceIV[64];
    char tIV[64];
    char *bufIn;

    bufIn = point;
    memmove(nonceIV, pIV, IV_SIZE);
    SetKeyData(&key, aeskeylenbits, pKey);

    CryptAcquireContext(&hProv, NULL, NULL, PROV_RSA_AES, CRYPT_VERIFYCONTEXT | CRYPT_SILENT);
    CryptImportKey(hProv, (PBYTE)&key, sizeof(key), 0, CRYPT_NO_SALT, &hSession);

    aesblocks = div(extent, AES_BLOCK_SIZE);
    while (aesblocks.quot != 0)
    {
        IV_len = IV_SIZE;
        memmove(tIV, nonceIV, IV_SIZE);
        CryptEncrypt(hSession, 0, FALSE, 0, (BYTE *)tIV, &IV_len, sizeof(tIV));
        XOR(bufIn, tIV, AES_BLOCK_SIZE);
        IncrementCounterByOne(nonceIV);
        bufIn += AES_BLOCK_SIZE;
        aesblocks.quot--;
    }
    if (aesblocks.rem != 0)
    {
        memmove(tIV, nonceIV, IV_SIZE);
        CryptEncrypt(hSession, 0, TRUE, 0, (BYTE *)tIV, &IV_len, sizeof(tIV));
        XOR(bufIn, tIV, aesblocks.rem);
    }
    memmove(bufOut, point, extent);

    CryptDestroyKey(hSession);
    CryptReleaseContext(hProv, 0);
}
I was able to get this working by following the suggested pseudocode in the remarks section of the Microsoft CryptEncrypt() documentation, https://msdn.microsoft.com/en-us/library/windows/desktop/aa379924(v=vs.85).aspx:
// Set the IV for the original key. Do not use the original key for
// encryption or decryption after doing this because the key's
// feedback register will get modified and you cannot change it.
CryptSetKeyParam(hOriginalKey, KP_IV, newIV)
while(block = NextBlock())
{
    // Create a duplicate of the original key. This causes the
    // original key's IV to be copied into the duplicate key's
    // feedback register.
    hDuplicateKey = CryptDuplicateKey(hOriginalKey)

    // Encrypt the block with the duplicate key.
    CryptEncrypt(hDuplicateKey, block)

    // Destroy the duplicate key. Its feedback register has been
    // modified by the CryptEncrypt function, so it cannot be used
    // again. It will be re-duplicated in the next iteration of the
    // loop.
    CryptDestroyKey(hDuplicateKey)
}
Here's the updated code with the two new lines added:
HCRYPTKEY hDuplicateKey;
boolean final;

while (aesblocks.quot != 0)
{
    CryptDuplicateKey(hOriginalKey, NULL, 0, &hDuplicateKey);
    IV_len = IV_SIZE;
    memmove(tIV, nonceIV, IV_len);
    final = (aesblocks.quot == 1 && aesblocks.rem == 0) ? TRUE : FALSE;
    CryptEncrypt(hDuplicateKey, 0, final, 0, (BYTE *)tIV, &IV_len, sizeof(tIV));
    XOR(bufIn, tIV, AES_BLOCK_SIZE);
    IncrementCounterByOne(nonceIV);
    bufIn += AES_BLOCK_SIZE;
    aesblocks.quot--;
    CryptDestroyKey(hDuplicateKey);
}
if (aesblocks.rem != 0)
{
    CryptDuplicateKey(hOriginalKey, NULL, 0, &hDuplicateKey);
    final = TRUE;
    memmove(tIV, nonceIV, IV_SIZE);
    CryptEncrypt(hDuplicateKey, 0, final, 0, (BYTE *)tIV, &IV_len, sizeof(tIV));
    XOR(bufIn, tIV, aesblocks.rem);
    CryptDestroyKey(hDuplicateKey);
}
I'm not familiar with the Microsoft APIs, but I believe that CryptEncrypt() uses CBC mode by default, so the output from the first block of encryption is automatically being fed into the input for the second block. You are building CTR mode yourself from scratch (which incidentally is generally not an advisable thing to do; you should use the capabilities of crypto libraries rather than "roll your own" crypto). To get the expected output you probably need to get CryptEncrypt to use AES in ECB mode, which I believe can be done using CryptSetKeyParam (https://msdn.microsoft.com/en-us/library/aa380272.aspx) and setting KP_MODE to CRYPT_MODE_ECB.
Make sure your input file doesn't contain any extra characters, like a trailing newline. OpenSSL will include those extra characters when encrypting.

Breaking down raw byte string with particular structure gives wrong data

I am working with a ZigBee module based on a 32-bit ARM Cortex-M3. My question is not related to the ZigBee protocol itself, though. I only have access to the source code of the application layer, which should be enough for my purposes. The lower layer (APS) passes data to the application layer within an APSDE-DATA.indication primitive, via the following application function:
void zbpro_dataRcvdHandler(zbpro_dataInd_t *data)
{
    DEBUG_PRINT(DBG_APP,"\n[APSDE-DATA.indication]\r\n");

    /* Output of raw byte string for further investigation.
     * Real length is unknown, 50 is an approximation.
     */
    DEBUG_PRINT(DBG_APP,"Raw data: \n");
    DEBUG_PRINT(DBG_APP,"----------\n");
    for (int i = 0; i < 50; i++){
        DEBUG_PRINT(DBG_APP,"%02x ",*((uint8_t*)data+i));
    }
    DEBUG_PRINT(DBG_APP,"\n");

    /* Output of APSDE-DATA.indication primitive field by field */
    DEBUG_PRINT(DBG_APP,"Field by field: \n");
    DEBUG_PRINT(DBG_APP,"----------------\n");
    DEBUG_PRINT(DBG_APP,"Destination address: ");
    for (int i = 0; i < 8; i++)
        DEBUG_PRINT(DBG_APP,"%02x ",*((uint8_t*)data->dstAddress.ieeeAddr[i]));
    DEBUG_PRINT(DBG_APP,"\n");
    DEBUG_PRINT(DBG_APP,"Destination address mode: 0x%02x\r\n",*((uint8_t*)data->dstAddrMode));
    DEBUG_PRINT(DBG_APP,"Destination endpoint: 0x%02x\r\n",*((uint8_t*)data->dstEndPoint));
    DEBUG_PRINT(DBG_APP,"Source address mode: 0x%02x\r\n",*((uint8_t*)data->dstAddrMode));
    DEBUG_PRINT(DBG_APP,"Source address: ");
    for (int i = 0; i < 8; i++)
        DEBUG_PRINT(DBG_APP,"%02x ",*((uint8_t*)data->srcAddress.ieeeAddr[i]));
    DEBUG_PRINT(DBG_APP,"\n");
    DEBUG_PRINT(DBG_APP,"Source endpoint: 0x%02x\r\n",*((uint8_t*)data->srcEndPoint));
    DEBUG_PRINT(DBG_APP,"Profile Id: 0x%04x\r\n",*((uint16_t*)data->profileId));
    DEBUG_PRINT(DBG_APP,"Cluster Id: 0x%04x\r\n",*((uint16_t*)data->clusterId));
    DEBUG_PRINT(DBG_APP,"Message length: 0x%02x\r\n",*((uint8_t*)data->messageLength));
    DEBUG_PRINT(DBG_APP,"Flags: 0x%02x\r\n",*((uint8_t*)data->flags));
    DEBUG_PRINT(DBG_APP,"Security status: 0x%02x\r\n",*((uint8_t*)data->securityStatus));
    DEBUG_PRINT(DBG_APP,"Link quality: 0x%02x\r\n",*((uint8_t*)data->linkQuality));
    DEBUG_PRINT(DBG_APP,"Source MAC Address: 0x%04x\r\n",*((uint16_t*)data->messageLength));
    DEBUG_PRINT(DBG_APP,"Message: ");
    for (int i = 0; i < 13; i++){
        DEBUG_PRINT(DBG_APP,"%02x ",*((uint8_t*)data->messageContents+i));
    }
    DEBUG_PRINT(DBG_APP,"\n");

    bufm_deallocateBuffer((uint8_t *)data, CORE_MEM);
}
APSDE-DATA.indication primitive is implemented by following structures:
/**
 * \brief type definition for address (union of short address and extended address)
 */
typedef union zbpro_address_tag {
    uint16_t shortAddr;
    uint8_t  ieeeAddr[8];
} zbpro_address_t;

/**
 * \brief apsde data indication structure
 */
PACKED struct zbpro_dataInd_tag {
    zbpro_address_t dstAddress;
    uint8_t  dstAddrMode;
    uint8_t  dstEndPoint;
    uint8_t  srcAddrMode;
    zbpro_address_t srcAddress;
    uint8_t  srcEndPoint;
    uint16_t profileId;
    uint16_t clusterId;
    uint8_t  messageLength;
    uint8_t  flags;          /* bit0: broadcast or not; bit1: need aps ack or not; bit2: nwk key used; bit3: aps link key used */
    uint8_t  securityStatus; /* not-used, reserved for future */
    uint8_t  linkQuality;
    uint16_t src_mac_addr;
    uint8_t  messageContents[1];
};

typedef PACKED struct zbpro_dataInd_tag zbpro_dataInd_t;
As a result, I receive the following:
[APSDE-DATA.indication]
Raw data:
---------
00 00 00 72 4c 19 40 00 02 e8 03 c2 30 02 fe ff 83 0a 00 e8 05 c1 11 00 11 08 58 40 72 4c ae 53 4d 3f 63 9f d8 51 da ca 87 a9 0b b3 7b 04 68 ca 87 a9
Field by field:
---------------
Destination address: 00 00 00 28 fa 44 34 00
Destination address mode: 0x12
Destination endpoint: 0xc2
Source address mode: 0x12
Source address: 13 01 12 07 02 bd 02 00
Source endpoint: 0xc2
Profile Id: 0xc940
Cluster Id: 0x90a0
Message length: 0x00
Flags: 0x00
Security status: 0x04
Link quality: 0x34
Source MAC Address: 0x90a0
Message: ae 53 4d 3f 63 9f d8 51 da ca 87 a9 0b
From this output I can see that, while the raw string has some of the expected values, the dispatched fields are totally different. What is the reason for this behavior, and how can I fix it? Is it somehow related to the ARM architecture, or to wrong type casting?
I don't have access to the implementation of DEBUG_PRINT, but we can assume that it works properly.
There's no need to dereference in your DEBUG_PRINT statements, for example
DEBUG_PRINT(DBG_APP,"%02x ",*((uint8_t*)data->dstAddress.ieeeAddr[i]));
should be simply
DEBUG_PRINT(DBG_APP,"%02x ", data->dstAddress.ieeeAddr[i]);
so on and so forth...
Consider this code:
DEBUG_PRINT(DBG_APP,"%02x ",*((uint8_t*)data->dstAddress.ieeeAddr[i]));
Array subscripting and direct and indirect member access have higher precedence than does casting, so the third argument is equivalent to
*( (uint8_t*) (data->dstAddress.ieeeAddr[i]) )
But data->dstAddress.ieeeAddr[i] is not a pointer, it is an uint8_t. C permits you to convert it to a pointer by casting, but the result is not a pointer to the value, but rather a pointer interpretation of the value. Dereferencing it produces undefined behavior.
Similar applies to your other DEBUG_PRINT() calls.

How to resolve the "EVP_DecryptFInal_ex: bad decrypt" during file decryption

I have the following query; could anyone please suggest a solution?
I'm working on encryption and decryption of a file for the first time.
I have encrypted file through command prompt using the command:
openssl enc -aes-256-cbc -in file.txt -out file.enc -k "key value" -iv "iv value"
I have to decrypt it programmatically. So I have written the program for it, but it is throwing the following error:
./exe_file enc_file_directory
...
error: 06065064: digital envelope routines: EVP_DecryptFInal_ex: bad decrypt: evp_enc.c
The program below takes a directory path as input, searches for encrypted ".enc" files, and tries to decrypt each one and read the result into a buffer.
Code:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <dirent.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <openssl/evp.h>
#include <openssl/err.h>
#include <openssl/conf.h>
#include <libxml/globals.h>

void handleErrors(char *msg)
{
    ERR_print_errors_fp(stderr);
    printf("%s", msg);
    abort();
}

void freeMemory(char *mem)
{
    if (NULL != mem)
    {
        free(mem);
        mem = NULL;
    }
}

/* Function to decrypt the XML files */
int decryptXML(unsigned char *indata, unsigned char *outdata, int fsize)
{
    int outlen1 = 0, outlen2 = 0;
    unsigned char iv[] = "b63e541bc9ece19a1339df4f8720dcc3";
    unsigned char ckey[] = "70bbc518c57acca2c2001694648c40ddaf19e3b4fe1376ad656de8887a0a5ec2";

    if (NULL == indata)
    {
        printf ("input data is empty\n");
        return 0;
    }
    if (0 >= fsize)
    {
        printf ("file size is zero\n");
        return 0;
    }

    outdata = (char *) malloc (sizeof (char) * fsize * 2);

    EVP_CIPHER_CTX ctx;
    EVP_CIPHER_CTX_init(&ctx);

    if (! EVP_DecryptInit_ex (&ctx, EVP_aes_256_cbc(), NULL, ckey, iv))
    {
        EVP_CIPHER_CTX_cleanup(&ctx);
        handleErrors("DInit");
    }
    if (! EVP_DecryptUpdate (&ctx, outdata, &outlen1, indata, fsize))
    {
        EVP_CIPHER_CTX_cleanup(&ctx);
        handleErrors("DUpdate");
    }
    if (! EVP_DecryptFinal_ex (&ctx, outdata + outlen1, &outlen2))
    {
        EVP_CIPHER_CTX_cleanup(&ctx);
        handleErrors("DFinal");
    }
    EVP_CIPHER_CTX_cleanup(&ctx);
    return outlen1 + outlen2;
}

int isDirectory(char *path)
{
    DIR *dir = NULL;
    FILE *fin = NULL, *fout = NULL;
    int enc_len = 0, dec_len = 0, fsize = 0, ksize = 0;
    unsigned char *indata = NULL, *outdata = NULL;
    char buff[BUFFER_SIZE], file_path[BUFFER_SIZE], cur_dir[BUFFER_SIZE];
    struct dirent *in_dir;
    struct stat s;

    if (NULL == (dir = opendir(path)))
    {
        printf ("ERROR: Failed to open the directory %s\n", path);
        perror("cannot open.");
        exit(1);
    }
    while (NULL != (in_dir = readdir(dir)))
    {
        if (!strcmp (in_dir->d_name, ".") || !strcmp(in_dir->d_name, ".."))
            continue;
        sprintf (buff, "%s/%s", path, in_dir->d_name);
        if (-1 == stat(buff, &s))
        {
            perror("stat");
            exit(1);
        }
        if (S_ISDIR(s.st_mode))
        {
            isDirectory(buff);
        }
        else
        {
            strcpy(file_path, buff);
            if (strstr(file_path, ".enc"))
            {
                /* File to be decrypted */
                fout = fopen(file_path,"rb");
                fseek (fout, 0L, SEEK_END);
                fsize = ftell(fout);
                fseek (fout, 0L, SEEK_SET);
                indata = (char*)malloc(fsize);
                fread (indata, sizeof(char), fsize, fout);
                if (NULL == fout)
                {
                    perror("Cannot open enc file: ");
                    return 1;
                }
                dec_len = decryptXML (indata, outdata, fsize);
                outdata[dec_len] = '\0';
                printf ("%s\n", outdata);
                fclose (fin);
                fclose (fout);
            }
        }
    }
    closedir(dir);
    freeMemory(outdata);
    freeMemory(indata);
    return 1;
}

int main(int argc, char *argv[])
{
    int result;

    if (argc != 2)
    {
        printf ("Usage: <executable> path_of_the_files\n");
        return -1;
    }

    ERR_load_crypto_strings();
    OpenSSL_add_all_algorithms();
    OPENSSL_config(NULL);

    /* Checking for the directory existence */
    result = isDirectory(argv[1]);

    EVP_cleanup();
    ERR_free_strings();

    if (0 == result)
        return 1;
    else
        return 0;
}
Thank you.
This message, digital envelope routines: EVP_DecryptFinal_ex: bad decrypt, can also occur when you encrypt and decrypt with incompatible versions of OpenSSL.
The issue I was having was that I was encrypting on Windows which had version 1.1.0 and then decrypting on a generic Linux system which had 1.0.2g.
It is not a very helpful error message!
Working solution:
A possible solution from @AndrewSavinykh that worked for many (see the comments):
The default digest changed between those versions from MD5 to SHA-256. One can specify the digest on the command line as -md sha256 or -md md5 respectively.
I experienced a similar error while using the OpenSSL command-line interface, even though I had the correct binary key (-K). The "-nopad" option resolved the issue.
Example generating the error:
echo -ne "\x32\xc8\xde\x5c\x68\x19\x7e\x53\xa5\x75\xe1\x76\x1d\x20\x16\xb2\x72\xd8\x40\x87\x25\xb3\x71\x21\x89\xf6\xca\x46\x9f\xd0\x0d\x08\x65\x49\x23\x30\x1f\xe0\x38\x48\x70\xdb\x3b\xa8\x56\xb5\x4a\xc6\x09\x9e\x6c\x31\xce\x60\xee\xa2\x58\x72\xf6\xb5\x74\xa8\x9d\x0c" | openssl aes-128-cbc -d -K 31323334353637383930313233343536 -iv 79169625096006022424242424242424 | od -t x1
Result:
bad decrypt
140181876450560:error:06065064:digital envelope
routines:EVP_DecryptFinal_ex:bad decrypt:../crypto/evp/evp_enc.c:535:
0000000 2f 2f 07 02 54 0b 00 00 00 00 00 00 04 29 00 00
0000020 00 00 04 a9 ff 01 00 00 00 00 04 a9 ff 02 00 00
0000040 00 00 04 a9 ff 03 00 00 00 00 0d 79 0a 30 36 38
Example with correct result:
echo -ne "\x32\xc8\xde\x5c\x68\x19\x7e\x53\xa5\x75\xe1\x76\x1d\x20\x16\xb2\x72\xd8\x40\x87\x25\xb3\x71\x21\x89\xf6\xca\x46\x9f\xd0\x0d\x08\x65\x49\x23\x30\x1f\xe0\x38\x48\x70\xdb\x3b\xa8\x56\xb5\x4a\xc6\x09\x9e\x6c\x31\xce\x60\xee\xa2\x58\x72\xf6\xb5\x74\xa8\x9d\x0c" | openssl aes-128-cbc -d -K 31323334353637383930313233343536 -iv 79169625096006022424242424242424 -nopad | od -t x1
Result:
0000000 2f 2f 07 02 54 0b 00 00 00 00 00 00 04 29 00 00
0000020 00 00 04 a9 ff 01 00 00 00 00 04 a9 ff 02 00 00
0000040 00 00 04 a9 ff 03 00 00 00 00 0d 79 0a 30 36 38
0000060 30 30 30 34 31 33 31 2f 2f 2f 2f 2f 2f 2f 2f 2f
0000100
I think the key and IV used for encryption on the command line and for decryption in your program are not the same.
Please note that when you use "-k" (different from "-K"), the input is treated as a password from which the key is derived. Generally in this case there is no need for the "-iv" option, as both the key and the IV will be derived from the input given with "-k".
It is not clear from your question how you are ensuring that the key and IV are the same between encryption and decryption.
My suggestion is to use the "-K" and "-iv" options to explicitly specify the key and IV during encryption, and to use the same values for decryption. If you need to use "-k", then use the "-p" option to print the key and IV used for encryption, and use those in your decryption program.
More details can be obtained at https://www.openssl.org/docs/manmaster/apps/enc.html
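For example, an encrypt/decrypt round trip with explicit "-K" and "-iv" (the hex values below are examples only) is reproducible across machines and OpenSSL versions, because no password-based key derivation is involved:

```shell
# Explicit 256-bit key and 128-bit IV; example values only.
KEY=603deb1015ca71be2b73aef0857d77811f352c073b6108d72d9810a30914dff4
IV=f0f1f2f3f4f5f6f7f8f9fafbfcfdfeff

echo -n "hello world" > plain.txt
openssl enc -aes-256-cbc -in plain.txt -out plain.enc -K "$KEY" -iv "$IV"
openssl enc -aes-256-cbc -d -in plain.enc -K "$KEY" -iv "$IV"
# prints: hello world
```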
Errors:
"Bad encrypt / decrypt"
"gitencrypt_smudge: FAILURE: openssl error decrypting file"
There are various error strings that are thrown from openssl, depending on respective versions, and scenarios. Below is the checklist I use in case of openssl related issues:
Ideally, OpenSSL can only decrypt data with the same key (+ salt) and encryption algorithm that were used to encrypt it.
Ensure that the OpenSSL versions used to encrypt and decrypt are compatible. For example, the default hash used by openssl enc changed at version 1.1.0 from MD5 to SHA-256, which produces a different key from the same password.
Fix:
add "-md md5" in 1.1.0 to decrypt data from lower versions, and
add "-md sha256" in lower versions to decrypt data from 1.1.0.
Ensure that there is a single OpenSSL version installed on your machine. In case there are multiple versions installed simultaneously (on my machine these were 'LibreSSL 2.6.5' and 'OpenSSL 1.1.1d'), make sure that only the desired one appears in your PATH variable.
This message can also occur when you specify an incorrect decryption password (yeah, lame, but not quite obvious to realize from the error message, huh?).
I was using the command line to decrypt a recent database backup for my auxiliary tool and suddenly faced this issue.
Finally, after 10 minutes of grief, plus reading through this question and its answers, I remembered that the password was different, and everything worked just fine with the correct one.
You should use the decrypted private key, for example: yourprivatekey.decrypted.key. You can run this command to decrypt your private key file:
openssl rsa -in <encrypted_private.key> -out <decrypted_private.key>
Enter pass phrase for encrypted_private.key: <enter the password>
Then wait for:
writing RSA key
and it's done.
In my case, the server was encrypting with padding disabled, but the client was trying to decrypt with padding enabled.
When using EVP_CIPHER*, padding is enabled by default. To disable it explicitly, call
EVP_CIPHER_CTX_set_padding(context, 0);
So non-matching padding options can be another reason.

Creating a DER formatted ECDSA signature from raw r and s

I have a raw ECDSA signature: R and S values. I need a DER-encoded version of the signature. Is there a straightforward way to do this in openssl using the c interface?
My current attempt is to use i2d_ECDSA_SIG(const ECDSA_SIG *sig, unsigned char **pp) to populate an ECDSA_SIG*. The call returns non-zero but the target buffer doesn't seem to be changed.
I'm initially filling my ECDSA_SIG with r and s values. I don't see any errors. The man page says r and s are allocated when I call ECDSA_SIG_new:
ECDSA_SIG* ec_sig = ECDSA_SIG_new();

if (NULL == BN_bin2bn(sig, 32, (ec_sig->r))) {
    dumpOpenSslErrors();
}
DBG("post r :%s\n", BN_bn2hex(ec_sig->r));

if (NULL == BN_bin2bn(sig + 32, 32, (ec_sig->s))) {
    dumpOpenSslErrors();
}
DBG("post s :%s\n", BN_bn2hex(ec_sig->s));
S and R are now set:
post r :397116930C282D1FCB71166A2D06728120CF2EE5CF6CCD4E2D822E8E0AE24A30
post s :9E997D4718A7603942834FBDD22A4B856FC4083704EDE62033CF1A77CB9822A9
now to make the encoded signature.
int sig_size = i2d_ECDSA_SIG(ec_sig, NULL);
if (sig_size > 255) {
    DBG("signature is too large wants %d\n", sig_size);
}
DBG("post i2d:%s\n", BN_bn2hex(ec_sig->s));
s hasn't changed:
post i2d:9E997D4718A7603942834FBDD22A4B856FC4083704EDE62033CF1A77CB9822A9
At this point I have more than enough bytes ready and I set the target to all 6s so it's easy to see what changes.
unsigned char* sig_bytes = new unsigned char[256];
memset(sig_bytes, 6, 256);

sig_size = i2d_ECDSA_SIG(ec_sig, (&sig_bytes));

DBG("New size %d\n", sig_size);
DBG("post i2d:%s\n", BN_bn2hex(ec_sig->s));
hexDump("Sig ", (const byte*)sig_bytes, sig_size);
The new size is 71, and s is still the same:
`post i2d:9E997D4718A7603942834FBDD22A4B856FC4083704EDE62033CF1A77CB9822A9`
The hex dump is all 6s.
--Sig --
0x06: 0x06: 0x06: 0x06: 0x06: 0x06: 0x06: 0x06:
0x06: ...
The dump is still all 6s even though the call didn't return 0. What am I missing trying to DER-encode this raw signature?
i2d_ECDSA_SIG modifies its second argument, increasing it by the size of the signature. From ecdsa.h:
/** DER encode content of ECDSA_SIG object (note: this function modifies *pp
* (*pp += length of the DER encoded signature)).
* \param sig pointer to the ECDSA_SIG object
* \param pp pointer to a unsigned char pointer for the output or NULL
* \return the length of the DER encoded ECDSA_SIG object or 0
*/
int i2d_ECDSA_SIG(const ECDSA_SIG *sig, unsigned char **pp);
So you need to keep track of the original value of sig_bytes when you call i2d_ECDSA_SIG:
int sig_size = i2d_ECDSA_SIG(ec_sig, NULL);
unsigned char *sig_bytes = malloc(sig_size);
unsigned char *p;

memset(sig_bytes, 6, sig_size);
p = sig_bytes;
int new_sig_size = i2d_ECDSA_SIG(ec_sig, &p);
// The value of p is now sig_bytes + sig_size, and the signature resides at sig_bytes
Output:
30 45 02 20 39 71 16 93 0C 28 2D 1F CB 71 16 6A
2D 06 72 81 20 CF 2E E5 CF 6C CD 4E 2D 82 2E 8E
0A E2 4A 30 02 21 00 9E 99 7D 47 18 A7 60 39 42
83 4F BD D2 2A 4B 85 6F C4 08 37 04 ED E6 20 33
CF 1A 77 CB 98 22 A9

Extracting data from struct sk_buff

I'm attempting to extract data from a struct sk_buff, but have not received the output I am expecting. The frame in question is 34 bytes; a 14-byte Ethernet header wrapped around an 8-byte (experimental protocol) header:
struct monitoring_hdr {
    u8 version;
    u8 type;
    u8 reserved;
    u8 haddr_len;
    u32 clock;
} __packed;
After this header there are two variable-length hardware addresses (their lengths are dictated by the haddr_len field above). In the example here, they are both 6 bytes long.
The following code extracts the header (the struct) correctly, but not the two MAC addresses that follow.
Sender side:
...
skb = alloc_skb(mtu, GFP_ATOMIC);
if (unlikely(!skb))
    return;

skb_reserve(skb, ll_hlen);
skb_reset_network_header(skb);
nwp = (struct monitoring_hdr *)skb_put(skb, hdr_len);

/* ... Set up fields in struct monitoring_hdr ... */

memcpy(skb_put(skb, dev->addr_len), src, dev->addr_len);
memcpy(skb_put(skb, dev->addr_len), dst, dev->addr_len);
...
Receiver side:
...
skb_reset_network_header(skb);
nwp = (struct monitoring_hdr *)skb_network_header(skb);
src = skb_pull(skb, nwp->haddr_len);
dst = skb_pull(skb, nwp->haddr_len);
...
Expected output:
I used tcpdump to capture the packet in question on the wire, and saw this (it was actually padded to 60 bytes by the sender's NIC; I've omitted the padding):
0000 | 00 90 f5 c6 44 5b 00 0e c6 89 04 2f c0 df 01 03
0010 | 00 06 d0 ba 8c 88 00 0e c6 89 04 2f 00 90 f5 c6
0020 | 44 5b
The first 14 bytes are the Ethernet header. The following 8 bytes (starting with 01 and ending with 88) should be the bytes put into the struct monitoring_hdr, which executes correctly. Then, I am expecting the following MAC addresses to be found:
src = 00 0e c6 89 04 2f
dst = 00 90 f5 c6 44 5b
Actual output:
However, the data that I receive is shifted two bytes to the left:
src = 8c 88 00 0e c6 89
dst = 04 2f 00 90 f5 c6
Can anyone see a logical flaw in above code? Or is there a better way to do this? I've also tried skb_pull in place of skb_network_header on the receiving side, but that resulted in a kernel panic.
Thanks in advance for any help.
SOLUTION:
The pointer to the first byte of the data in the sk_buff was not being pointed to by src as it should have been. I ended up using the following:
...
skb_reset_network_header(skb);
nwp = (struct monitoring_hdr *)skb_network_header(skb);
skb_pull(skb, offsetof(struct monitoring_hdr, haddrs_begin));
src = skb->data;
dst = skb_pull(skb, nwp->haddr_len);
...
Looking at the skbuff.h header, the functions you are using look like this:
static inline void skb_reset_network_header(struct sk_buff *skb)
{
    skb->network_header = skb->data - skb->head;
}

static inline unsigned char *skb_network_header(const struct sk_buff *skb)
{
    return skb->head + skb->network_header;
}

extern unsigned char *skb_pull(struct sk_buff *skb, unsigned int len);

static inline unsigned char *__skb_pull(struct sk_buff *skb, unsigned int len)
{
    skb->len -= len;
    BUG_ON(skb->len < skb->data_len);
    return skb->data += len;
}
So first, I would try printing out skb->data and skb->head to make sure they are referencing the parts of the packet you expect them to. Since you are using a custom protocol here, perhaps there is a bug in the header-processing code which is causing skb->data to be set incorrectly.
Also, looking at the definitions of skb_network_header and skb_pull makes me think perhaps you are using them incorrectly. Shouldn't the first 6-byte address be at the location pointed to by the return value of skb_network_header()? It looks like that function adds the length of the header block to the head of the buffer, which should result in a pointer to your first data value.
Similarly, it looks like skb_pull() adds the length of the field you pass in and returns the pointer to the next byte. So you probably want something more like this:
src = skb_network_header(skb);
dst = skb_pull(skb, nwp->haddr_len);
I hope that helps. I'm sorry that this is not an exact answer.
