Breaking down a raw byte string with a particular structure gives wrong data

I am working with a ZigBee module based on a 32-bit ARM Cortex-M3, but my question is not related to the ZigBee protocol itself. I only have access to the source code of the application layer, which should be enough for my purposes. The lower layer (APS) passes data to the application layer within the APSDE-DATA.indication primitive, which arrives at the following application function:
void zbpro_dataRcvdHandler(zbpro_dataInd_t *data)
{
    DEBUG_PRINT(DBG_APP,"\n[APSDE-DATA.indication]\r\n");

    /* Output of the raw byte string for further investigation.
     * Real length is unknown, 50 is an approximation.
     */
    DEBUG_PRINT(DBG_APP,"Raw data: \n");
    DEBUG_PRINT(DBG_APP,"----------\n");
    for (int i = 0; i < 50; i++){
        DEBUG_PRINT(DBG_APP,"%02x ",*((uint8_t*)data+i));
    }
    DEBUG_PRINT(DBG_APP,"\n");

    /* Output of the APSDE-DATA.indication primitive field by field */
    DEBUG_PRINT(DBG_APP,"Field by field: \n");
    DEBUG_PRINT(DBG_APP,"----------------\n");
    DEBUG_PRINT(DBG_APP,"Destination address: ");
    for (int i = 0; i < 8; i++)
        DEBUG_PRINT(DBG_APP,"%02x ",*((uint8_t*)data->dstAddress.ieeeAddr[i]));
    DEBUG_PRINT(DBG_APP,"\n");
    DEBUG_PRINT(DBG_APP,"Destination address mode: 0x%02x\r\n",*((uint8_t*)data->dstAddrMode));
    DEBUG_PRINT(DBG_APP,"Destination endpoint: 0x%02x\r\n",*((uint8_t*)data->dstEndPoint));
    DEBUG_PRINT(DBG_APP,"Source address mode: 0x%02x\r\n",*((uint8_t*)data->dstAddrMode));
    DEBUG_PRINT(DBG_APP,"Source address: ");
    for (int i = 0; i < 8; i++)
        DEBUG_PRINT(DBG_APP,"%02x ",*((uint8_t*)data->srcAddress.ieeeAddr[i]));
    DEBUG_PRINT(DBG_APP,"\n");
    DEBUG_PRINT(DBG_APP,"Source endpoint: 0x%02x\r\n",*((uint8_t*)data->srcEndPoint));
    DEBUG_PRINT(DBG_APP,"Profile Id: 0x%04x\r\n",*((uint16_t*)data->profileId));
    DEBUG_PRINT(DBG_APP,"Cluster Id: 0x%04x\r\n",*((uint16_t*)data->clusterId));
    DEBUG_PRINT(DBG_APP,"Message length: 0x%02x\r\n",*((uint8_t*)data->messageLength));
    DEBUG_PRINT(DBG_APP,"Flags: 0x%02x\r\n",*((uint8_t*)data->flags));
    DEBUG_PRINT(DBG_APP,"Security status: 0x%02x\r\n",*((uint8_t*)data->securityStatus));
    DEBUG_PRINT(DBG_APP,"Link quality: 0x%02x\r\n",*((uint8_t*)data->linkQuality));
    DEBUG_PRINT(DBG_APP,"Source MAC Address: 0x%04x\r\n",*((uint16_t*)data->messageLength));
    DEBUG_PRINT(DBG_APP,"Message: ");
    for (int i = 0; i < 13; i++){
        DEBUG_PRINT(DBG_APP,"%02x ",*((uint8_t*)data->messageContents+i));
    }
    DEBUG_PRINT(DBG_APP,"\n");

    bufm_deallocateBuffer((uint8_t *)data, CORE_MEM);
}
The APSDE-DATA.indication primitive is implemented by the following structures:
/**
 * @brief Type definition for an address (union of short address and extended address)
 */
typedef union zbpro_address_tag {
    uint16_t shortAddr;
    uint8_t  ieeeAddr[8];
} zbpro_address_t;

/**
 * @brief APSDE data indication structure
 */
PACKED struct zbpro_dataInd_tag {
    zbpro_address_t dstAddress;
    uint8_t         dstAddrMode;
    uint8_t         dstEndPoint;
    uint8_t         srcAddrMode;
    zbpro_address_t srcAddress;
    uint8_t         srcEndPoint;
    uint16_t        profileId;
    uint16_t        clusterId;
    uint8_t         messageLength;
    uint8_t         flags;          /* bit0: broadcast or not; bit1: need aps ack or not; bit2: nwk key used; bit3: aps link key used */
    uint8_t         securityStatus; /* not used, reserved for future */
    uint8_t         linkQuality;
    uint16_t        src_mac_addr;
    uint8_t         messageContents[1];
};
typedef PACKED struct zbpro_dataInd_tag zbpro_dataInd_t;
As a result I receive the following:
[APSDE-DATA.indication]
Raw data:
---------
00 00 00 72 4c 19 40 00 02 e8 03 c2 30 02 fe ff 83 0a 00 e8 05 c1 11 00 11 08 58 40 72 4c ae 53 4d 3f 63 9f d8 51 da ca 87 a9 0b b3 7b 04 68 ca 87 a9
Field by field:
---------------
Destination address: 00 00 00 28 fa 44 34 00
Destination address mode: 0x12
Destination endpoint: 0xc2
Source address mode: 0x12
Source address: 13 01 12 07 02 bd 02 00
Source endpoint: 0xc2
Profile Id: 0xc940
Cluster Id: 0x90a0
Message length: 0x00
Flags: 0x00
Security status: 0x04
Link quality: 0x34
Source MAC Address: 0x90a0
Message: ae 53 4d 3f 63 9f d8 51 da ca 87 a9 0b
From this output I can see that while the raw string contains some expected values, the dispatched fields are totally different. What is the reason for this behavior, and how can I fix it? Is it somehow related to the ARM architecture or to wrong type casting?
I don't have access to the implementation of DEBUG_PRINT, but we can assume that it works properly.

There's no need to dereference in your DEBUG_PRINT statements. For example,
DEBUG_PRINT(DBG_APP,"%02x ",*((uint8_t*)data->dstAddress.ieeeAddr[i]));
should simply be
DEBUG_PRINT(DBG_APP,"%02x ", data->dstAddress.ieeeAddr[i]);
and so on and so forth.

Consider this code:
DEBUG_PRINT(DBG_APP,"%02x ",*((uint8_t*)data->dstAddress.ieeeAddr[i]));
Array subscripting and direct and indirect member access have higher precedence than does casting, so the third argument is equivalent to
*( (uint8_t*) (data->dstAddress.ieeeAddr[i]) )
But data->dstAddress.ieeeAddr[i] is not a pointer, it is an uint8_t. C permits you to convert it to a pointer by casting, but the result is not a pointer to the value, but rather a pointer interpretation of the value. Dereferencing it produces undefined behavior.
The same applies to your other DEBUG_PRINT() calls.
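Putting that together, here is a minimal sketch of the corrected prints, assuming the zbpro_dataInd_t layout shown in the question; plain member access is all that's needed:

/* No casts, no dereferences: the members are values, not pointers. */
DEBUG_PRINT(DBG_APP, "Destination address: ");
for (int i = 0; i < 8; i++)
    DEBUG_PRINT(DBG_APP, "%02x ", data->dstAddress.ieeeAddr[i]);
DEBUG_PRINT(DBG_APP, "\n");
DEBUG_PRINT(DBG_APP, "Destination address mode: 0x%02x\r\n", data->dstAddrMode);
DEBUG_PRINT(DBG_APP, "Source address mode: 0x%02x\r\n", data->srcAddrMode);
DEBUG_PRINT(DBG_APP, "Profile Id: 0x%04x\r\n", data->profileId);
DEBUG_PRINT(DBG_APP, "Source MAC Address: 0x%04x\r\n", data->src_mac_addr);

Note that the question's code also passes dstAddrMode on the "Source address mode" line and messageLength on the "Source MAC Address" line; those look like copy-paste slips independent of the casting problem.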

Related

ESP32 - Extract Manufacturer Specific Data from advertisement

TL;DR: One ESP32 broadcasts via BLE (already working), another ESP32 listens. I am unable to parse the received advertisements correctly, i.e. I can't extract the manufacturer specific data!
Goal: One ESP32 (call it A) broadcasts an advertisement containing manufacturer specific data (MSD), which is received by another ESP32 (call it B), which prints that data to the console.
I am using the new RISC-V based ESP32C3 which supports Bluetooth 5.0, though everything I do is based on Bluetooth 4.2.
Where I am:
A can broadcast a valid advertisement (checked with an Ubertooth/Wireshark)
B receives something from A, though the packet only very loosely corresponds to the (correct) packet received by the Ubertooth.
Code:
Structs used to set up A:
// Struct defining advertising parameters
static esp_ble_adv_params_t ble_adv_params = {
    .adv_int_min = 0x0800,
    .adv_int_max = 0x0900,
    .adv_type = ADV_TYPE_NONCONN_IND,
    .own_addr_type = BLE_ADDR_TYPE_PUBLIC,
    .channel_map = ADV_CHNL_ALL,
    .adv_filter_policy = ADV_FILTER_ALLOW_SCAN_ANY_CON_ANY, // NO IDEA ABOUT THAT ONE, ESPECIALLY GIVEN THAT WE SEND NONCONNECTABLE AND NONSCANNABLE ADVERTISEMENTS!
};
Struct used to define the payload of the advertisement:
esp_ble_adv_data_t ble_adv_data =
{
    .set_scan_rsp = false,
    .include_name = true,       // "Name" refers to the name passed as an argument within "esp_ble_gap_set_device_name()"
    .include_txpower = false,
    .min_interval = 0xffff,     // Not sure what those are for, as the chosen advertisement packets are non-connectable...
    .max_interval = 0xFFFF,
    .appearance = 64,           // 64 is the appearance ID of a phone. Only to maybe be able to find it on my Galaxy phone
    .manufacturer_len = ble_adv_payload_len,
    .p_manufacturer_data = (uint8_t *) ble_adv_payload, // Currently just some human-readable string used for debugging
    .service_data_len = 0,
    .p_service_data = NULL,
    .service_uuid_len = 0,
    .p_service_uuid = NULL,
    .flag = 0
};
As the Ubertooth receives correct packets sent by A, I reckon A has been set up correctly.
Struct used to define scanning behavior of B:
// Struct defining scanning parameters
static esp_ble_scan_params_t ble_scan_params = {
    .scan_type = BLE_SCAN_TYPE_PASSIVE,                     // Don't send scan requests upon receiving an advertisement
    .own_addr_type = BLE_ADDR_TYPE_PUBLIC,                  // Use (static) public address, makes debugging easier
    .scan_filter_policy = BLE_SCAN_FILTER_ALLOW_ONLY_WLST,  // Consider all advertisements
    .scan_interval = 0x50,                                  // Time between each scan window begin
    .scan_window = 0x30,                                    // Length of scan window
    .scan_duplicate = BLE_SCAN_DUPLICATE_DISABLE            // Filters out duplicate advertisements, e.g. if an advertisement is received k times, it is only reported once
};
The majority of the remaining code is just boilerplate; the only really relevant part is B's callback function, which gets called whenever a GAP event occurs (the GAP events can be found in esp_gap_ble_api.h, beginning on line 138).
B's callback function:
void esp_ble_callback_fun(esp_gap_ble_cb_event_t event, esp_ble_gap_cb_param_t *param)
{
    // Do a case split on the different events. For now, only "ESP_GAP_BLE_SCAN_RESULT_EVT" is of interest
    switch (event) {
    case ESP_GAP_BLE_SCAN_RESULT_EVT:
        if ((param->scan_rst).search_evt != ESP_GAP_SEARCH_INQ_RES_EVT) {
            printf(
                "B: Callback function received a non-\"ESP_GAP_SEARCH_INQ_RES_EVT\" event!\n");
            return;
        }
        // Copy the parameter UNION
        esp_ble_gap_cb_param_t scan_result = *param;
        // Create a POINTER to the "bda" entry, which is accessed by interpreting the scan_result UNION as a scan_rst STRUCT
        // Note that "esp_bd_addr_t" is a typedef for a uint8_t array!
        esp_bd_addr_t *ble_adv_addr = &scan_result.scan_rst.bda;
        printf("\n-------------------------\nMessage: \n");
        uint8_t adv_data_len = scan_result.scan_rst.adv_data_len;
        uint8_t *adv_data = scan_result.scan_rst.ble_adv;
        printf("Message length: %i\n", adv_data_len);
        printf("Message body:\n"); // NOT SO SURE ABOUT THIS!
        for (int i = 0; i < adv_data_len; ++i)
        {
            printf("%X", adv_data[i]);
        }
        printf("\n-------------------------\n");
        break; // #suppress("No break at end of case")
    default:
        // NOT SUPPORTED! JUST IGNORE THEM!
        break;
    }
}
Sample output:
Serial output by B:
Message:
Message length: 22
Message body:
31940069414C4943454FF486579512FFFFFFFF
Packet as received in Wireshark:
Frame 78135: 61 bytes on wire (488 bits), 61 bytes captured (488 bits) on interface /tmp/pipe, id 0
PPI version 0, 24 bytes
Version: 0
Flags: 0x00
.... ...0 = Alignment: Not aligned
0000 000. = Reserved: 0x00
Header length: 24
DLT: 251
Reserved: 36750c0000620900e05b56b811051000
Bluetooth
Bluetooth Low Energy Link Layer
Access Address: 0x8e89bed6
Packet Header: 0x1c22 (PDU Type: ADV_NONCONN_IND, ChSel: #2, TxAdd: Public)
.... 0010 = PDU Type: ADV_NONCONN_IND (0x2)
...0 .... = RFU: 0
..1. .... = Channel Selection Algorithm: #2
.0.. .... = Tx Address: Public
0... .... = Reserved: False
Length: 28
Advertising Address: Espressi_43:3e:d6 (7c:df:a1:43:3e:d6)
Advertising Data
Appearance: Generic Phone
Device Name: ALICE
Manufacturer Specific
Slave Connection Interval Range: 81918.8 - 81918.8 msec
Connection Interval Min: 65535 (81918.8 msec)
Connection Interval Max: 65535 (81918.8 msec)
CRC: 0x905934
0000 00 00 18 00 fb 00 00 00 36 75 0c 00 00 62 09 00 ........6u...b..
0010 e0 5b 56 b8 11 05 10 00 d6 be 89 8e 22 1c d6 3e .[V........."..>
0020 43 a1 df 7c 03 19 40 00 06 09 41 4c 49 43 45 04 C..|..#...ALICE.
0030 ff 48 65 79 05 12 ff ff ff ff 09 9a 2c .Hey........,
For a packet without MSD, I computed the longest common subsequence between the binary representation of a packet received by B (i.e. content of adv_data) and the packet received by the Ubertooth. They only had 46 bits in common, a weird number for sure!
My questions:
Am I right in assuming that adv_data, i.e. scan_result.scan_rst.ble_adv holds the raw BLE packet? The definition of ble_adv (esp_gap_ble_api.h, line 936) is incredibly confusing IMO, as it is called "Received EIR" despite EIRs only being introduced in Bluetooth 5.0...
How can I extract the MSD from a received BLE advertisement?
EIR was introduced a long time ago and was already present in Bluetooth 4.0.
You should use %02X when printing hex strings, since that will include leading zeros.
ble_adv contains only the EIR content, not the whole packet.
EIR uses length, type, value encoding. Your manufacturer data is encoded like this:
4 (length)
0xff (type: manufacturer specific data)
Hey (content)
Note that the first two bytes of the manufacturer data content should be a Bluetooth SIG registered company id.
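If it helps, here is a minimal sketch in plain C (not the ESP-IDF helper API; ESP-IDF also ships an esp_ble_resolve_adv_data() helper for this kind of lookup) that walks those length/type/value records in the buffer the callback above calls adv_data:

#include <stdint.h>
#include <stdio.h>

/* Walk the EIR/AD records: each record is one length byte (covering
 * type + value), one type byte, then the value. Type 0xFF is
 * manufacturer specific data; its first two value bytes are the
 * company id, little endian. */
static void print_manufacturer_data(const uint8_t *eir, uint8_t eir_len)
{
    uint8_t i = 0;
    while (i < eir_len) {
        uint8_t len = eir[i];                 /* length of type + value */
        if (len == 0 || i + 1 + len > eir_len)
            break;                            /* end marker or malformed record */
        uint8_t type = eir[i + 1];
        const uint8_t *value = &eir[i + 2];
        uint8_t value_len = len - 1;
        if (type == 0xFF && value_len >= 2) { /* manufacturer specific data */
            printf("Company id: 0x%04X, payload:", value[0] | (value[1] << 8));
            for (uint8_t j = 2; j < value_len; j++)
                printf(" %02X", value[j]);
            printf("\n");
        }
        i += 1 + len;                         /* step to the next record */
    }
}

Called as print_manufacturer_data(adv_data, adv_data_len) in the ESP_GAP_BLE_SCAN_RESULT_EVT branch, it would report company id 0x6548 ("He") for the sample "Hey" payload above, which is exactly why the first two content bytes should be a registered company id.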

Extra padding while sending an Ethernet packet?

I was writing code to send a packet to another MAC address. I am using a struct to create the packet, as shown below:
typedef struct vlink_prm_in_s
{
    vlink_header_t header;
    uint32_t address;
    uint16_t length;
} vlink_prm_in_t;
and when I want to send a packet I do the following:
vlink_prm_in_t g_pkt;
g_pkt.header.verCmd = 0x43;
g_pkt.header.reverseVerCmd = ~(g_pkt.header.verCmd);
g_pkt.address = 0x11111111;
g_pkt.length = 0x2222;
memcpy(sendbuf+headerLen, &g_pkt, sizeof(g_pkt));
printf("%x\n", sendbuf[headerLen+4] );
payloadLen = sizeof(g_pkt);
sendbuf[headerLen+payloadLen] = 0xA5;
payloadLen++;
When I send the packet, this is what I see in Wireshark:
aa bb cc dd ee 66 98 ee cb 03 be 1d ea e8 43 bc 00 00 11 11 11 11 22 22 00 00 a5
I don't know where those extra zeros (the 00 00 pairs after 43 bc and after 22 22) come from. Thanks.
After looking around on the internet some more, I found out that the struct members should be in purely increasing or decreasing order of size (or all the members should be the same size), so that the compiler does not insert alignment padding. This removes the extra zeros. So I did the following:
typedef struct vlink_prm_in_s
{
    vlink_header_t header;
    uint16_t address1;
    uint16_t address2;
    uint16_t length;
} vlink_prm_in_t;
This fixed my problem.
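For reference, the two stray zero-byte pairs are alignment padding: with a 2-byte vlink_header_t, the compiler pads to offset 4 so the uint32_t address is naturally aligned, and pads the tail so the struct size is a multiple of 4. A sketch of the packed alternative, which keeps the uint32_t (GCC/Clang syntax; the 2-byte header definition is hypothetical, reconstructed from the question's verCmd/reverseVerCmd usage):

#include <stdint.h>

/* Hypothetical 2-byte header matching the question's usage. */
typedef struct __attribute__((packed)) vlink_header_s {
    uint8_t verCmd;
    uint8_t reverseVerCmd;
} vlink_header_t;

/* Packing removes all padding, so sizeof(vlink_prm_in_t) == 2 + 4 + 2
 * and the memcpy into sendbuf emits no stray zero bytes. */
typedef struct __attribute__((packed)) vlink_prm_in_s
{
    vlink_header_t header;
    uint32_t address;   /* now at offset 2, no padding before it */
    uint16_t length;
} vlink_prm_in_t;

One caveat: access to a packed uint32_t member may be unaligned, which some targets handle slowly or only with compiler support; since the question memcpy()s the whole struct into sendbuf anyway, that isn't an issue here.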

CTR-AES256 Encrypt does not match OpenSSL -aes-256-ctr

My problem is that I cannot get the AES 256 CTR output from the C code below to match the output from the OpenSSL command below.
The C code produces this:
5f b7 18 d1 28 62 7f 50 35 ba e9 67 a7 17 ab 22
f9 e4 09 ce 23 26 7b 93 82 02 d3 87 eb 01 26 ac
96 2c 01 8c c8 af f3 de a4 18 7f 29 46 00 2e 00
The OpenSSL command line produces this:
5f b7 18 d1 28 62 7f 50 35 ba e9 67 a7 17 ab 22
3c 01 11 bd 39 14 74 76 31 57 a6 53 f9 00 09 b4
6f a9 49 bc 6d 00 77 24 2d ef b9 c4
Notice the first 16 bytes are the same because the nonceIV was the same. However, once the nonceIV is updated on the next iteration and XOR'd with the plaintext, the next 16 bytes differ, and so on.
I cannot understand why that happens. Does anyone know why the hex codes are different after the first 16-byte chunk?
Disclaimer: I'm no C expert.
Thanks!!
Fox.txt
The quick brown fox jumped over the lazy dog
Then run the following OpenSSL command to create foxy.exe
openssl enc -aes-256-ctr -in fox.txt -out foxy.exe -K 603deb1015ca71be2b73aef0857d77811f352c073b6108d72d9810a30914dff4 -iv f0f1f2f3f4f5f6f7f8f9fafbfcfdfeff -nosalt -nopad -p
Here's what foxy.exe contains:
5f b7 18 d1 28 62 7f 50 35 ba e9 67 a7 17 ab 22
3c 01 11 bd 39 14 74 76 31 57 a6 53 f9 00 09 b4
6f a9 49 bc 6d 00 77 24 2d ef b9 c4
Here's the code.
#include <Windows.h>
#include <stdlib.h> /* div, div_t */
#include <string.h> /* memmove */
// What is AES CTR
//
// AES - CTR (counter) mode is another popular symmetric encryption algorithm.
//
// It is advantageous because of a few features :
// 1. The data size does not have to be multiple of 16 bytes.
// 2. The encryption or decryption for all blocks of the data can happen in parallel, allowing faster implementation.
// 3. Encryption and decryption use identical implementation.
//
// Very important note : choice of initial counter is critical to the security of CTR mode.
// The requirement is that the same counter and AES key combination can never be used to encrypt more than one 16-byte block.
// Notes
// -----
// * CTR mode does not require padding to block boundaries.
//
// * The IV size of AES is 16 bytes.
//
// * CTR mode doesn't need separate encrypt and decrypt method. Encryption key can be set once.
//
// * AES is a block cipher : it takes as input a 16 byte plaintext block,
// a secret key (16, 24 or 32 bytes) and outputs another 16 byte ciphertext block.
//
// References
// ----------
// https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#Counter_.28CTR.29
// https://www.cryptopp.com/wiki/CTR_Mode#Counter_Increment
// https://modexp.wordpress.com/2016/03/10/windows-ctr-mode-with-crypto-api/
// https://msdn.microsoft.com/en-us/library/windows/desktop/jj650836(v=vs.85).aspx
// http://www.cryptogrium.com/aes-ctr.html
// http://www.bierkandt.org/encryption/symmetric_encryption.php
#define IV_SIZE 16
#define AES_BLOCK_SIZE 16
typedef struct _key_hdr_t {
    PUBLICKEYSTRUC hdr; // Indicates the type of BLOB and the algorithm that the key uses.
    DWORD len;          // The size, in bytes, of the key material.
    char key[32];       // The key material.
} key_hdr;
// NIST specifies two types of counters.
//
// First is a counter which is made up of a nonce and counter.
// The nonce is random, and the remaining bytes are counter bytes (which are incremented).
// For example, a 16 byte block cipher might use the high 8 bytes as a nonce, and the low 8 bytes as a counter.
//
// Second is a counter block, where all bytes are counter bytes and can be incremented as carries are generated.
// For example, in a 16 byte block cipher, all 16 bytes are counter bytes.
//
// This uses the second method, which means the entire byte block is treated as counter bytes.
void IncrementCounterByOne(char *inout)
{
    int i;
    for (i = 16 - 1; i >= 0; i--) {
        inout[i]++;
        if (inout[i]) {
            break;
        }
    }
}

void XOR(char *plaintext, char *ciphertext, int plaintext_len)
{
    int i;
    for (i = 0; i < plaintext_len; i++)
    {
        plaintext[i] ^= ciphertext[i];
    }
}

unsigned int GetAlgorithmIdentifier(unsigned int aeskeylenbits)
{
    switch (aeskeylenbits)
    {
    case 128:
        return CALG_AES_128;
    case 192:
        return CALG_AES_192;
    case 256:
        return CALG_AES_256;
    default:
        return 0;
    }
}

unsigned int GetKeyLengthBytes(unsigned int aeskeylenbits)
{
    return aeskeylenbits / 8;
}

void SetKeyData(key_hdr *key, unsigned int aeskeylenbits, char *pKey)
{
    key->hdr.bType = PLAINTEXTKEYBLOB;
    key->hdr.bVersion = CUR_BLOB_VERSION;
    key->hdr.reserved = 0;
    key->hdr.aiKeyAlg = GetAlgorithmIdentifier(aeskeylenbits);
    key->len = GetKeyLengthBytes(aeskeylenbits);
    memmove(key->key, pKey, key->len);
}

// point = pointer to the start of the plaintext, extent is the size (44 bytes)
void __stdcall AESCTR(char *point, int extent, char *pKey, char *pIV, unsigned int aeskeylenbits, char *bufOut)
{
    HCRYPTPROV hProv;
    HCRYPTKEY hSession;
    key_hdr key;
    DWORD IV_len;
    div_t aesblocks;
    char nonceIV[64];
    char tIV[64];
    char *bufIn;

    bufIn = point;
    memmove(nonceIV, pIV, IV_SIZE);
    SetKeyData(&key, aeskeylenbits, pKey);
    CryptAcquireContext(&hProv, NULL, NULL, PROV_RSA_AES, CRYPT_VERIFYCONTEXT | CRYPT_SILENT);
    CryptImportKey(hProv, (PBYTE)&key, sizeof(key), 0, CRYPT_NO_SALT, &hSession);
    aesblocks = div(extent, AES_BLOCK_SIZE);
    while (aesblocks.quot != 0)
    {
        IV_len = IV_SIZE;
        memmove(tIV, nonceIV, IV_SIZE);
        CryptEncrypt(hSession, 0, FALSE, 0, (BYTE *)tIV, &IV_len, sizeof(tIV));
        XOR(bufIn, tIV, AES_BLOCK_SIZE);
        IncrementCounterByOne(nonceIV);
        bufIn += AES_BLOCK_SIZE;
        aesblocks.quot--;
    }
    if (aesblocks.rem != 0)
    {
        memmove(tIV, nonceIV, IV_SIZE);
        CryptEncrypt(hSession, 0, TRUE, 0, (BYTE *)tIV, &IV_len, sizeof(tIV));
        XOR(bufIn, tIV, aesblocks.rem);
    }
    memmove(bufOut, point, extent);
    CryptDestroyKey(hSession);
    CryptReleaseContext(hProv, 0);
}
I was able to get this working by following the suggested pseudocode in the Remarks section of the Microsoft CryptEncrypt() documentation, https://msdn.microsoft.com/en-us/library/windows/desktop/aa379924(v=vs.85).aspx:
// Set the IV for the original key. Do not use the original key for
// encryption or decryption after doing this because the key's
// feedback register will get modified and you cannot change it.
CryptSetKeyParam(hOriginalKey, KP_IV, newIV)

while (block = NextBlock())
{
    // Create a duplicate of the original key. This causes the
    // original key's IV to be copied into the duplicate key's
    // feedback register.
    hDuplicateKey = CryptDuplicateKey(hOriginalKey)

    // Encrypt the block with the duplicate key.
    CryptEncrypt(hDuplicateKey, block)

    // Destroy the duplicate key. Its feedback register has been
    // modified by the CryptEncrypt function, so it cannot be used
    // again. It will be re-duplicated in the next iteration of the
    // loop.
    CryptDestroyKey(hDuplicateKey)
}
Here's the updated code with the two new lines added:
HCRYPTKEY hDuplicateKey;
boolean final;

while (aesblocks.quot != 0)
{
    CryptDuplicateKey(hOriginalKey, NULL, 0, &hDuplicateKey);
    IV_len = IV_SIZE;
    memmove(tIV, nonceIV, IV_len);
    final = (aesblocks.quot == 1 && aesblocks.rem == 0) ? TRUE : FALSE;
    CryptEncrypt(hDuplicateKey, 0, final, 0, (BYTE *)tIV, &IV_len, sizeof(tIV));
    XOR(bufIn, tIV, AES_BLOCK_SIZE);
    IncrementCounterByOne(nonceIV);
    bufIn += AES_BLOCK_SIZE;
    aesblocks.quot--;
    CryptDestroyKey(hDuplicateKey);
}
if (aesblocks.rem != 0)
{
    CryptDuplicateKey(hOriginalKey, NULL, 0, &hDuplicateKey);
    final = TRUE;
    memmove(tIV, nonceIV, IV_SIZE);
    CryptEncrypt(hDuplicateKey, 0, final, 0, (BYTE *)tIV, &IV_len, sizeof(tIV));
    XOR(bufIn, tIV, aesblocks.rem);
    CryptDestroyKey(hDuplicateKey);
}
I'm not familiar with the Microsoft APIs, but I believe that CryptEncrypt() uses CBC mode by default, so the output from the first block of encryption is automatically being fed into the input for the second block. You are building CTR mode yourself from scratch (which incidentally is generally not an advisable thing to do; you should use the capabilities of crypto libraries rather than "roll your own" crypto). To get the expected output you probably need to get CryptEncrypt to use AES in ECB mode, which I believe can be done by using CryptSetKeyParam (https://msdn.microsoft.com/en-us/library/aa380272.aspx) and setting KP_MODE to CRYPT_MODE_ECB.
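A minimal sketch of that suggestion, reusing the hSession handle from the question's code (and keeping Final = FALSE for every call so CryptEncrypt never appends padding to the keystream blocks):

/* Put the imported AES key into ECB mode so each CryptEncrypt() call
 * encrypts the counter block independently (the raw CTR keystream)
 * instead of chaining ciphertext blocks as CBC does by default. */
DWORD dwMode = CRYPT_MODE_ECB;
if (!CryptSetKeyParam(hSession, KP_MODE, (BYTE *)&dwMode, 0)) {
    /* handle GetLastError() */
}

With ECB there is also no feedback register to worry about, so the duplicate-key workaround above becomes unnecessary.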
Make sure your input file doesn't contain any extra characters, like a trailing newline. OpenSSL will include those extra characters when encrypting.
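For example, writing the test file with printf rather than echo avoids a trailing newline:
printf '%s' 'The quick brown fox jumped over the lazy dog' > fox.txt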

Creating a DER formatted ECDSA signature from raw r and s

I have a raw ECDSA signature: R and S values. I need a DER-encoded version of the signature. Is there a straightforward way to do this in OpenSSL using the C interface?
My current attempt is to use i2d_ECDSA_SIG(const ECDSA_SIG *sig, unsigned char **pp) to serialize a populated ECDSA_SIG*. The call returns non-zero, but the target buffer doesn't seem to be changed.
I'm initially filling my ECDSA_SIG with the r and s values. I don't see any errors. The man page says r and s should be allocated when I call ECDSA_SIG_new.
ECDSA_SIG* ec_sig = ECDSA_SIG_new();
if (NULL == BN_bin2bn(sig, 32, (ec_sig->r))) {
    dumpOpenSslErrors();
}
DBG("post r :%s\n", BN_bn2hex(ec_sig->r));
if (NULL == BN_bin2bn(sig + 32, 32, (ec_sig->s))) {
    dumpOpenSslErrors();
}
DBG("post s :%s\n", BN_bn2hex(ec_sig->s));
S and R are now set:
post r :397116930C282D1FCB71166A2D06728120CF2EE5CF6CCD4E2D822E8E0AE24A30
post s :9E997D4718A7603942834FBDD22A4B856FC4083704EDE62033CF1A77CB9822A9
Now to make the encoded signature:
int sig_size = i2d_ECDSA_SIG(ec_sig, NULL);
if (sig_size > 255) {
    DBG("signature is too large wants %d\n", sig_size);
}
DBG("post i2d:%s\n", BN_bn2hex(ec_sig->s));
s hasn't changed:
post i2d:9E997D4718A7603942834FBDD22A4B856FC4083704EDE62033CF1A77CB9822A9
At this point I have more than enough bytes ready and I set the target to all 6s so it's easy to see what changes.
unsigned char* sig_bytes = new unsigned char[256];
memset(sig_bytes, 6, 256);
sig_size = i2d_ECDSA_SIG(ec_sig, (&sig_bytes));
DBG("New size %d\n", sig_size);
DBG("post i2d:%s\n", BN_bn2hex(ec_sig->s));
hexDump("Sig ", (const byte*)sig_bytes, sig_size);
The new size is 71 and s is still the same:
post i2d:9E997D4718A7603942834FBDD22A4B856FC4083704EDE62033CF1A77CB9822A9
The hex dump is all 6s.
--Sig --
0x06: 0x06: 0x06: 0x06: 0x06: 0x06: 0x06: 0x06:
0x06: ...
The dump is still all 6s even though the call didn't return 0. What am I missing trying to DER encode this raw signature?
i2d_ECDSA_SIG modifies its second argument, increasing it by the size of the signature. From ecdsa.h:
/** DER encode content of ECDSA_SIG object (note: this function modifies *pp
* (*pp += length of the DER encoded signature)).
* \param sig pointer to the ECDSA_SIG object
* \param pp pointer to a unsigned char pointer for the output or NULL
* \return the length of the DER encoded ECDSA_SIG object or 0
*/
int i2d_ECDSA_SIG(const ECDSA_SIG *sig, unsigned char **pp);
So you need to keep track of the original value of sig_bytes when you call i2d_ECDSA_SIG:
int sig_size = i2d_ECDSA_SIG(ec_sig, NULL);
unsigned char *sig_bytes = malloc(sig_size);
unsigned char *p;
int new_sig_size;

memset(sig_bytes, 6, sig_size);
p = sig_bytes;
new_sig_size = i2d_ECDSA_SIG(ec_sig, &p);
// The value of p is now sig_bytes + new_sig_size, and the signature resides at sig_bytes
Output:
30 45 02 20 39 71 16 93 0C 28 2D 1F CB 71 16 6A
2D 06 72 81 20 CF 2E E5 CF 6C CD 4E 2D 82 2E 8E
0A E2 4A 30 02 21 00 9E 99 7D 47 18 A7 60 39 42
83 4F BD D2 2A 4B 85 6F C4 08 37 04 ED E6 20 33
CF 1A 77 CB 98 22 A9

Extracting data from struct sk_buff

I'm attempting to extract data from a struct sk_buff, but have not received the output I am expecting. The frame in question is 34 bytes: a 14-byte Ethernet header wrapped around an 8-byte (experimental protocol) header:
struct monitoring_hdr {
    u8  version;
    u8  type;
    u8  reserved;
    u8  haddr_len;
    u32 clock;
} __packed;
After this header, there are two variable-length hardware addresses (their lengths are dictated by the haddr_len field above). In the example here, they are both 6 bytes long.
The following code extracts the header (the struct) correctly, but not the two MAC addresses that follow.
Sender side:
...
skb = alloc_skb(mtu, GFP_ATOMIC);
if (unlikely(!skb))
return;
skb_reserve(skb, ll_hlen);
skb_reset_network_header(skb);
nwp = (struct monitoring_hdr *)skb_put(skb, hdr_len);
/* ... Set up fields in struct monitoring_hdr ... */
memcpy(skb_put(skb, dev->addr_len), src, dev->addr_len);
memcpy(skb_put(skb, dev->addr_len), dst, dev->addr_len);
...
Receiver side:
...
skb_reset_network_header(skb);
nwp = (struct monitoring_hdr *)skb_network_header(skb);
src = skb_pull(skb, nwp->haddr_len);
dst = skb_pull(skb, nwp->haddr_len);
...
Expected output:
I used tcpdump to capture the packet in question on the wire, and saw this (it was actually padded to 60 bytes by the sender's NIC, which I've omitted):
0000 | 00 90 f5 c6 44 5b 00 0e c6 89 04 2f c0 df 01 03
0010 | 00 06 d0 ba 8c 88 00 0e c6 89 04 2f 00 90 f5 c6
0020 | 44 5b
The first 14 bytes are the Ethernet header. The following 8 bytes (starting with 01 and ending with 88) should be the bytes put into the struct monitoring_hdr, which is extracted correctly. Then I am expecting the following MAC addresses to be found:
src = 00 0e c6 89 04 2f
dst = 00 90 f5 c6 44 5b
Actual output:
However, the data that I receive is shifted two bytes to the left:
src = 8c 88 00 0e c6 89
dst = 04 2f 00 90 f5 c6
Can anyone see a logical flaw in the above code? Or is there a better way to do this? I've also tried skb_pull in place of skb_network_header on the receiving side, but that resulted in a kernel panic.
Thanks in advance for any help.
SOLUTION:
src was not pointing at the first byte of the data in the sk_buff, as it should have been. I ended up using the following:
...
skb_reset_network_header(skb);
nwp = (struct monitoring_hdr *)skb_network_header(skb);
skb_pull(skb, offsetof(struct monitoring_hdr, haddrs_begin));
src = skb->data;
dst = skb_pull(skb, nwp->haddr_len);
...
Looking at the skbuff.h header, the functions you are using look like this:
static inline void skb_reset_network_header(struct sk_buff *skb)
{
    skb->network_header = skb->data - skb->head;
}

static inline unsigned char *skb_network_header(const struct sk_buff *skb)
{
    return skb->head + skb->network_header;
}

extern unsigned char *skb_pull(struct sk_buff *skb, unsigned int len);

static inline unsigned char *__skb_pull(struct sk_buff *skb, unsigned int len)
{
    skb->len -= len;
    BUG_ON(skb->len < skb->data_len);
    return skb->data += len;
}
So first, I would try printing out skb->data and skb->head to make sure they are referencing the parts of the packet you expect them to. Since you are using a custom protocol here, perhaps there is a bug in the header processing code which is causing skb->data to be set incorrectly.
Also, looking at the definitions of skb_network_header and skb_pull makes me think perhaps you are using them incorrectly. Shouldn't the first 6-byte addr be at the location pointed to by the return value of skb_network_header()? It looks like that function adds the length of the header block to the head of the buffer, which should result in a pointer to your first data value.
Similarly, it looks like skb_pull() adds the length of the field you pass in and returns the pointer to the next byte. So you probably want something more like this:
src = skb_network_header(skb);
dst = skb_pull(skb, nwp->haddr_len);
I hope that helps. I'm sorry that this is not an exact answer.
