Creating a DER-formatted ECDSA signature from raw r and s in C

I have a raw ECDSA signature: R and S values. I need a DER-encoded version of the signature. Is there a straightforward way to do this in OpenSSL using the C interface?
My current attempt is to use i2d_ECDSA_SIG(const ECDSA_SIG *sig, unsigned char **pp) to encode an ECDSA_SIG*. The call returns non-zero but the target buffer doesn't seem to be changed.
I'm initially filling my ECDSA_SIG with the r and s values. I don't see any errors. The man page says r and s are allocated when I call ECDSA_SIG_new.
ECDSA_SIG* ec_sig = ECDSA_SIG_new();

if (NULL == BN_bin2bn(sig, 32, (ec_sig->r))) {
    dumpOpenSslErrors();
}
DBG("post r :%s\n", BN_bn2hex(ec_sig->r));

if (NULL == BN_bin2bn(sig + 32, 32, (ec_sig->s))) {
    dumpOpenSslErrors();
}
DBG("post s :%s\n", BN_bn2hex(ec_sig->s));
S and R are now set:
post r :397116930C282D1FCB71166A2D06728120CF2EE5CF6CCD4E2D822E8E0AE24A30
post s :9E997D4718A7603942834FBDD22A4B856FC4083704EDE62033CF1A77CB9822A9
Now to make the encoded signature.
int sig_size = i2d_ECDSA_SIG(ec_sig, NULL);
if (sig_size > 255) {
    DBG("signature is too large wants %d\n", sig_size);
}
DBG("post i2d:%s\n", BN_bn2hex(ec_sig->s));
s hasn't changed:
post i2d:9E997D4718A7603942834FBDD22A4B856FC4083704EDE62033CF1A77CB9822A9
At this point I have more than enough bytes ready and I set the target to all 6s so it's easy to see what changes.
unsigned char* sig_bytes = new unsigned char[256];
memset(sig_bytes, 6, 256);
sig_size = i2d_ECDSA_SIG(ec_sig, (&sig_bytes));
DBG("New size %d\n", sig_size);
DBG("post i2d:%s\n", BN_bn2hex(ec_sig->s));
hexDump("Sig ", (const byte*)sig_bytes, sig_size);
The new size is 71, and s is still the same:
`post i2d:9E997D4718A7603942834FBDD22A4B856FC4083704EDE62033CF1A77CB9822A9`
The hex dump is all 6s.
--Sig --
0x06: 0x06: 0x06: 0x06: 0x06: 0x06: 0x06: 0x06:
0x06: ...
The dump is still all 6s even though the call didn't return 0. What am I missing trying to DER encode this raw signature?

i2d_ECDSA_SIG advances the pointer passed through its second argument by the length of the encoded signature. From ecdsa.h:
/** DER encode content of ECDSA_SIG object (note: this function modifies *pp
* (*pp += length of the DER encoded signature)).
* \param sig pointer to the ECDSA_SIG object
* \param pp pointer to a unsigned char pointer for the output or NULL
* \return the length of the DER encoded ECDSA_SIG object or 0
*/
int i2d_ECDSA_SIG(const ECDSA_SIG *sig, unsigned char **pp);
So you need to keep track of the original value of sig_bytes when you call i2d_ECDSA_SIG:
int sig_size = i2d_ECDSA_SIG(ec_sig, NULL);
unsigned char *sig_bytes = malloc(sig_size);
unsigned char *p;

memset(sig_bytes, 6, sig_size);
p = sig_bytes;
int new_sig_size = i2d_ECDSA_SIG(ec_sig, &p);
// The value of p is now sig_bytes + sig_size, and the signature resides at sig_bytes
Output:
30 45 02 20 39 71 16 93 0C 28 2D 1F CB 71 16 6A
2D 06 72 81 20 CF 2E E5 CF 6C CD 4E 2D 82 2E 8E
0A E2 4A 30 02 21 00 9E 99 7D 47 18 A7 60 39 42
83 4F BD D2 2A 4B 85 6F C4 08 37 04 ED E6 20 33
CF 1A 77 CB 98 22 A9
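For completeness, a minimal end-to-end sketch of the same conversion. Note that from OpenSSL 1.1.0 the ECDSA_SIG struct is opaque, so r and s have to be attached with ECDSA_SIG_set0 rather than written through ec_sig->r and ec_sig->s; the fixed 32-byte component size and the error handling below are illustrative assumptions, not part of the original question.

#include <openssl/ecdsa.h>
#include <openssl/bn.h>

/* Sketch: convert a 64-byte raw signature (r || s, 32 bytes each) to DER.
 * Returns the DER length, or -1 on error. Assumes OpenSSL >= 1.1.0. */
int raw_sig_to_der(const unsigned char raw[64], unsigned char *der, int der_cap)
{
    int len = -1;
    ECDSA_SIG *ec_sig = ECDSA_SIG_new();
    BIGNUM *r = BN_bin2bn(raw, 32, NULL);
    BIGNUM *s = BN_bin2bn(raw + 32, 32, NULL);

    if (ec_sig && r && s && ECDSA_SIG_set0(ec_sig, r, s)) {  /* ec_sig now owns r and s */
        int need = i2d_ECDSA_SIG(ec_sig, NULL);              /* first call: length only */
        if (need > 0 && need <= der_cap) {
            unsigned char *p = der;                          /* i2d advances p, so use a copy */
            len = i2d_ECDSA_SIG(ec_sig, &p);
        }
        r = s = NULL;                                        /* owned by ec_sig, don't free twice */
    }
    BN_free(r);
    BN_free(s);
    ECDSA_SIG_free(ec_sig);
    return len;
}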

Related

Decoding oneof Nanopb

I am currently working with oneof fields. I am able to encode them without issue; however, decoding seems to be a problem, and I fail to understand why it doesn't work.
My proto file looks like this:
syntax = "proto2";
message stringCallback{
required string name = 1;
required string surname = 2;
required int32 age = 3;
repeated sint32 values = 4;
repeated metrics metric_data = 5;
oneof payload {
int32 i_val = 6;
float f_val = 7;
string msg = 8;
}
}
message metrics{
required int32 id = 1;
required string type = 2;
required sint32 value = 3;
}
Whenever I send a message from a C# application with an integer in the payload, there is no issue decoding: I get the right int value and the right which_payload value as well.
I then tried to send a float. This resulted in a value which did not correspond to the value I sent from C# (which makes sense, because I use the "standard decoder", meaning it treats the field as an int instead of a fixed32). I don't know how to tell the decoder to decode this value with the fixed32 option.
And finally, sending a payload which contains a message resulted in an incorrect which_payload value (type none: 0) and no corresponding message, even though before calling the decode function I assigned a callback function to decode a string (which works perfectly).
Additional info:
This is the byte array which I send to the Nanopb code containing a string in the payload:
0A 04 4B 65 65 73 12 07 76 61 6E 20 44 61 6D 18 34 20 02 20 04 20 06 20 08 20 0A 20 0C 20 0E 20 10 20 12 20 14 20 16 20 18 2A 0C 08 01 12 06 53 65 6E 73 6F 72 18 04 2A 0A 08 02 12 04 44 61 74 61 18 0A 2A 0E 08 03 12 08 57 69 72 65 6C 65 73 73 18 0E 2A 0D 08 04 12 07 54 65 73 74 69 6E 67 18 04 2A 10 08 05 12 0A 46 72 6F 6E 74 20 64 6F 6F 72 18 0A 2A 1B 08 06 12 15 54 68 69 73 20 69 73 20 61 20 72 61 6E 64 6F 6D 20 6E 61 6D 65 18 0E 42 10 48 65 6C 6C 6F 20 66 72 6F 6D 20 6F 6E 65 6F 66
An online decoder gives me the correct decoded variables, but Nanopb does not. What am I supposed to do?
EDIT:
As per request, my decoder test function:
void test_decode(byte* payload, unsigned int length){
    IntArray I_array = {0, 0};
    MetricArray M_array = {0, 0};
    char name[MAX_STRING_LENGTH];
    char surname[MAX_STRING_LENGTH];
    char msg[MAX_STRING_LENGTH];

    stringCallback message = stringCallback_init_zero;
    message.name.funcs.decode = String_decode;
    message.name.arg = &name;
    message.surname.funcs.decode = String_decode;
    message.surname.arg = &surname;
    message.values.funcs.decode = IntArray_decode;
    message.values.arg = &I_array;
    message.metric_data.funcs.decode = MetricArray_decode;
    message.metric_data.arg = &M_array;
    message.payload.msg.arg = &msg;
    message.payload.msg.funcs.decode = String_decode;

    pb_istream_t istream = pb_istream_from_buffer(payload, length);
    if(!pb_decode(&istream, stringCallback_fields, &message)){
        Serial.println("Total decoding failed!");
        return;
    }

    Serial.println(name);
    Serial.println(surname);
    Serial.println(message.age);
    Serial.println(I_array.count);
    Serial.println(M_array.count);
    Serial.println();
    MetricArray_print(&M_array);
    Serial.println();
    Serial.println("Oneof: ");
    Serial.println(message.which_payload);
}
After some searching and testing I found a workaround!
I looked at these examples and figured out that it is possible to have callbacks in oneof fields. There is a Nanopb option that can be added to the proto file which tells the generator that the message should get a callback.
The option:
option (nanopb_msgopt).submsg_callback = true;
Adding this line to the proto file generates (to my understanding) a set-up callback. This callback gets called at the start of decoding the message, which gives us the opportunity to set up different callbacks for different fields in the oneof depending on the type we are decoding.
The problem before with oneof is that it was impossible to assign callback functions to the fields of the oneof type since they all share the same memory space. But by having this set-up callback we can actually have a look at what we are receiving and assign a callback accordingly.
However, there is a catch. I found out that this option will not work on anything other than messages in the oneof. This means no direct callback type can be assigned to a oneof field!
The workaround I used is to encapsulate the wanted callback in a message. This will trigger the set-up callback which then can be used to assign the correct callback function in the oneof.
EXAMPLE:
Let's take this proto file:
syntax = "proto2";
message callback{
required string name = 1;
required string surname = 2;
required int32 age = 3;
repeated sint32 values = 4;
repeated metrics metric_data = 5;
oneof payload {
int32 i_val = 6;
float f_val = 7;
string msg = 8;
payloadmsg p_msg = 9;
payloadmsg2 p_msg2 = 10;
}
}
message metrics{
required int32 id = 1;
required string type = 2;
required sint32 value = 3;
}
message payloadmsg{
required int32 id = 1;
required string type = 2;
required string msg = 3;
repeated sint32 values = 4;
}
message payloadmsg2{
required int32 id = 1;
required string type = 2;
required string msg = 3;
repeated sint32 values = 4;
}
I gave the strings in this proto file a maximum length, which removes their callback type and replaces it with a character array.
For testing purposes I didn't assign a size to the integer arrays in the payloadmsg messages.
The goal is to decode the arrays stored in the payloadmsg messages depending on the oneof type received (payloadmsg or payloadmsg2)
Then the set-up callback has to be created:
bool Oneof_decode(pb_istream_t *stream, const pb_field_t *field, void** arg){
    Callback* msg = (Callback*)field->message;

    switch(field->tag){
        case stringCallback_p_msg_tag: {
            Serial.println("Oneof type: p_msg detected!");
            payloadmsg* p_message = (payloadmsg*)field->pData;
            IntArray* array = (IntArray*)*arg;
            p_message->values.arg = array;
            p_message->values.funcs.decode = IntArray_decode;
            break;
        }
        case stringCallback_p_msg2_tag: {
            Serial.println("Oneof type: p_msg2 detected!");
            payloadmsg2* p_message2 = (payloadmsg2*)field->pData;
            IntArray* array = (IntArray*)*arg;
            p_message2->values.arg = array;
            p_message2->values.funcs.decode = IntArray_decode;
            break;
        }
    }
    return true;
}
We look at the tag received while decoding to see which type of callback has to be assigned. This is done by accessing the field. With a switch on the field tag we can decide which decode callback function to assign. We pass the arguments and decoder functions as normal, and after that everything is set up correctly.
Finally we assign this set-up callback to the compiler-generated field called cb_payload. This field is added to your message struct when you put the option in the proto file. This is how to assign the set-up callback:
message.cb_payload.arg = &I_array;
message.cb_payload.funcs.decode = Oneof_decode;

pb_istream_t istream = pb_istream_from_buffer(payload, length);
if(!pb_decode(&istream, stringCallback_fields, &message)){
    Serial.println("Total decoding failed!");
    return;
}
I pass my own IntArray struct to the cb_payload callback as an argument, which then passes it on to the correctly assigned decoder.
The result is that whenever I decode payloadmsg or payloadmsg2 the correct decoder gets assigned and the right values are being decoded into the structs.
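The IntArray type and IntArray_decode callback referenced above aren't shown; for reference, a minimal sketch of what such a repeated sint32 decode callback could look like in nanopb. The struct layout and its capacity of 32 are my own assumptions, not taken from the original post.

#include <pb_decode.h>

typedef struct {
    int32_t values[32];   /* hypothetical capacity */
    size_t  count;
} IntArray;

/* Called by nanopb once for each element of the repeated sint32 field. */
bool IntArray_decode(pb_istream_t *stream, const pb_field_t *field, void **arg)
{
    IntArray *array = (IntArray*)*arg;
    int64_t value;

    if (!pb_decode_svarint(stream, &value))
        return false;

    if (array->count < sizeof(array->values) / sizeof(array->values[0]))
        array->values[array->count++] = (int32_t)value;

    return true;
}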

OpenSSL & C - Hash Passwords w/ SHA256 or SHA512

I've tried my best reading over the docs, but they seem very sparse on information (maybe I'm looking in the wrong place?).
I'm trying to create a password hasher in C using the OpenSSL library, where the program can be called and passed arguments such as the final length of the hashed password, the salt length, and the HMAC digest used (SHA-256 or SHA-512). There just isn't a lot of info on how to use the API to do this.
The biggest problem I see is that there is a function called PKCS5_PBKDF2_HMAC_SHA1, but I can't find a similar one for SHA-256 or SHA-512. Is only SHA-1 available via the OpenSSL API?
Any guidance is much appreciated.
You can use PKCS5_PBKDF2_HMAC, which allows you to target a specific digest algorithm.
int PKCS5_PBKDF2_HMAC(const char *pass, int passlen,
                      const unsigned char *salt, int saltlen,
                      int iter, const EVP_MD *digest, // <<==== HERE
                      int keylen, unsigned char *out);
A simple example appears below, which generates a random salt, then derives a key from "password", the generated salt, and EVP_sha256():
#include <openssl/evp.h>
#include <openssl/rand.h>
#include <openssl/bio.h>

int main(int argc, char *argv[])
{
    int iter = 1007;

    unsigned char salt[32] = {0};
    RAND_bytes(salt, sizeof(salt));

    unsigned char key[32] = {0};
    PKCS5_PBKDF2_HMAC("password", 8,
                      salt, sizeof(salt),
                      iter, EVP_sha256(),
                      sizeof(key), key);

    BIO *bio = BIO_new_fp(stdout, BIO_NOCLOSE);
    BIO_dump(bio, (const char*)salt, sizeof(salt));
    BIO_dump(bio, (const char*)key, sizeof(key));
    BIO_free(bio);
}
Output (varies)
0000 - a7 ca ac f4 43 b0 2d 48-2b f6 d5 67 7e d2 5c b4 ....C.-H+..g~.\.
0010 - c5 82 1d 4d b1 00 cd 1e-85 91 77 4c 32 3e f3 c8 ...M......wL2>..
0000 - 48 8f be 5a e9 1c 9e 11-d8 95 cb ed 6d 6f 36 a2 H..Z........mo6.
0010 - 38 e6 db 95 e1 d7 a6 c0-8a 2f 3a f6 e1 74 e9 b9 8......../:..t..
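If SHA-512 is wanted instead, the same function works with EVP_sha512(); a minimal variation that could be dropped into the example above (the 64-byte output length is just an illustrative choice):

unsigned char key512[64] = {0};
PKCS5_PBKDF2_HMAC("password", 8,
                  salt, sizeof(salt),
                  iter, EVP_sha512(),
                  sizeof(key512), key512);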

CTR-AES256 Encrypt does not match OpenSSL -aes-256-ctr

My problem is that I cannot get the AES 256 CTR output from the C code below to match the output from the OpenSSL command below.
The C code produces this:
5f b7 18 d1 28 62 7f 50 35 ba e9 67 a7 17 ab 22
f9 e4 09 ce 23 26 7b 93 82 02 d3 87 eb 01 26 ac
96 2c 01 8c c8 af f3 de a4 18 7f 29 46 00 2e 00
The OpenSSL command line produces this:
5f b7 18 d1 28 62 7f 50 35 ba e9 67 a7 17 ab 22
3c 01 11 bd 39 14 74 76 31 57 a6 53 f9 00 09 b4
6f a9 49 bc 6d 00 77 24 2d ef b9 c4
Notice the first 16 bytes are the same because the nonceIV was the same; however, when the nonceIV is updated on the next iteration and then XOR'd with the plaintext, the next 16 bytes differ, and so on.
I cannot understand why that happens. Does anyone know why the hex codes are different after the first 16-byte chunk?
Disclaimer: I'm no C expert.
Thanks!!
Fox.txt
The quick brown fox jumped over the lazy dog
Then run the following OpenSSL command to create foxy.exe
openssl enc -aes-256-ctr -in fox.txt -out foxy.exe -K 603deb1015ca71be2b73aef0857d77811f352c073b6108d72d9810a30914dff4 -iv f0f1f2f3f4f5f6f7f8f9fafbfcfdfeff -nosalt -nopad -p
Here's what foxy.exe contains:
5f b7 18 d1 28 62 7f 50 35 ba e9 67 a7 17 ab 22
3c 01 11 bd 39 14 74 76 31 57 a6 53 f9 00 09 b4
6f a9 49 bc 6d 00 77 24 2d ef b9 c4
Here's the code.
#include <Windows.h>
// What is AES CTR
//
// AES - CTR (counter) mode is another popular symmetric encryption algorithm.
//
// It is advantageous because of a few features :
// 1. The data size does not have to be a multiple of 16 bytes.
// 2. The encryption or decryption for all blocks of the data can happen in parallel, allowing faster implementation.
// 3. Encryption and decryption use identical implementation.
//
// Very important note : choice of initial counter is critical to the security of CTR mode.
// The requirement is that the same counter and AES key combination can never be used to encrypt more than one 16-byte block.
// Notes
// -----
// * CTR mode does not require padding to block boundaries.
//
// * The IV size of AES is 16 bytes.
//
// * CTR mode doesn't need separate encrypt and decrypt method. Encryption key can be set once.
//
// * AES is a block cipher : it takes as input a 16 byte plaintext block,
// a secret key (16, 24 or 32 bytes) and outputs another 16 byte ciphertext block.
//
// References
// ----------
// https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#Counter_.28CTR.29
// https://www.cryptopp.com/wiki/CTR_Mode#Counter_Increment
// https://modexp.wordpress.com/2016/03/10/windows-ctr-mode-with-crypto-api/
// https://msdn.microsoft.com/en-us/library/windows/desktop/jj650836(v=vs.85).aspx
// http://www.cryptogrium.com/aes-ctr.html
// http://www.bierkandt.org/encryption/symmetric_encryption.php
#define IV_SIZE 16
#define AES_BLOCK_SIZE 16

typedef struct _key_hdr_t {
    PUBLICKEYSTRUC hdr; // Indicates the type of BLOB and the algorithm that the key uses.
    DWORD len;          // The size, in bytes, of the key material.
    char key[32];       // The key material.
} key_hdr;
// NIST specifies two types of counters.
//
// First is a counter which is made up of a nonce and counter.
// The nonce is random, and the remaining bytes are counter bytes (which are incremented).
// For example, a 16 byte block cipher might use the high 8 bytes as a nonce, and the low 8 bytes as a counter.
//
// Second is a counter block, where all bytes are counter bytes and can be incremented as carries are generated.
// For example, in a 16 byte block cipher, all 16 bytes are counter bytes.
//
// This uses the second method, which means the entire byte block is treated as counter bytes.
void IncrementCounterByOne(char *inout)
{
    int i;
    for (i = 16 - 1; i >= 0; i--) {
        inout[i]++;
        if (inout[i]) {
            break;
        }
    }
}

void XOR(char *plaintext, char *ciphertext, int plaintext_len)
{
    int i;
    for (i = 0; i < plaintext_len; i++)
    {
        plaintext[i] ^= ciphertext[i];
    }
}
unsigned int GetAlgorithmIdentifier(unsigned int aeskeylenbits)
{
    switch (aeskeylenbits)
    {
    case 128:
        return CALG_AES_128;
    case 192:
        return CALG_AES_192;
    case 256:
        return CALG_AES_256;
    default:
        return 0;
    }
}

unsigned int GetKeyLengthBytes(unsigned int aeskeylenbits)
{
    return aeskeylenbits / 8;
}

void SetKeyData(key_hdr *key, unsigned int aeskeylenbits, char *pKey)
{
    key->hdr.bType = PLAINTEXTKEYBLOB;
    key->hdr.bVersion = CUR_BLOB_VERSION;
    key->hdr.reserved = 0;
    key->hdr.aiKeyAlg = GetAlgorithmIdentifier(aeskeylenbits);
    key->len = GetKeyLengthBytes(aeskeylenbits);
    memmove(key->key, pKey, key->len);
}
// point = pointer to the start of the plaintext, extent is the size (44 bytes)
void __stdcall AESCTR(char *point, int extent, char *pKey, char *pIV, unsigned int aeskeylenbits, char *bufOut)
{
    HCRYPTPROV hProv;
    HCRYPTKEY hSession;
    key_hdr key;
    DWORD IV_len;
    div_t aesblocks;
    char nonceIV[64];
    char tIV[64];
    char *bufIn;

    bufIn = point;
    memmove(nonceIV, pIV, IV_SIZE);

    SetKeyData(&key, aeskeylenbits, pKey);
    CryptAcquireContext(&hProv, NULL, NULL, PROV_RSA_AES, CRYPT_VERIFYCONTEXT | CRYPT_SILENT);
    CryptImportKey(hProv, (PBYTE)&key, sizeof(key), 0, CRYPT_NO_SALT, &hSession);

    aesblocks = div(extent, AES_BLOCK_SIZE);
    while (aesblocks.quot != 0)
    {
        IV_len = IV_SIZE;
        memmove(tIV, nonceIV, IV_SIZE);
        CryptEncrypt(hSession, 0, FALSE, 0, (BYTE *)tIV, &IV_len, sizeof(tIV));
        XOR(bufIn, tIV, AES_BLOCK_SIZE);
        IncrementCounterByOne(nonceIV);
        bufIn += AES_BLOCK_SIZE;
        aesblocks.quot--;
    }

    if (aesblocks.rem != 0)
    {
        memmove(tIV, nonceIV, IV_SIZE);
        CryptEncrypt(hSession, 0, TRUE, 0, (BYTE *)tIV, &IV_len, sizeof(tIV));
        XOR(bufIn, tIV, aesblocks.rem);
    }

    memmove(bufOut, point, extent);

    CryptDestroyKey(hSession);
    CryptReleaseContext(hProv, 0);
}
I was able to get this working by following the pseudocode in the remarks section of the MSDN CryptEncrypt() documentation, https://msdn.microsoft.com/en-us/library/windows/desktop/aa379924(v=vs.85).aspx:
// Set the IV for the original key. Do not use the original key for
// encryption or decryption after doing this because the key's
// feedback register will get modified and you cannot change it.
CryptSetKeyParam(hOriginalKey, KP_IV, newIV)

while(block = NextBlock())
{
    // Create a duplicate of the original key. This causes the
    // original key's IV to be copied into the duplicate key's
    // feedback register.
    hDuplicateKey = CryptDuplicateKey(hOriginalKey)

    // Encrypt the block with the duplicate key.
    CryptEncrypt(hDuplicateKey, block)

    // Destroy the duplicate key. Its feedback register has been
    // modified by the CryptEncrypt function, so it cannot be used
    // again. It will be re-duplicated in the next iteration of the
    // loop.
    CryptDestroyKey(hDuplicateKey)
}
Here's the updated code with the two new lines added:
HCRYPTKEY hDuplicateKey;
boolean final;

while (aesblocks.quot != 0)
{
    CryptDuplicateKey(hOriginalKey, NULL, 0, &hDuplicateKey);
    IV_len = IV_SIZE;
    memmove(tIV, nonceIV, IV_len);
    final = (aesblocks.quot == 1 && aesblocks.rem == 0) ? TRUE : FALSE;
    CryptEncrypt(hDuplicateKey, 0, final, 0, (BYTE *)tIV, &IV_len, sizeof(tIV));
    XOR(bufIn, tIV, AES_BLOCK_SIZE);
    IncrementCounterByOne(nonceIV);
    bufIn += AES_BLOCK_SIZE;
    aesblocks.quot--;
    CryptDestroyKey(hDuplicateKey);
}

if (aesblocks.rem != 0)
{
    CryptDuplicateKey(hOriginalKey, NULL, 0, &hDuplicateKey);
    final = TRUE;
    memmove(tIV, nonceIV, IV_SIZE);
    CryptEncrypt(hDuplicateKey, 0, final, 0, (BYTE *)tIV, &IV_len, sizeof(tIV));
    XOR(bufIn, tIV, aesblocks.rem);
    CryptDestroyKey(hDuplicateKey);
}
I'm not familiar with the Microsoft APIs, but I believe CryptEncrypt() uses CBC mode by default, so the output from the first block of encryption is automatically being fed into the input of the second block. You are building CTR mode yourself from scratch (which, incidentally, is generally not advisable; you should use the capabilities of crypto libraries rather than "roll your own" crypto). To get the expected output you probably need to get CryptEncrypt to use AES in ECB mode, which I believe can be done by calling CryptSetKeyParam (https://msdn.microsoft.com/en-us/library/aa380272.aspx) and setting KP_MODE to CRYPT_MODE_ECB.
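For reference, switching the imported key to ECB mode would look roughly like this; a sketch only, where hSession is assumed to be the key handle from the question's CryptImportKey call:

DWORD dwMode = CRYPT_MODE_ECB;

/* Put the key into ECB mode so each CryptEncrypt call is a raw AES block
 * encryption of the counter, with no CBC chaining between blocks. */
if (!CryptSetKeyParam(hSession, KP_MODE, (BYTE *)&dwMode, 0))
{
    /* handle error, e.g. with GetLastError() */
}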
Make sure your input file doesn't contain any extra characters, such as a trailing newline; OpenSSL will include those extra characters when encrypting.

Breaking down raw byte string with particular structure gives wrong data

I am working with a ZigBee module based on a 32-bit ARM Cortex-M3. My question is not related to the ZigBee protocol itself, though. I have access to the source code of the application layer only, which should be enough for my purposes. The lower layer (APS) passes data to the application layer in an APSDE-DATA.indication primitive, via the following application function:
void zbpro_dataRcvdHandler(zbpro_dataInd_t *data)
{
    DEBUG_PRINT(DBG_APP,"\n[APSDE-DATA.indication]\r\n");

    /* Output of raw bytes string for further investigation.
     * Real length is unknown, 50 is approximation.
     */
    DEBUG_PRINT(DBG_APP,"Raw data: \n");
    DEBUG_PRINT(DBG_APP,"----------\n");
    for (int i = 0; i < 50; i++){
        DEBUG_PRINT(DBG_APP,"%02x ",*((uint8_t*)data+i));
    }
    DEBUG_PRINT(DBG_APP,"\n");

    /* Output of APSDE-DATA.indication primitive field by field */
    DEBUG_PRINT(DBG_APP,"Field by field: \n");
    DEBUG_PRINT(DBG_APP,"----------------\n");
    DEBUG_PRINT(DBG_APP,"Destination address: ");
    for (int i = 0; i < 8; i++)
        DEBUG_PRINT(DBG_APP,"%02x ",*((uint8_t*)data->dstAddress.ieeeAddr[i]));
    DEBUG_PRINT(DBG_APP,"\n");
    DEBUG_PRINT(DBG_APP,"Destination address mode: 0x%02x\r\n",*((uint8_t*)data->dstAddrMode));
    DEBUG_PRINT(DBG_APP,"Destination endpoint: 0x%02x\r\n",*((uint8_t*)data->dstEndPoint));
    DEBUG_PRINT(DBG_APP,"Source address mode: 0x%02x\r\n",*((uint8_t*)data->dstAddrMode));
    DEBUG_PRINT(DBG_APP,"Source address: ");
    for (int i = 0; i < 8; i++)
        DEBUG_PRINT(DBG_APP,"%02x ",*((uint8_t*)data->srcAddress.ieeeAddr[i]));
    DEBUG_PRINT(DBG_APP,"\n");
    DEBUG_PRINT(DBG_APP,"Source endpoint: 0x%02x\r\n",*((uint8_t*)data->srcEndPoint));
    DEBUG_PRINT(DBG_APP,"Profile Id: 0x%04x\r\n",*((uint16_t*)data->profileId));
    DEBUG_PRINT(DBG_APP,"Cluster Id: 0x%04x\r\n",*((uint16_t*)data->clusterId));
    DEBUG_PRINT(DBG_APP,"Message length: 0x%02x\r\n",*((uint8_t*)data->messageLength));
    DEBUG_PRINT(DBG_APP,"Flags: 0x%02x\r\n",*((uint8_t*)data->flags));
    DEBUG_PRINT(DBG_APP,"Security status: 0x%02x\r\n",*((uint8_t*)data->securityStatus));
    DEBUG_PRINT(DBG_APP,"Link quality: 0x%02x\r\n",*((uint8_t*)data->linkQuality));
    DEBUG_PRINT(DBG_APP,"Source MAC Address: 0x%04x\r\n",*((uint16_t*)data->messageLength));
    DEBUG_PRINT(DBG_APP,"Message: ");
    for (int i = 0; i < 13; i++){
        DEBUG_PRINT(DBG_APP,"%02x ",*((uint8_t*)data->messageContents+i));
    }
    DEBUG_PRINT(DBG_APP,"\n");

    bufm_deallocateBuffer((uint8_t *)data, CORE_MEM);
}
The APSDE-DATA.indication primitive is implemented by the following structures:
/**
 * @brief type definition for address (union of short address and extended address)
 */
typedef union zbpro_address_tag {
    uint16_t shortAddr;
    uint8_t  ieeeAddr[8];
} zbpro_address_t;

/**
 * @brief apsde data indication structure
 */
PACKED struct zbpro_dataInd_tag {
    zbpro_address_t dstAddress;
    uint8_t  dstAddrMode;
    uint8_t  dstEndPoint;
    uint8_t  srcAddrMode;
    zbpro_address_t srcAddress;
    uint8_t  srcEndPoint;
    uint16_t profileId;
    uint16_t clusterId;
    uint8_t  messageLength;
    uint8_t  flags;          /* bit0: broadcast or not; bit1: need aps ack or not; bit2: nwk key used; bit3: aps link key used */
    uint8_t  securityStatus; /* not-used, reserved for future */
    uint8_t  linkQuality;
    uint16_t src_mac_addr;
    uint8_t  messageContents[1];
};

typedef PACKED struct zbpro_dataInd_tag zbpro_dataInd_t;
As a result I receive the following:
[APSDE-DATA.indication]
Raw data:
---------
00 00 00 72 4c 19 40 00 02 e8 03 c2 30 02 fe ff 83 0a 00 e8 05 c1 11 00 11 08 58 40 72 4c ae 53 4d 3f 63 9f d8 51 da ca 87 a9 0b b3 7b 04 68 ca 87 a9
Field by field:
---------------
Destination address: 00 00 00 28 fa 44 34 00
Destination address mode: 0x12
Destination endpoint: 0xc2
Source address mode: 0x12
Source address: 13 01 12 07 02 bd 02 00
Source endpoint: 0xc2
Profile Id: 0xc940
Cluster Id: 0x90a0
Message length: 0x00
Flags: 0x00
Security status: 0x04
Link quality: 0x34
Source MAC Address: 0x90a0
Message: ae 53 4d 3f 63 9f d8 51 da ca 87 a9 0b
From this output I can see that while the raw string has some expected values, the dispatched fields are totally different. What is the reason for this behavior and how do I fix it? Is it somehow related to the ARM architecture or to wrong type casting?
I don't have access to the implementation of DEBUG_PRINT, but we can assume that it works properly.
There's no need to dereference in your DEBUG_PRINT statements. For example,
DEBUG_PRINT(DBG_APP,"%02x ",*((uint8_t*)data->dstAddress.ieeeAddr[i]));
should be simply
DEBUG_PRINT(DBG_APP,"%02x ", data->dstAddress.ieeeAddr[i]);
so on and so forth...
Consider this code:
DEBUG_PRINT(DBG_APP,"%02x ",*((uint8_t*)data->dstAddress.ieeeAddr[i]));
Array subscripting and direct and indirect member access have higher precedence than does casting, so the third argument is equivalent to
*( (uint8_t*) (data->dstAddress.ieeeAddr[i]) )
But data->dstAddress.ieeeAddr[i] is not a pointer, it is an uint8_t. C permits you to convert it to a pointer by casting, but the result is not a pointer to the value, but rather a pointer interpretation of the value. Dereferencing it produces undefined behavior.
Similar applies to your other DEBUG_PRINT() calls.
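For the scalar and 16-bit fields the same fix applies: drop the cast-and-dereference and pass the member directly. A short sketch of a few of the corrected calls, assuming DEBUG_PRINT forwards to a printf-style formatter:

DEBUG_PRINT(DBG_APP, "Destination address mode: 0x%02x\r\n", data->dstAddrMode);
DEBUG_PRINT(DBG_APP, "Profile Id: 0x%04x\r\n", data->profileId);
DEBUG_PRINT(DBG_APP, "Cluster Id: 0x%04x\r\n", data->clusterId);
DEBUG_PRINT(DBG_APP, "Message length: 0x%02x\r\n", data->messageLength);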

Accessing individual bytes in an array

I'm trying to access individual bytes of a wide-char array so that I can send it via winsock, and this is what I've got so far:
WCHAR* buffer_in_bytes = (WCHAR*)msc->wcArray;
unsigned char l;

for (unsigned int i = 0; i <= (msc->bSize*2); i++ )
{
    l = (unsigned char)(*(buffer_in_bytes + i));
    char s[256];
    _itoa(l, s, 16);
    OutputDebugString(s);
}
The array contains a series of 'a's (aaaaaaaaaaaaaaaaaaaa...), and I would expect to see 00 61 00 61 00 61; instead, as a result I get 61 61 61 61 61 61.
Any ideas why?
Each element contains an 'a', i.e. 0x61, which is what you see printed. I don't know why you would expect to see these interspersed with 0's.
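If the goal really is to see the individual bytes of the wide-char data, the buffer can be cast to unsigned char* before indexing; a minimal sketch, assuming msc->bSize is the number of WCHARs in wcArray. Note that on a little-endian machine L'a' is stored as 61 00, not 00 61:

const unsigned char* bytes = (const unsigned char*)msc->wcArray;

for (unsigned int i = 0; i < msc->bSize * sizeof(WCHAR); i++)
{
    char s[8];
    _itoa(bytes[i], s, 16);   /* prints 61 0 61 0 ... for a run of L'a' */
    OutputDebugStringA(s);
    OutputDebugStringA(" ");
}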
