SHA256 digest differs between array initializer and string - c

I use the sha256 function of the Microchip ATECC508A security chip. My code looks like this:
#include <stdint.h>

void foobar(uint8_t *message, int length);

int main(void) {
    uint8_t message[32] = {0}; // Method 1
    //uint8_t message[32] = "00000000000000000000000000000000"; // Method 2
    foobar(message, sizeof(message));
}

void foobar(uint8_t *message, int length) {
    uint8_t digest[32];
    sha256(message, length, digest);
    // printf statements for calculated hashes ...
}
Method 1: 66687AADF862BD776C8FC18B8E9F8E20089714856EE233B3902A591D0D5F2925
Method 2: 84E0C0EAFAA95A34C293F278AC52E45CE537BAB5E752A00E6959A13AE103B65A
Method 2 delivers the hash I expect for 32 ASCII zeros, but when I use the array initializer in Method 1, the hash is different and I don't know why. I've checked the resulting sha256 hashes here.
I would appreciate any help, thank you very much.
EDIT:
I was able to initialize the whole array with '0' characters with:
uint8_t message[32] = { [0 ... 31] = '0' };
This range-designator syntax only works with GCC (it is a GNU extension).

In the second case, the array is not filled with the number 0 but with the character '0'. In ASCII, the encoding of '0' is 48 (0x30), so assuming your system uses ASCII, every element of your array has the value 48. Both digests are therefore correct for their respective inputs: Method 1 hashes 32 bytes of 0x00, while Method 2 hashes 32 bytes of 0x30.
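A minimal sketch in standard C (independent of the ATECC508A) that makes the difference visible by dumping the first byte of each buffer:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t zero_bytes[32] = {0};                                // 32 bytes of 0x00
    uint8_t zero_chars[32] = "00000000000000000000000000000000"; // 32 bytes of 0x30 ('0')

    printf("zero_bytes[0] = 0x%02X\n", zero_bytes[0]); // prints 0x00
    printf("zero_chars[0] = 0x%02X\n", zero_chars[0]); // prints 0x30
    return 0;
}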

Related

Conversion of decimal to string

I'm trying to convert a decimal value into a string. I use STM32CubeIDE, but I am getting an error like 'Invalid binary operator'. I'm able to get the decimal value 3695 into a variable; I need to convert that into a string. How would I do that?
void main()
{
    uint8_t TxArr;
    uint16_t Data;
    int a[10];
    int i;

    while (1)
    {
        HAL_I2C_Master_Transmit(&hi2c1, 0x16, &TxArr, 1, 1000);
        HAL_I2C_Master_Receive(&hi2c1, 0x17, &Data, 2, 1000);
        for (i = 0; i < 4; i++)
        {
            a[i] = Data % 10 + 0x30; // value in Data is 3695
            Data = Data / 10;
            HAL_UART_Transmit(&huart3, a[i], 11, 100);
            HAL_Delay(300);
        }
    }
}
The error is produced by:
n = n / 10;
Division is simply not defined for pointers in C: if n is an array (or pointer), the compiler rejects n / 10, which is exactly the 'invalid binary operator' diagnostic you are seeing. You will need to use normal array indexing (n[i]) if you want your code to work properly.
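For reference, a two-line reproduction (the exact diagnostic text varies by compiler; GCC reports something like "invalid operands to binary /"):

int n[10];
int *p = n;
p = p / 10; // compile error: '/' has no meaning for pointer operands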
The second problem you have is the following: you declare n as an array of 10 ints:
int n[10];
Then in your for loop you try to do this:
n > 0;
This does not do what you intend: n decays into a pointer, so the address of your array is compared to 0, and that comparison always evaluates as TRUE.
A good way to convert an integer to a string (char array) is this answer. In your situation that would be:
int yourToBeConvertedNumber;
char str[12]; // large enough for any 32-bit int, sign and terminator included; adjust to your input data
snprintf(str, sizeof(str), "%d", yourToBeConvertedNumber);
Generally speaking, to convert an integer to a string, you can use the sprintf function. It should be available in newlib and even in the newlib-nano standard C libraries.
However, my guess here is that you have an array of integers where each element is a number between 0 and 9? If this is true, you have several issues: you seem to handle the variable n like an integer and not an address, and your string should be one element longer and composed of chars.
You may do something like this:
char a[11];
for (i = 0; i < 10; i++)
{
    a[i] = n[i] % 10 + '0';
}
a[10] = '\0'; // index 10 is the last valid element; a[11] would write out of bounds
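Tying the two answers back to the question's code: a hedged sketch (assuming the usual STM32 HAL headers are included and the huart3 handle is defined elsewhere by CubeMX) that converts Data with snprintf and sends the whole string over UART in one call:

#include <stdio.h>
#include <string.h>
#include <stdint.h>

void send_value(uint16_t Data)
{
    char str[12]; // plenty for a 16-bit value plus terminator

    // Convert the number (e.g. 3695) to its decimal text form "3695".
    snprintf(str, sizeof(str), "%u", (unsigned)Data);

    // HAL_UART_Transmit expects a pointer to the data and a byte count;
    // passing a[i] (a plain int) as in the question is what breaks.
    HAL_UART_Transmit(&huart3, (uint8_t *)str, (uint16_t)strlen(str), 100);
}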

Compute the crc16 of a bytearray / userdata in lua

I am writing a Wireshark protocol dissector in lua. The protocol it parses contains a crc16 checksum. The dissector should check whether the crc is correct.
I have found a crc16 implementation written in C already with the lua wrapper code here. I have successfully compiled it and run it (e.g. crc16.compute("test")). The problem is it expects a string as input. From wireshark, I get a buffer that seems to be of lua type userdata. So when I do
crc16.compute(buffer(5, 19))
Lua complains: bad argument #1 to 'compute' (string expected, got userdata).
compute() in the crc16 implementation looks like this:
static int compute(lua_State *L)
{
    const char *data;
    size_t len = 0;
    unsigned short r, crc = 0;

    data = luaL_checklstring(L, 1, &len);
    for ( ; len > 0; len--)
    {
        r = (unsigned short)(crc >> 8);
        crc <<= 8;
        crc ^= crc_table[r ^ *data];
        data++;
    }
    lua_pushinteger(L, crc);
    return 1;
}
It seems luaL_checklstring fails. So I guess I would either need to convert the input into a Lua string, which I am not sure works, as not all bytes of my input are necessarily printable characters. Or I would need to adjust the above code so it accepts input of type userdata. I found lua_touserdata(), but this seems to return something like a pointer, so I would need a second argument for the length, right?
I don't necessarily need to use this implementation. Any crc16 implementation for lua that accepts userdata would perfectly solve the problem.
The buffer that you get from wireshark can be used as a ByteArray like this:
byte_array = Buffer(5,19):bytes();
ByteArray has a __tostring metamethod that converts the bytes into a string representation of the bytes in hex. So you can call the crc function like this:
crc16.compute(tostring(byte_array))
'Representation of the bytes represented as hex' means an input byte with the bits 11111111 will turn into the ASCII string "FF". The ASCII string "FF" is 01000110 01000110 in bits, or 46 46 in hex. This means what you get in C is not the original byte array. You need to decode the ASCII representation back into the original bytes before computing the crc, otherwise you will obviously get a different crc.
First, this function converts a single character c containing one ASCII hex character back into the value it represents (tolower requires <ctype.h>):
static char ascii2char(char c) {
    c = tolower(c);
    if (c >= '0' && c <= '9')
        return c - '0';
    else if (c >= 'a' && c <= 'f')
        return c - 'a' + 10;
    return 0; // not a hex digit; falling off the end without a return would be undefined behavior
}
Now in the compute function we loop through the string representation, always combining two characters into one byte.
int compute(lua_State *L) {
    size_t len;
    const char *str = lua_tolstring(L, 1, &len);
    uint8_t *data = (uint8_t *) malloc(len / 2); // malloc requires <stdlib.h>

    for (size_t n = 0; n < len / 2; n++) {
        data[n] = ascii2char(str[2 * n]) << 4;   // high nibble
        data[n] |= ascii2char(str[2 * n + 1]);   // low nibble
    }
    crc16_t crc = crc16_init();
    crc = crc16_update(crc, data, len / 2);
    crc = crc16_finalize(crc);
    lua_pushinteger(L, crc);
    free(data);
    return 1;
}
In this example, I used the crc functions crc16_init, crc16_update and crc16_finalize generated with pycrc, not the crc implementation linked in the question. The catch is that you need to use the same polynomial etc. as was used when generating the crc; pycrc lets you generate crc functions as needed.
My packets also contain a crc32. pycrc can generate code for crc32 as well, so it all works the same way.
Christopher K outlines what is mostly the correct answer, but converting the hex values back into bytes seemed like hard work, and it got me looking further while I was searching for something like this.
The trick missed is that as well as calling the function with buffer:bytes(), you can also call
buffer:raw()
This provides exactly what is needed: a plain TSTRING that can be parsed directly, e.g. crc16.compute(buffer:raw(5, 19)), without the ASCII conversions that would, I imagine, add significantly to the load in the C code.

unsigned char table length

I declared a table of unsigned char as follows:
unsigned char buf[10] = {'1','5',0x00,'8'};
In order to know the number of elements of this table, I implemented this function:
int tablength(unsigned char *buf)
{
    int i = 0;
    for (i = 0; buf[i]; i++)
        ;
    return i;
}
However, this function doesn't give me the right result when the buffer contains 0 in the middle. sizeof doesn't give me the right result either, since it returns 10 in this case, and I can't use strlen since this is a table of unsigned char.
Do you have any idea how to improve this function, or of any predefined function that would solve my problem? (The result I am waiting for is 4.)
Technically, since you declared a statically allocated array of 10 elements, the size of the array is 10. Even though you may not have initialized every element, there is something filling that space. C++ cannot determine whether the value in the array means anything or not.
After
unsigned char x[10] = {1, 2, 3};
the variable x (an array) has 10 elements, the first three initialized to 1, 2 and 3 and all the others initialized to 0. In other words that definition is absolutely identical to
unsigned char x[10] = {1, 2, 3, 0, 0, 0, 0, 0, 0, 0};
An array in C and C++ is just a fixed area of memory and doesn't include a counter of how many "interesting" elements are there.
If you are looking for a container with a variable number of elements consider instead std::vector (C++ only). With that std::vector::size() returns the current number of elements in the container.
If you need the array to contain exactly the number of elements you've specified, just declare it without a specific size:
unsigned char buf[]={'1','5',0x00,'8'};
cout << sizeof(buf); // should be 4
If you want to store a variable amount of data (in C++) use std::vector instead of an array.
Otherwise you'll need to keep track of the number of valid elements yourself. There's nothing in the language that will do it for you.
Compilers cannot know how you would like to use an array instance, so you must follow the language's semantics. By declaring your array globally, or locally with the storage-class specifier static, every element is zero-initialized by default and your function will work.
0x00 (which is the same as 0x0) is a hex literal representing 0, which is false, and that is where your counting loop stops: at the third element.
Another thing you can do is declare your array with a non-fixed size:
unsigned char buf[] = {'1','5',0x00,'8'};
In that case the sizeof operator works as expected, because that way you have an array of exactly 4 elements.
strlen() obviously won't work as it is designed to work with strings, not a buffer.
As for a function that counts in a smarter way, trimming trailing zero bytes instead of stopping at the first one:
size_t arrcnt(unsigned char source[], size_t size)
{
    size_t i = size;
    while (i > 0 && source[i - 1] == 0) // walk back over trailing zeros
        i--;
    return i;
}
Usage:
printf("size of buf: %zu", arrcnt(buf, sizeof(buf)));
buf[i] evaluates to false when buf[i] contains 0.
You cannot do what you want unless you know one value between 0 and UCHAR_MAX (255) which can never occur in your array. Say that value is 255; then you first pre-initialize the full array to 255 before you start filling it up:
memset(buf, 255, sizeof(buf)); // sizeof(char) is 1 by definition, so no extra factor is needed
Then you fill in the elements you want, and afterwards you can count with the following:
for (i = 0; buf[i] != 255; ++i)
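A self-contained sketch of this sentinel approach, using 255 as the never-occurring value as above (bounds-checked so a completely full array also terminates):

#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned char buf[10];
    size_t i;

    memset(buf, 255, sizeof(buf)); // 255 marks the unused slots
    buf[0] = '1';                  // fill in the real data,
    buf[1] = '5';
    buf[2] = 0x00;                 // including an interior zero byte
    buf[3] = '8';

    for (i = 0; i < sizeof(buf) && buf[i] != 255; ++i)
        ;                          // stop at the first sentinel
    printf("%zu\n", i);            // prints 4
    return 0;
}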
Try putting:
int count = 0;
for (int i = 0; buf[i] != 0; i++)
    count++;
return count;
(Note that this stops at the first 0 just like your original function, so it reports 2 here, not the 4 you expect.)

converting data types in c

Let me start by saying that I openly admit this is for a homework assignment, but what I am asking is not related to the purpose of the assignment, just something I don't understand in C. This is just a very small part of a large program.
So my issue is, I have a set of data that consists various data types as follows:
[16-bit number][16-bit number][16-bit number][char[234]][128-bit number]
where each block represents a variable from elsewhere in the program.
I need to send that data 8 bytes at a time into a function that accepts uint32_t[2] as an input. How do I convert my 234-byte char array into uint32_t without losing the char values?
In other words, I need to be able to convert back from the uint32_t version to the original char array later on. I know a char is 1 byte and that its value can also be represented as a number via its ASCII code, but I'm not sure how to convert between the two, since some letters have a 3-digit ASCII value and others have 2.
I tried to use sprintf to grab 8-byte blocks from the data set and store each value in a uint32_t[2] variable. It works, but then I lose the original char array because I can't figure out a way to go back/undo it.
I know there has to be a relatively simple way to do this, I'm just lacking enough skill in C to make it happen.
Your question is very confusing, but I am guessing you are preparing some data structure for encryption by a function that requires 8 bytes or 2 uint32_t's.
You can convert a char array to uint32_t-sized chunks as follows:
#include <stdint.h>
#include <string.h>

#define NELEM 234
char a[NELEM];
uint64_t b[(NELEM + sizeof(uint64_t) - 1) / sizeof(uint64_t)]; // rounds up to the nearest multiple of 8 bytes
memcpy(b, a, NELEM); // the last few bytes of b stay uninitialized; zero them if that matters
for (size_t i = 0; i < sizeof(b) / sizeof(b[0]); i++) {
    encryption_thing(b[i]);
}
If you need to change endianness or something, that is more complicated. Alternatively, you can cast in place:
#include <stdint.h>

void f(uint32_t a[2]) {}

int main() {
    char data[234]; /* GCC can explicitly align with: __attribute__ ((aligned (8))) */
    int i = 0;
    int stride = 8;
    for (; i < 234 - stride; i += stride) {
        f((uint32_t *)&data[i]);
    }
    return 0;
}
I need to send that data 8 bytes at a time into a function that accepts uint32_t[2] as an input. How do I convert my 234-byte char array into uint32_t without losing the char values?
You could use a union for this:
typedef union
{
    unsigned char arr[128]; // use unsigned char
    uint32_t uints[32];     // 128 / sizeof(uint32_t) = 32, so both members cover the same bytes
} myvaluetype;

myvaluetype value;
memcpy(value.arr, your_array, sizeof(value.arr));
Say the prototype that you want to feed 2 uint32_t at a time is something like
void foo(uint32_t *p);
You can now send the data 8 bytes at a time with
for (int i = 0; i < 32; i += 2)
{
    foo(value.uints + i); // two uint32_t, i.e. 8 bytes, per call
}
Then use the same union to convert back.
Of course, some care must be taken about padding/alignment, and you also don't mention whether the data is sent over a network etc., so there are other factors to consider.
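A hedged sketch of the full round trip (the 128-byte size and the names your_array/roundtrip are carried over from the answer above, not from the question's exact 234-byte layout):

#include <stdint.h>
#include <string.h>

typedef union {
    unsigned char arr[128];
    uint32_t uints[32];
} myvaluetype;

void roundtrip(const unsigned char *your_array, unsigned char *out)
{
    myvaluetype value;
    memcpy(value.arr, your_array, sizeof(value.arr)); // bytes -> uint32_t view

    // ... feed value.uints to the 8-bytes-at-a-time function here ...

    memcpy(out, value.arr, sizeof(value.arr));        // uint32_t view -> original bytes
}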

invalid conversion from 'char' to 'int*' in C

I have the following arrays:
int A[] = {0,1,1,1,1, 1,0,1,0,0, 0,1,1,1,1};
int B[] = {1,1,1,1,1, 1,0,1,0,1, 0,1,0,1,0};
int C[] = {0,1,1,1,0, 1,0,0,0,1, 1,0,0,0,1};
//etc... for all letters of the alphabet
And a function that prints the letters on a 5x3 LED matrix:
void printLetter(int letter[])
I have a string of letters:
char word[] = "STACKOVERFLOW";
and I want to pass each character of the string to the printLetter function.
I tried:
int n = sizeof(word);
for (int i = 0; i < n - 1; i++) {
    printLetter(word[i]);
}
But I get the following error: invalid conversion from 'char' to 'int*'.
What should I be doing?
Thanks!!
Behind the parameter type error there is a deeper issue: you lack the mapping between a char and the corresponding int[].
Redefining printLetter as
void printLetter(char letter)
satisfies the compiler, but doesn't solve your problem per se. Whether inside or outside printLetter, you need to get the corresponding int[] for a given char.
A simple brute-force way to achieve this would be to use a switch, but a better way is to use a second array, i.e. something like this:
void printLetter(char letter) {
    static int* charToMatrix[] = { A, B, C, ... };
    int* matrixToPrint = charToMatrix[letter - 'A'];
    // print the matrix
}
Note that this is an example - I don't have access to a C compiler right now, so I can't guarantee it works straight away, but hopefully it illustrates the point well enough. It also lacks bounds checking, so it accesses memory in strange random places, possibly resulting in a crash, if you attempt to print an "unknown" character.
This solution is supposed to work for the uppercase letters; if you need to print lowercase letters or other characters as well, you might prefer going with an array of 256 elements, where only the elements at indexes corresponding to the "known" matrices are filled, and the rest is set to NULL.
You cannot directly transform the 'S' from "STACKOVERFLOW" into A, the array of ints. You can, however, collect all the arrays that represent a letter into one single table and fetch them by converting your letter to an int.
What you need to do is convert from a character to one of your arrays. So when you have the letter 'A', you'll want to use array A. The easiest way to do this is via lookup table.
int *lookup[256]; // assuming ASCII
memset(lookup, 0, sizeof(lookup));
lookup['A'] = A;
lookup['B'] = B;
...
Then when you have a character, you can get the proper array:
void printletter(char c)
{
    int *data = lookup[(unsigned char)c];
    // In case you get a letter that you don't know how to display
    if (data != NULL)
    {
        // display with data
    }
}
Instead of building up your array at runtime, you can also build it up at compile time, although that is a bit more tedious as you will need to manually put in the NULL pointers:
int *lookup[256] = {
    NULL, // you need a total of 65 NULLs, since 'A' is 65 in ASCII
    NULL,
    ...
    A, // so this is at the correct position
    B,
    C,
    ...
};
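As a side note, a C99 designated-initializer version avoids counting the leading NULLs by hand (a sketch assuming the A, B, C matrices from the question):

int *lookup[256] = {
    ['A'] = A, // elements not mentioned are implicitly NULL
    ['B'] = B,
    ['C'] = C,
};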
The function is declared as void printLetter(int letter[]), which means it takes a pointer to an array of ints. On the other hand, word is an array of chars, and word[i] is a char, which is not at all the right type. If printLetter() is really just supposed to print a single character, you should change its argument to be a char.
void printLetter(int letter[])
should be
void printLetter(char letter)
Because word is a char[], word[i] is a char.
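With that change, the question's loop works as intended (a sketch assuming printLetter looks up the LED matrix internally, as in the earlier answers):

char word[] = "STACKOVERFLOW";
int n = sizeof(word);           // includes the terminating '\0'
for (int i = 0; i < n - 1; i++) // n - 1 skips the terminator
    printLetter(word[i]);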
