Converting from hexadecimal string to byte array in C

I am sending byte arrays between a TCP socket server & client in C. The information that I am sending is a series of integers.
I have it working, but because I am not too conversant with C, I was wondering if anyone could suggest a better solution, or at least look it over and tell me whether I'm doing anything crazy or outdated.
First, I generate a random decimal value, let's say "350". I need to transmit this over the socket connection as a hex byte array. It is decoded back to its decimal value at the other end.
So far, I convert it to hex this way:
char hexstr[4]; /* room for up to 3 hex digits plus the terminating NUL */
sprintf(hexstr, "%02X", numToConvert); // numToConvert is a decimal integer value like 350
At this point, I have a string in hexstr that's something like "15E" (again, using the hex value of 350 for an example).
Now, I need to store this in a byte array so that it looks something like: myArray = {0X00, 0X00, 0X01, 0X5E};
Obviously I can't just write: myArray = {0X00, 0X00, 0X01, 0X5E} because the values will be different every time, since a new random number is generated every time.
Currently, I do it like this (pseudocode because the string manipulation part is irrelevant but long):
lastTwoChars = getLastTwoCharsFromString(hexstr); // so lastTwoChars would now contain "5E"
Then (actual code):
sscanf(lastTwoChars, "%X", &res); // my understanding is that res (an unsigned int) now holds the byte value of lastTwoChars
Then finally:
myArray[3] = res;
Then I take the next two rightmost chars from hexstr (with the sample value "15E", this would be "01": after removing "5E", only "1" is left, so I pad with a 0 on the left), convert it the same way using sscanf, and store it in myArray[2]. I repeat this for myArray[1] and myArray[0].
Then I send the array using write().
So, after hours of plugging away at it, this all does work... but because I don't use C very much, I have a nagging suspicion that I'm missing something. Can anyone comment on whether what I'm doing seems OK, or whether there's something obvious I'm using improperly or neglecting to use?

#include <stdio.h>
#include <limits.h>

int main(void)
{
    unsigned num = 0x15E;               /* num = 350 */
    int i, size = sizeof(unsigned);
    unsigned char myArray[size];

    /* Peel off the low byte each pass, filling the array from the end,
       so the most significant byte lands in myArray[0]. */
    for (i = size - 1; i >= 0; --i, num >>= CHAR_BIT) {
        myArray[i] = num & 0xFF;
    }
    for (i = 0; i < size; ++i) {
        printf("0X%02hhX ", myArray[i]);
    }
    printf("\n");
    return 0;
}
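For num = 0x15E on a typical system where unsigned is 4 bytes, the output is:
0X00 0X00 0X01 0X5E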

On the transmit side, convert a 32-bit number to a four-byte array with this code:
void ConvertValueToArray( uint32_t value, uint8_t array[] )
{
    int i;

    /* Store the least significant byte last, so the array ends up
       in big-endian (network) byte order. */
    for ( i = 3; i >= 0; i-- )
    {
        array[i] = value & 0xff;
        value >>= 8;
    }
}
On the receive side, convert the byte array back into a number with this code:
uint32_t ConvertArrayToValue( uint8_t array[] )
{
    int i;
    uint32_t value = 0;

    /* Consume the bytes most significant first, shifting as we go. */
    for ( i = 0; i < 4; i++ )
    {
        value <<= 8;
        value |= array[i];
    }
    return value;
}
Note that it's important not to use generic types like int when writing this kind of code, since an int can be different sizes on different systems. The fixed-width types are defined in <stdint.h>.
Here's a simple test that demonstrates the conversions (without actually sending the byte arrays over the network).
#include <stdio.h>
#include <stdint.h>

int main( void )
{
    uint32_t input, output;
    uint8_t byte_array[4];

    input = 350;
    ConvertValueToArray( input, byte_array );
    output = ConvertArrayToValue( byte_array );
    printf( "%u\n", (unsigned)output );   /* prints 350 */
    return 0;
}
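To actually move the bytes across the socket (which the test above skips), the usual POSIX calls apply. This is a sketch assuming sockfd is an already-connected socket descriptor; a robust program would also check the return values for short reads and writes:
#include <unistd.h>
...
write( sockfd, byte_array, sizeof byte_array );   /* transmit side */
read( sockfd, byte_array, sizeof byte_array );    /* receive side */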

If your array is 4-byte aligned (and even if it isn't on machines that support unaligned access), you can use the htonl function to convert a 32-bit integer from host to network byte order and store the whole thing at once:
#include <arpa/inet.h> // or <netinet/in.h>
...
*(uint32_t*)myArray = htonl(num);
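If you would rather avoid the cast entirely, a strictly portable alternative (no alignment or aliasing concerns) is to convert first and copy the bytes afterwards; a minimal sketch, assuming num and myArray as above:
#include <string.h>
...
uint32_t be = htonl(num);           /* convert to network (big-endian) byte order */
memcpy(myArray, &be, sizeof be);    /* byte-wise copy is safe for any alignment */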

Related

Is there a way to convert an int into binary and put it into an array

I am making a program for my Arduino that needs to access an EEPROM, but I need to find a way to send it an address. I have an int that I would like to convert into binary and send to the EEPROM, but I need to split it into an array so I can send the data to the EEPROM.
I can't think of any way to do this, and the people I have asked for help couldn't figure it out either.
void int_to_bin_array(unsigned int in, int count, int *out)
{
    unsigned int mask = 1U << (count - 1);   /* mask for the highest of the 'count' bits */
    int i;

    for (i = 0; i < count; i++) {
        out[i] = (in & mask) ? 1 : 0;  /* test the current top bit */
        in <<= 1;                      /* shift the next bit up into the mask position */
    }
}

int main(void)
{
    int binary_array[8];
    const int bin_size = 8;
    int decimal = 15;

    int_to_bin_array(decimal, bin_size, binary_array);
    return 0;
}
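For decimal = 15, binary_array ends up holding {0, 0, 0, 0, 1, 1, 1, 1}, most significant bit first.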
Memory addresses are conventionally written as hexadecimal values; you don't need to convert anything to binary here.
You need to correctly understand memory interfacing with your MCU (the Arduino in this case), along with embedded-systems concepts and pointers in C.
You can specify address values directly in hexadecimal (for example, uint8_t *addr = (uint8_t *) 0x1234ABCD;) and the compiler will convert them to the corresponding binary during compilation.
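For instance, writing to a memory-mapped register might look like the sketch below; the address 0x1234ABCD and the value 0x5A are made up for illustration:
#include <stdint.h>

volatile uint8_t *reg = (volatile uint8_t *) 0x1234ABCD;  /* hypothetical register address */
*reg = 0x5A;  /* both the address and the value are emitted as binary by the compiler */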

Compute the crc16 of a bytearray / userdata in Lua

I am writing a Wireshark protocol dissector in Lua. The protocol it parses contains a crc16 checksum, and the dissector should check whether the crc is correct.
I have found a crc16 implementation, written in C with Lua wrapper code, here. I have successfully compiled and run it (e.g. crc16.compute("test")). The problem is that it expects a string as input, while what I get from Wireshark is a buffer that seems to be of the Lua type userdata. So when I do
crc16.compute(buffer(5, 19))
Lua complains: bad argument #1 to 'compute' (string expected, got userdata).
compute() in the crc16 implementation looks like this:
static int compute(lua_State *L)
{
    const char *data;
    size_t len = 0;
    unsigned short r, crc = 0;

    data = luaL_checklstring(L, 1, &len);
    for ( ; len > 0; len--)
    {
        r = (unsigned short)(crc >> 8);
        crc <<= 8;
        crc ^= crc_table[r ^ *data];
        data++;
    }
    lua_pushinteger(L, crc);
    return 1;
}
It seems luaL_checklstring fails. So I guess I would either need to convert the input into a Lua string (I am not sure that works, as not all bytes of my input are necessarily printable characters), or adjust the above code so it accepts input of type userdata. I found lua_touserdata(), but it seems to return something like a pointer, so I would need a second argument for the length, right?
I don't necessarily need to use this implementation. Any crc16 implementation for Lua that accepts userdata would solve the problem perfectly.
The buffer that you get from Wireshark can be used as a ByteArray like this:
byte_array = buffer(5,19):bytes()
ByteArray has a __tostring metamethod that converts the bytes into a string representation of the bytes in hex. So you can call the crc function like this:
crc16.compute(tostring(byte_array))
'Representation of the bytes represented as hex' means an input byte with the bits 11111111 will turn into the ASCII string "FF". The ASCII string "FF" is 01000110 01000110 in bits, or 46 46 in hex. This means that what you get in C is not the original byte array: you need to decode the ASCII representation back into the original bytes before computing the crc, otherwise you will obviously get a different crc.
First, this function converts a single character c containing one ascii hex character back into the value it represents:
static char ascii2char(char c) {
    c = tolower(c);   /* requires <ctype.h> */
    if (c >= '0' && c <= '9')
        return c - '0';
    else if (c >= 'a' && c <= 'f')
        return c - 'a' + 10;
    return 0;   /* not a hex digit; fall back to 0 rather than returning garbage */
}
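For example, ascii2char('5') returns 5 and ascii2char('E') returns 14.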
Now in the compute function we loop through the string representation, always combining two characters into one byte.
int compute(lua_State *L) {
    size_t len;
    const char *str = lua_tolstring(L, 1, &len);
    uint8_t *data = (uint8_t *) malloc(len / 2);

    /* Combine each pair of hex characters back into one byte. */
    for (size_t n = 0; n < len / 2; n++) {
        data[n] = ascii2char(str[2 * n]) << 4;
        data[n] |= ascii2char(str[2 * n + 1]);
    }

    crc16_t crc = crc16_init();
    crc = crc16_update(crc, data, len / 2);
    crc = crc16_finalize(crc);
    lua_pushinteger(L, crc);
    free(data);
    return 1;
}
In this example, I used the crc functions crc16_init, crc16_update and crc16_finalize generated with pycrc, not the crc implementation linked in the question. The catch is that you need to use the same polynomial, initial value, and so on as were used when generating the crc; pycrc lets you generate matching crc functions as needed.
My packets also contain a crc32. pycrc can generate code for crc32 as well, so it all works the same way for crc32.
Christopher K's answer is mostly correct, but converting the hex representation back into bytes seemed like unnecessary hard work, which got me looking further, since I was searching for something like this.
The missed trick is that as well as calling the function with buffer:bytes(), you can also call
buffer:raw()
This provides exactly what is needed: a plain TSTRING that can be passed to the C code directly, without the ASCII conversions that would, I imagine, add significant load on the C side.

Char array to unsigned short conversion issue

I am trying to convert a char array to unsigned short, but it's not working as it should.
char szASCbuf[64] = "123456789123456789123456789";

int StoreToFlash(char szASCbuf[], int StartAddress)
{
    int iCtr;
    int ErrorCode = 0;
    int address = StartAddress;
    unsigned short *us_Buf = (unsigned short *)szASCbuf;

    // Write to flash
    for (iCtr = 0; iCtr < 28; iCtr++)
    {
        ErrorCode = Flash_Write(address++, us_Buf[iCtr]);
        if ((ErrorCode & 0x45) != 0)
        {
            Flash_ClearError();
        }
    }
    return ErrorCode;
}
When I look at the conversion, us_Buf[0] has the value 12594, us_Buf[1] = 13108, and so on, but I have values only up to us_Buf[5]; after that it is 0 for all the remaining addresses.
I have also tried declaring the char array like this:
char szASCbuf[64] = {'1','2','3','4','5','6','7','8','9','1',.....'\0'};
I am passing the parameters to the function like this:
StoreToFlash(szASCbuf, FlashPointer); // FlashPointer = 0
I am using IAR Embedded Workbench for ARM, big-endian, 32-bit.
Any suggestions where I am going wrong?
Thanks in advance.
Reinterpreting the char array szASCbuf as an array of short is not safe because of alignment issues. The char type has the least strict alignment requirements and short is usually stricter. This means that szASCbuf might start at address 13, whereas a short would have to start at either 12 or 14.
This also violates the strict aliasing rule, since szASCbuf and us_Buf point at the same location while having different pointer types. The compiler might perform optimisations which don't take this into account, and this can manifest as some very nasty bugs.
The correct way to write this code is to iterate over the original szASCbuf with a step of 2 and do some bit-twiddling to produce a 2-byte value:
for (size_t i = 0; i < sizeof(szASCbuf); i += 2) {
    uint16_t value = (szASCbuf[i] << 8) | szASCbuf[i + 1];
    ErrorCode = Flash_Write(address++, value);
    if (ErrorCode & 0x45) {
        Flash_ClearError();
    }
}
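For the first two characters '1' and '2', this produces (0x31 << 8) | 0x32 = 0x3132 = 12594, the same value the original cast yielded on the big-endian target.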
If you really intended to treat the digit characters as their numeric values, this will do it:
uint16_t value = (szASCbuf[i] - '0') + (szASCbuf[i + 1] - '0');
In case you just want the numeric value of each character in a 2-byte value (1, 2, 3, 4, ...), iterate over the array with a step of 1 and fetch it this way:
uint16_t value = szASCbuf[i] - '0';
That's normal!
Your char array is "123456789123456789123456789", i.e. {'1','2','3','4','5','6','7','8','9','1',.....'\0'}.
But in ASCII, '1' is 0x31 and '2' is 0x32, so when you read the array through a short * on a big-endian architecture, you get:
{ 0x3132, 0x3334, ... }
or, expressed in decimal:
{ 12594, 13108, ... }

Converting data types in C

Let me start by saying that I openly admit this is for a homework assignment, but what I am asking is not related to the purpose of the assignment, just something I don't understand in C. This is just a very small part of a large program.
So my issue is, I have a set of data that consists of various data types, as follows:
[16 bit number][16 bit number][16 bit number][char[234]][128 bit number]
where each block represents a variable from elsewhere in the program.
I need to send that data 8 bytes at a time into a function that accepts uint32_t[2] as input. How do I convert my 234-byte char array into uint32_t without losing the char values?
In other words, I need to be able to convert back from the uint32_t version to the original char array later on. I know a char is 1 byte and its value can also be represented as a number via its ASCII value, but I'm not sure how to convert between the two, since some letters have a 3-digit ASCII value and others have 2.
I tried to use sprintf to grab 8-byte blocks from the data set and store each value in a uint32_t[2] variable. It works, but then I lose the original char array because I can't figure out a way to go back/undo it.
I know there has to be a relatively simple way to do this; I'm just lacking enough skill in C to make it happen.
Your question is very confusing, but I am guessing you are preparing some data structure for encryption with a function that requires 8 bytes, i.e. two uint32_t values.
You can convert a char array to uint32_t blocks as follows:
#define NELEM 234

char a[NELEM];
uint64_t b[(NELEM + sizeof(uint64_t) - 1) / sizeof(uint64_t)]; /* rounds up to a whole number of 8-byte elements */

memcpy(b, a, NELEM);  /* needs <string.h>; bytes of b past NELEM are left uninitialized */
for (size_t i = 0; i < sizeof b / sizeof b[0]; i++) {
    encryption_thing(b[i]);
}
Alternatively, if you need to change endianness or something similar, it is more complicated than a plain memcpy.
#include <stdint.h>

void f(uint32_t a[2]) {}

int main() {
    char data[234]; /* GCC can explicitly align this with: __attribute__ ((aligned (8))) */
    int i = 0;
    int stride = 8;

    for (; i < 234 - stride; i += stride) {
        f((uint32_t *)&data[i]);
    }
    return 0;
}
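Note that with this loop condition the final bytes of data (here bytes 232 and 233) are never passed to f; a real implementation would have to pad the buffer or handle that tail separately.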
I need to send that data 8 bytes at a time into a function that accepts uint32_t[2] as input. How do I convert my 234-byte char array into uint32_t without losing the char values?
you could use a union for this:
typedef union
{
    unsigned char arr[128];  // use unsigned char
    uint32_t uints[32];      // 128 bytes / 4 bytes per uint32_t
} myvaluetype;

myvaluetype value;
memcpy(value.arr, your_array, sizeof(value.arr));
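(The 128 here is just an example size; for the 234-byte buffer in the question you would size arr accordingly, rounded up to a multiple of 4 so both union members span the same bytes.)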
say the prototype that you want to feed 2 uint32_t at a time is something like:
foo(uint32_t *p);
you can now send the data 8 bytes at a time with:
for (int i = 0; i < 32; i += 2)
{
    foo(value.uints + i);
}
then use the same union to convert back.
Of course some care must be taken about padding/alignment, and you also don't mention whether it is sent over a network etc., so there are other factors to consider.

Convert HEX value to (chars, string, letters and numbers)

I'm programming an AVR microcontroller, an ATmega16, in C.
I don't understand C; I spend my time programming in PHP, and I just don't get it in C.
I also don't get how to post source code here; if someone knows, please fix my tags.
My problem is, I have this function:
unsigned char
convert_sn_to_string (const unsigned char *SN_a /* IDstring */ )
{
    unsigned char i;

    for (i = 0; i < 6; i++)   /* the serial number is 6 bytes */
    {
        if (SN_a[i] == 0x00)
        {
            Send_A_String("00");
        }
    }
    return 1;
}
Inside the for loop I can access the 6-byte value in hex.
The variable SN_a can contain
SN_a[0]=0x00;
SN_a[1]=0xFF;
SN_a[3]=0xAA;
SN_a[4]=0x11;
(...)
and similar.
Each byte can be anything from 00 to FF.
What I need is to convert that data to a 12-character string.
If I have
SN_a[0]=0x00;
SN_a[1]=0xFF;
SN_a[3]=0xAA;
SN_a[4]=0x11;
(...)
I would like to get as output
new[0]=0;
new[1]=0;
new[2]=A;
new[3]=A;
new[4]=1;
new[5]=1;
(...)
and so on up to 12, because I would like to split each of those 6 double values, like AA, into separate characters.
So then i can do a loop
for i=0 i<12 i++
{
do_something_with_one_letter(new[i]);
}
Now i can play with that values, i can send them to display, or anything i need.
A hex value is simply another way of writing an integer: 0xFF == 255.
So, if you want to "split" them, you first need to decide how you want to split them; this essentially determines how you stuff the split values into your new array.
To split a value into its two hex digits (nybbles), something like this can be used:
unsigned char hexval = 0x1A;
unsigned char low_nybble  = hexval & 0xF;         /* 0xA == 10 */
unsigned char high_nybble = (hexval >> 4) & 0xF;  /* 0x1 == 1  */
You now have 1 stored in high_nybble and 10 stored in low_nybble.
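Putting it together, here is a minimal self-contained sketch of what the question asks for: expanding a 6-byte serial number into a 12-character hex string. The helper name nybble_to_hex and the sample bytes are made up for illustration.
#include <stdio.h>

/* Hypothetical helper: map one 4-bit value (0-15) to its hex character. */
static char nybble_to_hex(unsigned char n)
{
    return (n < 10) ? ('0' + n) : ('A' + n - 10);
}

int main(void)
{
    const unsigned char SN_a[6] = { 0x00, 0xFF, 0xAA, 0x11, 0x2B, 0x3C }; /* sample data */
    char out[13];   /* 12 hex characters plus the terminating NUL */
    int i;

    for (i = 0; i < 6; i++) {
        out[2 * i]     = nybble_to_hex((SN_a[i] >> 4) & 0xF);  /* high nybble first */
        out[2 * i + 1] = nybble_to_hex(SN_a[i] & 0xF);         /* then low nybble */
    }
    out[12] = '\0';
    printf("%s\n", out);   /* prints 00FFAA112B3C */
    return 0;
}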
