Hex array to Float - c

I have an array of
unsigned char array_a[4] = {0x00,0x00,0x08,0x4D};
unsigned char val[4];
float test;
what I want to do is combine all the elements and store them in val to make 0x0000084D, then convert it to a float, which is 2125.
I tried memcpy
memcpy(val,array_a,4);
val[4] = '\0';
but it still doesn't work.

First, 0x0000084D is the big-endian representation of the integer value 2125, not of an IEEE float.
Second, there is no need to copy to another char array (and val[4] = '\0' accesses a 5th element out of bounds in an attempt to "nul-terminate" the array). That part makes no sense.
To convert this array to an integer on your host, copy it into a fixed-width 32-bit integer first, then convert it according to the endianness of your machine (otherwise you would get a wrong value on a little-endian machine):
#include <stdint.h>    /* uint32_t */
#include <string.h>    /* memcpy */
#include <arpa/inet.h> /* ntohl */
#include <stdio.h>

unsigned char array_a[4] = {0x00,0x00,0x08,0x4D};
uint32_t the_int;
memcpy(&the_int, array_a, sizeof(uint32_t));
the_int = ntohl(the_int);
printf("%u\n", (unsigned)the_int);
Or, without any external conversion functions, use bit shifting, which makes it endian-independent:
uint32_t the_int = 0;
int i;
for (i = 0; i < sizeof(uint32_t); i++)
{
    the_int <<= 8;
    the_int += array_a[i];
}
You get 2125 all right; now you can assign it to a float if you like:
float test = the_int;
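For reference, a minimal complete program combining the pieces above (using the shifting variant, so no networking header is needed; the main wrapper is mine):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    unsigned char array_a[4] = {0x00, 0x00, 0x08, 0x4D};
    uint32_t the_int = 0;
    int i;

    for (i = 0; i < 4; i++)
    {
        the_int <<= 8;
        the_int += array_a[i];
    }

    float test = the_int;                       /* 2125 becomes 2125.0f */
    printf("%u %f\n", (unsigned)the_int, test); /* prints: 2125 2125.000000 */
    return 0;
}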

Related

Float to Binary in C

I am asked to convert a float number into a 32-bit unsigned integer. I then have to check if all the bits are zero, but I am having trouble with this. Sorry, I am new to C.
This is what I am doing
float number = 12.5;
// copying number into a 32-bit unsigned int
unsigned int val_number = *((unsigned int*) &number);
At this point I'm very confused on how to check if all bits are zero.
I think I need to loop through all the bits but I don't know how to do that.
To copy the bytes of a 32-bit float to an integer, it is best to copy into an integer type that is certainly 32 bits wide; unsigned int may be narrower than, the same width as, or wider than 32 bits.
#include <inttypes.h> /* uint32_t */
#include <string.h>   /* memcpy */

float number = 12.5;
uint32_t val_number32; // 32-bit type
memcpy(&val_number32, &number, sizeof val_number32);
Avoid the cast-and-assign; it leads to aliasing problems with modern compilers, as @Andrew notes.
"... need cast the addresses of a and b to type (unsigned int *) and then dereference the addresses" reflects a risky programming technique.
To test if the bits of the unsigned integer are all zero, simply test with the constant 0.
int bit_all_zero = (val_number32 == 0);
An alternative is to use a union to access the same bytes as two different types.
union {
    float val_f;
    uint32_t val_u;
} x = { .val_f = 12.5f };
int bit_all_zero = (x.val_u == 0);
Checking if all the bits are zero is equivalent to checking if the number is zero.
So it would be int is_zero = (val_number == 0);
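For reference, a minimal complete version of the memcpy approach (the wrapper is mine):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    float number = 12.5f;
    uint32_t val_number32;

    memcpy(&val_number32, &number, sizeof val_number32); /* copy the raw bits */

    int bits_all_zero = (val_number32 == 0);
    printf("bits: 0x%08X, all zero: %d\n", (unsigned)val_number32, bits_all_zero);
    return 0;
}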

C - unsigned int to unsigned char array conversion

I have an unsigned int number (2 bytes) and I want to convert it to the unsigned char type. From my search, I find that most people recommend doing the following:
unsigned int x;
...
unsigned char ch = (unsigned char)x;
Is this the right approach? I ask because unsigned char is 1 byte and we cast from 2 bytes of data down to 1 byte.
To prevent any data loss, I want to create an array of unsigned char[] and save the individual bytes into the array. I am stuck at the following:
unsigned char ch[2];
unsigned int num = 272;
for(i=0; i<2; i++){
// how should the individual bytes from num be saved in ch[0] and ch[1] ??
}
Also, how would we convert the unsigned char[2] back to unsigned int?
Thanks a lot.
You can use memcpy in that case:
memcpy(ch, (char*)&num, 2); /* although sizeof(int) would be better */
Also, how would we convert the unsigned char[2] back to unsigned int?
The same way, just reverse the arguments of memcpy.
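For example (zero-initialize num first if unsigned int is wider than 2 bytes, so the remaining bytes are defined):
unsigned int num = 0;
memcpy(&num, ch, 2); /* copy the two stored bytes back */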
How about:
ch[0] = num & 0xFF;
ch[1] = (num >> 8) & 0xFF;
The converse operation is left as an exercise.
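For completeness, a sketch of that converse, reassembling the value from the same byte order used above:
unsigned int num = ch[0] | ((unsigned int)ch[1] << 8);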
How about using a union?
union {
unsigned int num;
unsigned char ch[2];
} theValue;
theValue.num = 272;
printf("The two bytes: %d and %d\n", theValue.ch[0], theValue.ch[1]);
It really depends on your goal: why do you want to convert this to an unsigned char? Depending on the answer to that there are a few different ways to do this:
Truncate: This is what was recommended. If you are just trying to squeeze data into a function which requires an unsigned char, simply cast: uchar ch = (uchar)x (but, of course, beware of what happens if your int is too big).
Specific endian: Use this when your destination requires a specific format. Usually networking code likes everything converted to big endian arrays of chars:
int n = sizeof x;
for(int y = 0; n-- > 0; y++)
    ch[y] = (x >> (n*8)) & 0xff;
does that.
Machine endian. Use this when there is no endianness requirement, and the data will only occur on one machine. The order of the array will change across different architectures. People usually take care of this with unions:
union {int x; char ch[sizeof (int)];} u;
u.x = 0xf00;
// use u.ch
with memcpy:
uchar ch[sizeof(int)];
memcpy(&ch, &x, sizeof x);
or with plain casting (accessing the bytes through a character pointer is allowed, but the same trick through other pointer types is undefined behavior and crashes on numerous systems):
unsigned char *ch = (unsigned char *)&x;
Of course, an array of chars large enough to hold a larger value has to be exactly as big as that value itself.
So you can simply pretend that this larger value already is an array of chars:
unsigned int x = 12345678; // well, it should be just 1234
unsigned char* pChars;
pChars = (unsigned char*) &x;
pChars[0]; // one byte is here
pChars[1]; // another byte here
(Once you understand what's going on, it can be done without any variables, all just casting)
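For example, the same byte can be read in a single expression, with just the cast:
unsigned char first_byte = ((unsigned char*)&x)[0]; /* same byte as pChars[0] */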
You just need to extract those bytes using the bitwise & operator. 0xFF is a hexadecimal mask to extract one byte. Please look at various bit operations here - http://www.catonmat.net/blog/low-level-bit-hacks-you-absolutely-must-know/
An example program is as follows:
#include <stdio.h>
int main()
{
    unsigned int i = 0x1122;
    unsigned char c[2];
    c[0] = i & 0xFF;
    c[1] = (i >> 8) & 0xFF;
    printf("c[0] = %x \n", c[0]);
    printf("c[1] = %x \n", c[1]);
    printf("i = %x \n", i);
    return 0;
}
Output:
$ gcc 1.c
$ ./a.out
c[0] = 22
c[1] = 11
i = 1122
$
Endorsing @abelenky's suggestion, using a union would be a more fail-proof way of doing this.
union unsigned_number {
unsigned int value; // An int is 4 bytes long
unsigned char index[4]; // A char is 1 byte long
};
The characteristic of this type is that the compiler will allocate memory only for the biggest member of our data structure unsigned_number, which in this case is going to be 4 bytes, since both members (value and index) have the same size. Had you defined it as a struct instead, 8 bytes would have been allocated in memory, since the compiler allocates space for all the members of a struct.
Additionally, and here is where your problem is solved, the members of a union all share the same memory location, which means they all refer to the same data - think of it like a hard link on GNU/Linux systems.
So we would have:
union unsigned_number my_number;
// Assigning decimal value 202050300 to my_number
// which is represented as 0xC0B0AFC in hex format
my_number.value = 0xC0B0AFC; // Representation: Binary - Decimal
// Byte 3: 00001100 - 12
// Byte 2: 00001011 - 11
// Byte 1: 00001010 - 10
// Byte 0: 11111100 - 252
// Printing out my_number one byte at time
for (int i = 0; i < (sizeof(my_number.value)); i++)
{
printf("index[%d]: %u, 0x%x\n", \
i, my_number.index[i], my_number.index[i]);
}
// Printing out my_number as an unsigned integer
printf("my_number.value: %u, 0x%x", my_number.value, my_number.value);
And the output is going to be:
index[0]: 252, 0xfc
index[1]: 10, 0xa
index[2]: 11, 0xb
index[3]: 12, 0xc
my_number.value: 202050300, 0xc0b0afc
And as for your final question, we wouldn't have to convert from unsigned char back to unsigned int, since the values are already there; you just have to choose which way you want to access them.
Note 1: I am using an integer of 4 bytes in order to ease the understanding of the concept. For the problem you presented you must use:
union unsigned_number {
unsigned short int value; // A short int is 2 bytes long
unsigned char index[2]; // A char is 1 byte long
};
Note 2: I have assigned byte 0 the value 252 in order to point out the unsigned characteristic of our index field. Had it been declared as a signed char, we would have index[0]: -4, 0xfc as output.
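Using that two-byte definition with the value from the question (272, i.e. 0x0110), a quick sketch; the byte values shown assume a little-endian machine:
union unsigned_number my_short;
my_short.value = 272;                        /* 0x0110 */
printf("index[0]: %u\n", my_short.index[0]); /* 16 (0x10) on little-endian */
printf("index[1]: %u\n", my_short.index[1]); /* 1  (0x01) on little-endian */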

Type coercion in c: unsigned int to float

I'm communicating serially between a host pc and an embedded processor. On the embedded side, I need to parse character strings for floating point and integer data. What I am currently doing is something along these lines:
inline float32* fp_unpack(float32* dest, volatile char* str) {
Uint32 temp = (Uint32)str[3]<<24;
temp |= (Uint32)str[2]<<16;
temp |= (Uint32)str[1]<<8;
temp |= (Uint32)str[0];
temp = (float32)temp;
*dest = (float32)temp;
return dest;
}
Where str has four characters, each representing a byte of the float. The bytes in str are ordered little-endian.
As an example, I'm trying to extract the number 100.0 from str. I've verified the contents of string are:
s[0]: 0x00,
s[1]: 0x00,
s[2]: 0x20,
s[3]: 0x41,
which is the 32 bit floating point representation of 100.0. Furthermore, I've verified that the function successfully sets temp to 0x41200000. However, dest ends up being 0x4e824000. I know the problem arises from the line: *dest = (float32)temp, which I hoped would simply copy the bits from temp to dest, with a typecast to make the compiler happy.
However, I've realized that this won't be the case, since the operation: float x = (float)4/3 actually converts 4 to 4.0, ie changing the bits.
How do I coerce the bits in temp into dest?
Thanks in advance
edit: Note that 0x41200000 as an integer is 1092616192, which, as a float, is 0x4E824000.
You need to cast the pointers. Casting the values simply converts the int to float. Try:
*dest = *((float32*)&temp);
The portable way that does not invoke undefined behavior due to aliasing rules violations:
float f;
uint32_t i;
memcpy(&f, &i, sizeof f);
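Applied to the question's fp_unpack, that means replacing the final assignment with a raw byte copy (a sketch; memcpy needs <string.h>):
/* instead of: *dest = (float32)temp;  (a value conversion) */
memcpy(dest, &temp, sizeof *dest);     /* copy the raw bits  */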
Here is one more solution:
union test {
float f;
unsigned int i;
} x;
float flt = 100.0;
unsigned int uint;
x.f = flt;
uint = x.i;
Now uint has the same bit pattern as the float flt.
Isn't the hex (IEEE 754) representation of float 100.0 actually 0x42C80000?

How to convert from integer to unsigned char in C, given integers larger than 256?

As part of my CS course I've been given some functions to use. One of these functions takes a pointer to unsigned chars to write some data to a file (I have to use this function, so I can't just make my own purpose built function that works differently BTW). I need to write an array of integers whose values can be up to 4095 using this function (that only takes unsigned chars).
However am I right in thinking that an unsigned char can only have a max value of 256 because it is 1 byte long? I therefore need to use 4 unsigned chars for every integer? But casting doesn't seem to work with larger values for the integer. Does anyone have any idea how best to convert an array of integers to unsigned chars?
Usually an unsigned char holds 8 bits, with a max value of 255. If you want to know this for your particular compiler, print out CHAR_BIT and UCHAR_MAX from <limits.h>. You could extract the individual bytes of a 32-bit int:
#include <stdint.h>
void
pack32(uint32_t val, uint8_t *dest)
{
    dest[0] = (val & 0xff000000) >> 24;
    dest[1] = (val & 0x00ff0000) >> 16;
    dest[2] = (val & 0x0000ff00) >> 8;
    dest[3] = (val & 0x000000ff);
}

uint32_t
unpack32(uint8_t *src)
{
    uint32_t val;
    val  = (uint32_t)src[0] << 24;  /* cast so the shift happens in 32 unsigned bits */
    val |= (uint32_t)src[1] << 16;
    val |= (uint32_t)src[2] << 8;
    val |= (uint32_t)src[3];
    return val;
}
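A quick round-trip check of those helpers (my own usage sketch):
#include <stdio.h>

int main(void)
{
    uint8_t buf[4];
    pack32(4095, buf);                       /* buf = 0x00 0x00 0x0F 0xFF */
    printf("%u\n", (unsigned)unpack32(buf)); /* prints 4095 */
    return 0;
}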
Unsigned char generally has a size of 1 byte, therefore you can decompose any other type into an array of unsigned chars (e.g. for a 4-byte int you can use an array of 4 unsigned chars). Your exercise is probably about generics. You should write the file as a binary file using the fwrite() function, and just write byte after byte into the file.
The following example should write a number (of any data type) to the file. I am not sure if it works since you are forcing the cast to unsigned char * instead of void *.
#include <stdio.h>

int homework(unsigned char *foo, size_t size)
{
    // open file for binary writing
    FILE *f = fopen("work.txt", "wb");
    if (f == NULL)
        return 1;
    // write size bytes of data to the file
    fwrite(foo, sizeof(char), size, f);
    fclose(f);
    return 0;
}
I hope the given example at least gives you a starting point.
Yes, you're right; a char/byte holds 8 bits, so that is 2^8 distinct values, which is zero to 2^8 - 1, or zero to 255. Do something like this to get the bytes:
int x = 0;
char* p = (char*)&x;
for (int i = 0; i < sizeof(x); i++)
{
//Do something with p[i]
}
(Declaring i inside the for loop requires C99 or later, but it's more readable. :) )
Do note that this code may not be portable, since it depends on the processor's internal storage of an int.
If you have to write an array of integers, just treat the array as a pointer to unsigned char and run through all of its bytes.
int main()
{
    int data[] = { 1, 2, 3, 4, 5 };
    size_t size = sizeof(data)/sizeof(data[0]); // Number of integers.
    unsigned char* out = (unsigned char*)data;
    for(size_t loop = 0; loop < (size * sizeof(int)); ++loop)
    {
        MyProfSuperWrite(out + loop); // Write 1 unsigned char
    }
}
Now, people have mentioned that 4095 will fit in fewer bits than a normal integer. Probably true. Thus you can save space and not write out the top bits of each integer. Personally I think this is not worth the effort. The extra code to write the values and process the incoming data is not worth the savings you would get (maybe if the data were the size of the Library of Congress). Rule one: do as little work as possible (it's easier to maintain). Rule two: optimize if asked (but ask why first). You may save space but it will cost in processing time and maintenance.
The part of the assignment that says integers whose values can be up to 4095 using this function (that only takes unsigned chars) should be giving you a huge hint: 4095 unsigned is 12 bits.
You can store the 12 bits in a 16-bit short, but that is somewhat wasteful of space -- you are only using 12 of the 16 bits of the short. Since you are dealing with more than 1 byte per value, you may need to deal with the endianness of the result. This is the easiest option.
You could also do a bit field or some packed binary structure if you are concerned about space. More work.
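A sketch of that idea (the helper names are mine): since 4095 fits in 12 bits, each value can be split into two unsigned chars and recombined later:
/* pack one value in the range 0..4095 into two unsigned chars (low byte first) */
void pack12(unsigned int value, unsigned char out[2])
{
    out[0] = value & 0xFF;        /* low 8 bits  */
    out[1] = (value >> 8) & 0x0F; /* high 4 bits */
}

unsigned int unpack12(const unsigned char in[2])
{
    return in[0] | ((unsigned int)in[1] << 8);
}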
It sounds like what you really want to do is call sprintf to get a string representation of your integers. This is a standard way to convert from a numeric type to its string representation. Something like the following might get you started:
char num[5]; // Room for 4095
// Array is the array of integers, and arrayLen is its length
for (i = 0; i < arrayLen; i++)
{
sprintf (num, "%d", array[i]);
// Call your function that expects a pointer to chars
printfunc (num);
}
Without information on the function you are directed to use regarding its arguments, return value and semantics (i.e. the definition of its behaviour), it is hard to answer. One possibility is:
Given:
void theFunction(unsigned char* data, int size);
then
int array[SIZE_OF_ARRAY];
theFunction((unsigned char*)array, sizeof(array));
or
theFunction((unsigned char*)array, SIZE_OF_ARRAY * sizeof(*array));
or
theFunction((unsigned char*)array, SIZE_OF_ARRAY * sizeof(int));
All of which will pass all of the data to theFunction(), but whether that makes any sense will depend on what theFunction() does.

converting byte array to double - c

I'm trying to get the numerical (double) value from a byte array of 16 elements, as follows:
unsigned char input[16];
double output;
...
double a = input[0];
output = a;
for (i = 1; i < 16; i++){
    a = input[i] << 8*i;
    output += a;
}
but it does not work.
It seems that the temporary variable that contains the result of the left-shift can store only 32 bits, because after 4 shift operations of 8 bits it overflows.
I know that I can use something like
a = input[i] * pow(2,8*i);
but, out of curiosity, I was wondering if there's any solution to this problem using the shift operator...
Edit: this won't work (see comment) without something like __int128.
a = input[i] << 8*i;
The expression input[i] is promoted to int (6.3.1.1), which is 32 bits on your machine. To overcome this issue, the left-hand operand has to be 64 bits wide, as in
a = (1ULL * input[i]) << 8*i;
or
a = (long long unsigned) input[i] << 8*i;
and remember about endianness
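For example, a sketch (the function name is mine) assembling the first 8 bytes with input[0] as the least significant byte; the result can then be assigned to a double, keeping in mind that values above 2^53 lose precision:
#include <stdint.h>

uint64_t bytes_to_u64_le(const unsigned char *input)
{
    uint64_t value = 0;
    int i;
    for (i = 0; i < 8; i++)
        value |= (uint64_t)input[i] << (8 * i);
    return value;
}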
The problem here is that indeed a 32-bit variable cannot be shifted by more than 4*8 bits, i.e. your code works for the first 4 chars only.
What you could do is find the most significant char and use Horner's rule: a_n*x^n + a_(n-1)*x^(n-1) + ... + a_0 = ((...(a_n*x + a_(n-1))*x + a_(n-2))*x + ...)*x + a_0, as follows:
char coefficients[16] = { 0, 0, ..., 14, 15 };
double result = 0.;
for (int exponent = 15; exponent >= 0; --exponent) {
    result *= 256.; // instead of << 8
    result += coefficients[exponent];
}
In short, no, you can't convert a sequence of bytes directly into a double by bit-shifting as shown in your code sample.
A byte (an integer type) and a double (a floating-point type, i.e. not an integer type) are not bitwise compatible, i.e. you can't just bit-shift the values of a bunch of bytes into a floating-point type and expect an equivalent result.
1) Assuming the byte array is a memory buffer holding an integer value, you should be able to convert your byte array into a 128-bit integer via bit-shifting and then convert that resulting integer into a double. Don't forget that endianness issues may come into play depending on the CPU architecture.
2) Assuming the byte array is a memory buffer that contains a 128-bit long double value, and assuming there are no endianness issues, you should be able to memcpy the value from the byte array into the long double value:
union doubleOrByte {
    unsigned char buffer[16];
    long double val;
} dOrb;
dOrb.val = 3.14159267;

long double newval = 0.0;
memcpy((void*)&newval, (void*)dOrb.buffer, sizeof(dOrb.buffer));
Why not simply cast the array to a double pointer?
unsigned char input[16];
double* pd = (double*)input;
for (int i=0; i<sizeof(input)/sizeof(double); ++i)
cout << pd[i];
if you need to fix endian-ness, reverse the char array using the STL reverse() before casting to a double array.
Have you tried std::atof:
http://www.cplusplus.com/reference/clibrary/cstdlib/atof/
Are you trying to convert a string representation of a number to a real number? In that case, the C-standard atof is your best friend.
Well, based on operator precedence, the right-hand side of
a = input[i] << 8*i;
is evaluated before it is converted to a double, so you are shifting input[i] by 8*i bits, which stores its result in a 32-bit temporary and thus overflows. You can try the following:
a = (long long unsigned int)input[i] << 8*i;
Edit: Not sure what the size of a double is on your system, but on mine it is 8 bytes. If that is the case for you as well, the second half of your input array will never be seen, as the shift will overflow even a 64-bit type.
