Converting char to int [C]

I have a char byte that I want to convert to an int. Basically I am reading the value (which is 0x13) from a file opened with fopen and storing it in a char buffer called buff.
I am doing the following:
//assume buff[17] = 0x13
v->infoFrameSize = (int)buff[17] * ( 128^0 );
infoFrameSize is a type int that is stored in a structure called 'v'.
The value I get for v->infoFrameSize is 0x00000980. Should this be 0x00000013?
I tried taking out the multiply by 128 ^ 0 and I get the correct output:
v->infoFrameSize = 0x00000013
Any info or suggested reading material on what is happening here would be great. Thanks!

^ is the bitwise XOR operator, not exponentiation.

The ^ operator in C performs a bit operation: XOR.
128 XOR 0 equals 128.

In C, 128 ^ 0 evaluates to the bitwise XOR of 128 and 0; it doesn't raise 128 to the power of 0 (which would just be 1).
A char is simply an integer consisting of a single byte. To "convert" it to an int (which isn't really converting, you're just storing the byte into a larger data type) you do:
char c = 5;
int i = (int)c;
tada.

There is no point in the ^0 term. Anything xor'd with zero remains unchanged (so 128^0 is 128).
The value you get is correct; when you multiply 0x13 (aka 19) by 128 (aka 0x80), you get 0x0980 (aka 2432).
Why would you expect the assignment to ignore the multiplication?

128^0 is not doing what you think it does.
printf("%d\n", 128 ^ 0);
prints 128.
Try pow(128,0). Then, add the following to the top of your code:
#include <math.h>
Also, note that pow returns a double, so you'll need to cast your final answer to an int. So:
(int)(buff[17] * pow(128,0));

To convert a char to an int, you merely cast it:
char c = ...;
int x = (int) c;

K&R would have you read the one byte from the file using getc() and store it directly into an int, which eliminates any issues you might be seeing. However, if you are reading from the file into an array of bytes, simply cast to int as follows:
v->infoFrameSize = (int)buff[17];
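A minimal sketch of that getc() approach (the file name is a placeholder, and the offset 17 is assumed from the question's buff[17]):
#include <stdio.h>

int main(void)
{
    FILE *fp = fopen("file.bin", "rb");   /* hypothetical file name */
    if (fp == NULL)
        return 1;
    fseek(fp, 17, SEEK_SET);              /* position of the 0x13 byte */
    int c = getc(fp);                     /* getc() already returns an int */
    if (c != EOF)
        printf("0x%08x\n", c);            /* prints 0x00000013 */
    fclose(fp);
    return 0;
}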

I'm not sure why you're multiplying by 128^0.
The only problem I know of when converting from char to int is that char can be signed or unsigned, depending on the platform. If it happens to be signed, a big positive number stored inside a char may end up being considered as negative. When you will print it, it will either be a negative number or an abnormally big number (if you print it as an unsigned integer).
The solution is simply to use signed char or unsigned char explicitly in cases like this one.
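A short illustration of that pitfall (a sketch; 0x93 is a hypothetical value chosen so the top bit is set):
#include <stdio.h>

int main(void)
{
    char buff[32];
    buff[17] = (char)0x93;                    /* a byte with the top bit set */
    int a = (int)buff[17];                    /* sign-extends if char is signed */
    int b = (int)(unsigned char)buff[17];     /* always 0x00000093 */
    printf("%08x %08x\n", a, b);              /* e.g. ffffff93 00000093 */
    return 0;
}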

"^" is a bitwise XOR Operation, if you want to do an exponent use
pow(128,0);
Why are you multiplying by one?
You can convert from a char to an int by simply defining an int and setting it like so:
char x = 0x13;
int y;
y = (int)x;

Related

How to convert to integer a char[4] of "hexadecimal" numbers [C/Linux]

So I'm working with system calls in Linux. I'm using "lseek" to navigate through the file and "read" to read. I'm also using Midnight Commander to view the file in hexadecimal. The next 4 bytes I have to read are in little-endian order and look like this: "2A 00 00 00". But of course, the bytes can be something like "2A 5F B3 00". I have to convert those bytes to an integer. How do I approach this? My initial thought was to read them into an array of 4 chars and then build my integer from there, but I don't know how. Any ideas?
Let me give you an example of what I've tried. I have the following bytes in file "44 00". I have to convert that into the value 68 (4 + 4*16):
char value[2];
read(fd, value, 2);
int i = (value[0] << 8) | value[1];
The variable i is 17408 instead of 68.
UPDATE: Nvm. I solved it. I mixed up the indexes when shifting. It should've been value[1] << 8 ... | value[0]
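Spelled out for the full 4-byte case, the fix from the update might look like this (a sketch; read_le32 is a hypothetical helper name, and unsigned char avoids sign-extension surprises on bytes >= 0x80):
#include <unistd.h>

/* fd is an open file descriptor positioned at the 4 little-endian bytes */
int read_le32(int fd)
{
    unsigned char value[4] = {0};
    read(fd, value, 4);                 /* error handling omitted for brevity */
    unsigned int u = (unsigned int)value[0]
                   | ((unsigned int)value[1] << 8)
                   | ((unsigned int)value[2] << 16)
                   | ((unsigned int)value[3] << 24);
    return (int)u;   /* "44 00 00 00" yields 68; "2A 00 00 00" yields 42 */
}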
General considerations
There seem to be several pieces to the question -- at least how to read the data, what data type to use to hold the intermediate result, and how to perform the conversion. If indeed you are assuming that the on-file representation consists of the bytes of a 32-bit integer in little-endian order, with all bits significant, then I probably would not use a char[] as the intermediate, but rather a uint32_t or an int32_t. If you know or assume that the endianness of the data is the same as the machine's native endianness, then you don't need any other.
Determining native endianness
If you need to compute the host machine's native endianness, then this will do it:
static const uint32_t test = 1;
_Bool host_is_little_endian = *(char *)&test;
It is worthwhile doing that, because it may well be the case that you don't need to do any conversion at all.
Reading the data
I would read the data into a uint32_t (or possibly an int32_t), not into a char array. Possibly I would read it into an array of uint8_t.
uint32_t data;
int num_read = fread(&data, 4, 1, my_file);
if (num_read != 1) { /* ... handle error ... */ }
Converting the data
It is worthwhile knowing whether the on-file representation matches the host's endianness, because if it does, you don't need to do any transformation (that is, you're done at this point in that case). If you do need to swap byte order, note that ntohl() will not do it here: it converts network (big-endian) order to host order, so on a big-endian host it is an identity operation and cannot rearrange little-endian file data. A manual swap works everywhere:
if (!host_is_little_endian) {
    data = ((data & 0xffu) << 24) | ((data & 0xff00u) << 8)
         | ((data >> 8) & 0xff00u) | ((data >> 24) & 0xffu);
}
(This assumes that little- and big-endian are the only host byte orders you need to be concerned with. Historically, there have been others, which is why the byte-reorder functions come in pairs, but you are extremely unlikely ever to see one of the others.)
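An alternative that sidesteps the endianness probe entirely is to assemble the value from individual bytes; the result is in host order regardless of the machine (a sketch; read_u32le is a hypothetical helper name):
#include <stdint.h>
#include <stdio.h>

/* Read a little-endian 32-bit value, whatever the host byte order. */
int read_u32le(FILE *my_file, uint32_t *out)
{
    unsigned char b[4];
    if (fread(b, 1, 4, my_file) != 4)
        return -1;                        /* read error / short read */
    *out = (uint32_t)b[0]
         | ((uint32_t)b[1] << 8)
         | ((uint32_t)b[2] << 16)
         | ((uint32_t)b[3] << 24);
    return 0;
}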
Signed integers
If you need a signed instead of an unsigned integer, then you can do the same, but use a union (the members are named u and s below, because unsigned and signed are keywords and cannot be used as identifiers):
union {
    uint32_t u;
    int32_t s;
} data;
In all of the preceding, use data.u in place of plain data, and at the end, read out the signed result from data.s.
Suppose you point into your buffer:
unsigned char *p = &buf[20];
and you want to see the next 4 bytes as an integer and assign them to your integer, then you can cast it:
int i;
i = *(int *)p;
The cast says that p is now a pointer to an int; you de-reference that pointer and assign the result to i.
However, this depends on the endianness of your platform. If your platform has a different endianness, you may first have to reverse-copy the bytes to a small buffer and then use this technique. For example:
unsigned char ibuf[4];
int j;
for (j=3; j>=0; j--) ibuf[j] = *p++;
i = *(int *)ibuf;
EDIT
The suggestions and comments of Andrew Henle and Bodo could give:
unsigned char *p = &buf[20];
int i, j;
unsigned char *pi = (unsigned char *)&i;
for (j=3; j>=0; j--) *pi++= *p++;
// and the other endian:
int i, j;
unsigned char *pi = ((unsigned char *)&i) + 3;
for (j=3; j>=0; j--) *pi--= *p++;

Character to binary function doesn't work as expected

I have made a function to translate a number to its binary form:
size_t atobin (char n)
{
size_t bin= 0, pow;
for (size_t c= 1; n>0; c++)
{
pow= 1;
for (size_t i= 1; i<c; i++) //This loop is for getting the power of 10
pow*= 10;
bin+= (n%2)*pow;
n/= 2;
}
return bin;
}
It works great for numbers 1 to 127, but for greater numbers (128 to 255) the result is 0... I've tried using the type long long unsigned int for each variable, but the result was the same. Does anyone have an idea why?
Whether plain char in C is signed or unsigned is implementation-defined; on most platforms it is signed.
char is (mostly) 8 bits, and for signed char the MSB is used for the sign. As a result you can only use 7 bits for the magnitude.
(0111 1111)₂ = (127)₁₀ is the maximum value your function can work with, as you are passing a variable of a type which can hold at most 127.
If you use unsigned char then the MSB is not used as a sign bit. All 8 bits are used, giving a maximum possible value of (1111 1111)₂ = (255)₁₀.
For a signed char the min/max values are -128 to +127 (on a two's-complement machine).
For an unsigned char they are 0 to +255.
So even if you make the type of the passed parameter unsigned char, the maximum value it can hold is +255.
A bit more detail:
Q) What happens when you assign values >127 to your char parameter?
It is a signed char by default, 8 bits wide, and it can't hold such a value. So what happens?
The result is implementation-defined.
Suppose the value is 130. In binary that is 10000010. In most cases this comes out as -126, so that will be the value of n.
The condition n>0 fails, the loop is never entered, and the function returns 0.
Now if we make it unsigned char then it can hold values between 0 and 255 (inclusive). And that is what you want to have here.
Note:
Q) What happens when values >255 are stored in an unsigned char?
The value is reduced modulo (max value an unsigned char can hold + 1), which is 256. The result of that modulo operation is what gets stored in the unsigned char.
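Putting that together, a sketch of the fixed function (same algorithm as the question, with the parameter made unsigned char and the power of ten carried across iterations instead of recomputed):
#include <stddef.h>

size_t atobin (unsigned char n)   /* unsigned char: the full 0..255 range works */
{
    size_t bin = 0, pow = 1;
    while (n > 0)
    {
        bin += (n % 2) * pow;     /* append the current bit as a decimal digit */
        pow *= 10;
        n /= 2;
    }
    return bin;
}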

How can I split integers into bytes without using arithmetic in C?

I am implementing the four basic arithmetic functions (add, sub, division, multiplication) in C.
The basic structure of these functions, as I imagined it: the program gets two operands from the user using scanf, then splits these values into bytes and computes!
I've completed addition and subtraction, but I forgot that I shouldn't use arithmetic operators, so for splitting an integer into single bytes
I wrote code like
while(quotient!=0){
bin[i]=quotient%2;
quotient=quotient/2;
i++;
}
but since those are arithmetic operations I shouldn't use, I have to rewrite the splitting part, and I really have no idea how I can split an integer into single bytes without using % or /.
To access the bytes of a variable, type punning can be used.
According to standard C (C99 and C11), only unsigned char is guaranteed to make this operation safe.
This could be done in the following way:
typedef unsigned int myint_t;
myint_t x = 1234;
union {
    myint_t val;
    unsigned char byte[sizeof(myint_t)];
} u;
Now, you can of course access the bytes of x in this way:
u.val = x;
for (int j = 0; j < (int)sizeof(myint_t); j++)
    printf("%d ", u.byte[j]);
However, as WhozCrag has pointed out, there are issues with endianness.
It cannot be assumed that the bytes are in any particular order.
So, before doing any computation with bytes, your program needs to check how the endianness works.
#include <limits.h> /* To use UCHAR_MAX */
unsigned long int ByteFactor = 1u + UCHAR_MAX; /* 256 almost everywhere */
u.val = 0;
for (int j = sizeof(myint_t) - 1; j >= 0; j--)
    u.val = u.val * ByteFactor + j;
Now, when you print the values of u.byte[], you will see the order in which the bytes are arranged for the type myint_t.
The least significant byte will have value 0.
I assume 32-bit integers (if that's not the case then just change the sizes). There are several approaches:
BYTE pointer
#include <stdio.h>
typedef unsigned char BYTE; // if BYTE is not defined already
int main(void)
{
    int x = 0x11223344;     // your integer or whatever else data type
    BYTE *p = (BYTE*)&x;
    printf("%x\n", p[0]);
    printf("%x\n", p[1]);
    printf("%x\n", p[2]);
    printf("%x\n", p[3]);
    return 0;
}
just get the address of your data as BYTE pointer
and access the bytes directly via 1D array
union
#include <stdio.h>
typedef unsigned char BYTE; // if BYTE is not defined already
union
{
    int x;                  // your integer or whatever else data type
    BYTE p[4];
} a;
int main(void)
{
    a.x = 0x11223344;
    printf("%x\n", a.p[0]);
    printf("%x\n", a.p[1]);
    printf("%x\n", a.p[2]);
    printf("%x\n", a.p[3]);
    return 0;
}
and access the bytes directly via 1D array
[notes]
BYTE is not a standard type, which is why the snippets above typedef it to unsigned char
with the ALU you can use not only % and / but also >> and &, which is much faster, but that is still arithmetic
now depending on the platform endianness the output can be 11,22,33,44 or 44,33,22,11, so you need to take that into account (especially for code used on multiple platforms)
you need to handle the sign of the number; for unsigned integers there is no problem,
but for signed ones C uses two's complement, so it is better to separate the sign before splitting, like:
int s;
if (x<0) { s=-1; x=-x; } else s=+1;
// now split ...
[edit2] logical/bit operations
x<<n, x>>n - bit shifts of x left and right by n bits
x&y - bitwise logical AND (performs AND on each bit separately)
so when you have for example a 32 bit unsigned int (call it DWORD) you can split it into BYTEs like this:
typedef unsigned int DWORD; // 32 bit unsigned int
DWORD x;                    // input 32 bit unsigned int
BYTE a0,a1,a2,a3;           // output BYTEs, a0 is the least significant, a3 the most significant
x = 0x11223344;
a0 = (BYTE)((x    ) & 255); // should be 0x44
a1 = (BYTE)((x>> 8) & 255); // should be 0x33
a2 = (BYTE)((x>>16) & 255); // should be 0x22
a3 = (BYTE)((x>>24) & 255); // should be 0x11
this approach is not affected by endianness
but it uses the ALU
the point is to shift the bits you want down into positions 0..7 and mask out the rest
the &255 masks and the (BYTE) casts are not needed on all compilers, but some do weird stuff without them, especially on signed types like char or int
x>>n is the same as x/(pow(2,n))=x/(1<<n)
x&((1<<n)-1) is the same as x%(pow(2,n))=x%(1<<n)
so (x>>8)=x/256 and (x&255)=x%256
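A throwaway check of those equivalences (a sketch):
#include <stdio.h>

int main(void)
{
    unsigned x = 1000;
    printf("%u %u\n", x >> 8, x / 256);    // both print 3
    printf("%u %u\n", x & 255, x % 256);   // both print 232
    return 0;
}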

Using hex values in array index in C

I was reading through a C program which interfaces with hardware registers. The author has been using hexadecimal numbers as the index to an array, such as:
app_base_add[0x30]
I know that a[i] means *(a+i), which in terms of bytes is an offset of i*sizeof(*a) from a, so a hexadecimal index is probably the offset of the desired memory location in the address space w.r.t. app_base_add.
Is this right?
And also, given, say:
#define mm2s_start_add 0xc0000000
how would these assignments be different from each other in usage?
volatile unsigned int *app_base_add;
app_base_add[0x30>>2]=0x1;
app_base_add[0x30>>2]=1<<2;
app_base_add[0x30>>2]=((unsigned int)mm2s_start_add); //is this assignment valid??
app_base_add[0x30>>2]=((unsigned int *)mm2s_start_add);
There is no difference between writing 0x30 or 48 as the index; it may just be easier for the programmer to read if, say, the documentation of the memory map was written with hex values, but it's just a matter of taste.
e.g.
app_base_add[0x30>>2]=0x1;
is the same as writing app_base_add[12]=0x1;
or even app_base_add[0x0C]=0x1;
At compile time, all values are treated the same way, whether they are written in hexadecimal, binary, octal or decimal.
0x2a == 42 == 0b101010 == 052 (the 0b binary form is a common compiler extension)
The only assignment which could throw a warning is the last one, with the cast to unsigned int *, because your destination type is an unsigned int, not a pointer.
volatile unsigned int *app_base_add;
//app_base_add is a pointer to volatile memory(not initialized :()
app_base_add[0x30>>2]=0x1;
// = app_base_add[12] = 1;
app_base_add[0x30>>2]=1<<2;
// = app_base_add[12] = 4;
app_base_add[0x30>>2]=((unsigned int)mm2s_start_add); //is this assignment valid??
// yes, it's valid
// = app_base_add[12] = 3221225472
app_base_add[0x30>>2]=((unsigned int *)mm2s_start_add);
// = app_base_add[12] = interpret 3221225472 as an unsigned integer pointer and store it.
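To make the byte-offset relationship from the question explicit (a sketch; the base address is the question's mm2s_start_add value, and the write is only meaningful on the memory-mapped device):
volatile unsigned int *app_base_add = (volatile unsigned int *)0xc0000000;

/* each element is sizeof(unsigned int) == 4 bytes, so element index
   0x30>>2 == 12 touches the register at byte offset 0x30 from the base */
app_base_add[0x30 >> 2] = 0x1;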

Converting byte array to double

I'm trying to get the numerical (double) value from a byte array of 16 elements, as follows:
unsigned char input[16];
double output;
...
double a = input[0];
output = a;
for (i=1;i<16;i++){
a = input[i] << 8*i;
output += a;
}
but it does not work.
It seems that the temporary variable that contains the result of the left-shift can store only 32 bits, because after 4 shift operations of 8 bits it overflows.
I know that I can use something like
a = input[i] * pow(2,8*i);
but, out of curiosity, I was wondering if there's any solution to this problem using the shift operator...
Edit: this won't work (see comment) without something like __int128.
a = input[i] << 8*i;
The expression input[i] is promoted to int (6.3.1.1), which is 32 bits on your machine. To overcome this issue, the left-hand operand has to be 64-bit, like in
a = (1LL * input[i]) << 8*i; /* 1LL rather than 1L, since long may be only 32 bits */
or
a = (long long unsigned) input[i] << 8*i;
and remember about endianness
The problem here is that indeed 32-bit variables cannot be shifted by more than 4*8 bits, i.e. your code works for the first 4 chars only.
What you could do is find the first significant char and use Horner's rule: a_n*x^n + a_(n-1)*x^(n-1) + ... + a_0 = ((...(a_n*x + a_(n-1))*x + a_(n-2))*x + ...)*x + a_0, as follows:
char coefficients[16] = { 0, 0, ..., 14, 15 }; /* most significant coefficient last */
double result = 0.;
for (int exponent = 15; exponent >= 0; --exponent) {
    result *= 256.;                /* instead of <<8 */
    result += coefficients[exponent];
}
In short, no, you can't convert a sequence of bytes directly into a double by bit-shifting as shown in your code sample.
A byte (an integer type) and a double (a floating-point type, i.e. not an integer type) are not bitwise compatible; you can't just bit-shift the values of a bunch of bytes into a floating-point type and expect an equivalent result.
1) Assuming the byte array is a memory buffer referencing an integer value, you should be able to convert your byte array into a 128-bit integer via bit-shifting and then convert that resulting integer into a double. Don't forget that endianness issues may come into play depending on the CPU architecture.
2) Assuming the byte array is a memory buffer that contains a 128-bit long double value, and assuming there are no endian issues, you should be able to memcpy the value from the byte array into the long double value
#include <string.h>
typedef unsigned char BYTE;
union doubleOrByte {
    BYTE buffer[16];
    long double val;    /* assumes long double occupies 16 bytes here */
} dOrb;
dOrb.val = 3.14159267;
long double newval = 0.0;
memcpy((void*)&newval, (void*)dOrb.buffer, sizeof(dOrb.buffer));
Why not simply cast the array to a double pointer?
unsigned char input[16];
double *pd = (double*)input;   /* note: this bypasses strict-aliasing rules */
for (int i = 0; i < (int)(sizeof(input)/sizeof(double)); ++i)
    printf("%f\n", pd[i]);
If you need to fix endianness, reverse the char array (std::reverse in C++, or a simple loop in C) before treating it as a double array.
Have you tried std::atof:
http://www.cplusplus.com/reference/clibrary/cstdlib/atof/
Are you trying to convert a string representation of a number to a real number? In that case, the C-standard atof is your best friend.
Well, based on operator precedence, the right-hand side of
a = input[i] << 8*i;
gets evaluated before it gets converted to a double, so you are shifting input[i] by 8*i bits, which stores its result in a 32-bit temporary and thus overflows. You can try the following:
a = (long long unsigned int)input[i] << 8*i;
Edit: Not sure what the size of a double is on your system, but on mine it is 8 bytes. If that is the case for you as well, the second half of your input array will never be seen, as the shift will overflow even a 64-bit integer.
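Combining the answers above: assemble the first 8 bytes into a 64-bit integer with shifts, then reinterpret that bit pattern (a sketch, assuming the bytes hold a little-endian IEEE-754 double and that double is 8 bytes; the zeroed example bytes are placeholders):
#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    unsigned char input[16] = {0};             /* bytes from the device */
    uint64_t bits = 0;
    for (int i = 0; i < 8; i++)                /* 64-bit accumulator, so   */
        bits |= (uint64_t)input[i] << (8 * i); /* shifts up to 56 are fine */
    double output;
    memcpy(&output, &bits, sizeof output);     /* reinterpret, don't convert */
    printf("%f\n", output);
    return 0;
}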
