What is the C equivalent for reinterpret_cast? - c

What is the C equivalent for the reinterpret_cast from C++?

int *foo;
float *bar;
// c++ style:
foo = reinterpret_cast< int * >(bar);
// c style:
foo = (int *)(bar);

If you can take the address of the value, one way is to cast a pointer to it to a pointer to a different type, and then dereference the pointer.
For example, a float-to-int conversion compared with a bit-level reinterpretation:
#include <stdio.h>

int main()
{
    float f = 1.0f;
    printf("f is %f\n", f);
    printf("(int) f is %d\n", (int)f);
    printf("f as an unsigned int:%x\n", *(unsigned int *)&f);
}
Output:
f is 1.000000
(int) f is 1
f as an unsigned int:3f800000
Note that this is not guaranteed to work by the C standard: reading a float through an unsigned int * violates the strict-aliasing rule. You cannot use reinterpret_cast to cast from float to int anyway, but it would be similar for a type that was supported (for example, between different pointer types).
Let's confirm the output above makes sense, anyway.
http://en.wikipedia.org/wiki/Single_precision_floating-point_format#IEEE_754_single-precision_binary_floating-point_format:_binary32
The last answer in binary:
0011 1111 1000 0000 0000 0000 0000 0000
This is IEEE-754 floating point format: a sign bit of 0, followed by an 8-bit exponent (011 1111 1), followed by a 23 bit mantissa (all zeroes).
To interpret the exponent, subtract 127: 01111111b = 127, and 127 - 127 = 0. The exponent is 0.
To interpret the mantissa, write it after 1 followed by a decimal point: 1.00000000000000000000000 (23 zeroes). This is 1 in decimal.
Hence the value represented by hex 3f800000 is 1 * 2^0 = 1, as we expected.
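If you want the same bit-level view without the pointer pun, a minimal sketch using memcpy (assuming float and unsigned int are both 32 bits wide) sidesteps the strict-aliasing question; the value printed still depends on the platform's float representation:
#include <stdio.h>
#include <string.h>

int main(void)
{
    float f = 1.0f;
    unsigned int bits;

    /* Copy the object representation instead of dereferencing a cast pointer. */
    memcpy(&bits, &f, sizeof bits);
    printf("f as an unsigned int:%x\n", bits); /* 3f800000 on IEEE-754 systems */
    return 0;
}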

C-style casts just look like type names in parentheses:
void *p = NULL;
int i = (int)p; // now i is most likely 0
Obviously there are better uses for casts than this, but that's the basic syntax.

It doesn't exist, because reinterpret_cast cannot change constness. For example,
int main()
{
    const unsigned int d = 5;
    int *g = reinterpret_cast<int *>(&d);
    (void)g;
}
will produce the error:
dk.cpp: In function 'int main()':
dk.cpp:5:41: error: reinterpret_cast from type 'const unsigned int*' to type 'int*' casts away qualifiers
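For contrast, the equivalent C-style cast is accepted by a C compiler without a diagnostic unless you enable something like -Wcast-qual; a minimal sketch:
int main(void)
{
    const unsigned int d = 5;
    int *g = (int *)&d;   /* compiles in C; the cast silently drops const */
    (void)g;
    return 0;
}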

A C-style cast is:
int* two = ...;
pointerToOne* one = (pointerToOne*)two;

What about a REINTERPRET operator for C:
#define REINTERPRET(new_type, var) ( * ( (new_type *) & var ) )
I don't like to say "reinterpret_cast", because cast means conversion (in C),
while reinterpret means the opposite: no conversion.
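A hedged usage sketch of that macro (it carries the same strict-aliasing caveat as the earlier *(unsigned int *)&f example, so the standard does not guarantee the result):
#include <stdio.h>

#define REINTERPRET(new_type, var) ( * ( (new_type *) & var ) )

int main(void)
{
    float f = 1.0f;
    /* Reinterpret the bits of f; no value conversion takes place. */
    printf("%x\n", REINTERPRET(unsigned int, f)); /* typically 3f800000 */
    return 0;
}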

You can freely cast pointer types in C as you would any other type.
To be complete:
void *foo;
some_custom_t *bar;
other_custom_t *baz;
/* Initialization... */
foo = (void *)bar;
bar = (some_custom_t *)baz;
baz = (other_custom_t *)foo;

Related

%d for unsigned integer

I accidentally used "%d" to print an unsigned integer using an online compiler. I thought errors would pop out, but my program can run successfully. It's good that my codes are working, but I just don't understand why.
#include <stdio.h>
int main() {
    unsigned int x = 1;
    printf("%d", x);
    return 0;
}
The value of the "unsigned integer" was small enough that the MSB (most significant bit) was not set. If it were, printf() would have treated the value as a "negative signed integer" value.
#include <stdio.h>
#include <stdint.h>

int main() {
    uint32_t x = 0x5;
    uint32_t y = 0xC0000000;
    printf("%d %u %d\n", x, y, y);
    return 0;
}
5 3221225472 -1073741824
You can see the difference.
Modern compilers check printf format specifiers against the data types of the following arguments, so the online compiler may or may not have been able to report this type mismatch with a warning. This is something you may want to look into.
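For example, GCC and Clang do this checking when warnings are enabled (-Wall includes -Wformat); the exact wording of the diagnostic varies, but a sketch of the mismatch and the matching specifier looks like this:
#include <stdio.h>

int main(void) {
    unsigned int x = 1;
    printf("%d\n", x); /* typically diagnosed: %d expects int, argument is unsigned int */
    printf("%u\n", x); /* %u is the matching specifier for unsigned int */
    return 0;
}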
Referring to the printf() manual, it says:
A character that specifies the type of conversion to be applied. The
conversion specifiers and their meanings are:
d, i
The int argument is
converted to signed decimal notation. The precision, if any, gives the
minimum number of digits that must appear; if the converted value
requires fewer digits, it is padded on the left with zeros. The
default precision is 1. When 0 is printed with an explicit precision
0, the output is empty.
So even if the argument is stored in an unsigned representation, it will be treated as its signed int representation and printed; see the following code example:
#include <stdio.h>

int main(){
    signed int x1 = -2147483648;
    unsigned int x2 = -2147483648;
    unsigned long long x3 = -2147483648;
    printf("signed int x1 = %d\n", x1);
    printf("unsigned int x2 = %d\n", x2);
    printf("signed long long x3 = %d\n", x3);
}
and this is the output:
signed int x1 = -2147483648
unsigned int x2 = -2147483648
signed long long x3 = -2147483648
So no matter what the type of the printed variable is, as long as you specify %d as the format specifier, the value will be treated as a signed int and printed.
In the case of an unsigned char, for example:
#include <stdio.h>

int main(){
    unsigned char s = -10;
    printf("s = %d", s);
}
the output is:
s = 246
as the binary representation of unsigned char s = -10 is:
1111 0110
where the MSB is 1, but when it is converted to signed int, the new representation is:
0000 0000 0000 0000 0000 0000 1111 0110
so the MSB no longer holds that 1 bit which would indicate whether the number is positive or negative.
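To see the same wrap-around numerically (assuming an 8-bit unsigned char), the stored value is UCHAR_MAX + 1 - 10 = 246; a small sketch:
#include <stdio.h>
#include <limits.h>

int main(void) {
    unsigned char s = -10;                  /* wraps to 246 with an 8-bit unsigned char */
    printf("s = %d\n", s);                  /* 246: promoted to int before printing */
    printf("UCHAR_MAX + 1 - 10 = %d\n", UCHAR_MAX + 1 - 10); /* also 246 */
    return 0;
}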

Bitwise operation in character

I am curious about a behavior of bit-wise operator of C on Character.
#include <stdio.h>

int main()
{
    int x = 108;
    x = x << 1;
    printf("%d\n", x);

    char y = 108;
    y = y << 1;
    printf("%d", y);
    //printf("%d", y<<1);
    return 0;
}
Here, if I assign it like this, y = y << 1, its output was -40, but when I print it directly, like
printf("%d", y<<1);
its output was 216.
How can I explain this?
Note that there is really no << operation on char types - the operands of << are promoted to (at least) int types, and the result is, similarly, an int.
So, when you do y = y << 1, you are truncating the int result of the operation to a (signed) char, which leaves the most significant bit (the sign bit) set, so it is interpreted as a negative value.
However, when you pass y << 1 directly to printf, the resulting int is left unchanged.
y<<1 produces an int. To get -40, you were implicitly casting it to a char. In your printf case, you'll need to do the cast explicitly: (char)(y<<1)
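A small sketch contrasting the two cases (the -40 assumes a signed 8-bit char, which is the common case):
#include <stdio.h>

int main(void)
{
    char y = 108;
    printf("%d\n", y << 1);         /* 216: y is promoted to int, the result is an int */
    printf("%d\n", (char)(y << 1)); /* -40: explicitly truncated back to char */

    y = y << 1;                     /* the assignment performs the same truncation */
    printf("%d\n", y);              /* -40 again */
    return 0;
}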

Error on casting unsigned int to float

For the following program.
#include <stdio.h>

int main()
{
    unsigned int a = 10;
    unsigned int b = 20;
    unsigned int c = 30;
    float d = -((a*b)*(c/3));
    printf("d = %f\n", d);
    return 0;
}
It is very strange that the output is
d = 4294965248.000000
When I change the magic number 3 in the expression that calculates d to 3.0, I get the correct result:
d = -2000.000000
If I change the type of a, b and c to int, I also get the correct result.
I guess this error is caused by the conversion from unsigned int to float, but I do not know the details of how the strange result is produced.
I think you realize that you are applying the minus to an unsigned int before the assignment to float. If you run the code below, you will most likely get 4294965296:
#include <stdio.h>

int main()
{
    unsigned int a = 10;
    unsigned int b = 20;
    unsigned int c = 30;
    printf("%u", -((a*b)*(c/3)));
    return 0;
}
The -2000 to the right of your equals sign is set up as a signed
integer (probably 32 bits in size) and will have the hexadecimal value
0xFFFFF830. The compiler generates code to move this signed integer
into your unsigned integer x which is also a 32 bit entity. The
compiler assumes you only have a positive value to the right of the
equals sign so it simply moves all 32 bits into x. x now has the
value 0xFFFFF830 which is 4294965296 if interpreted as a positive
number. But the printf format of %d says the 32 bits are to be
interpreted as a signed integer so you get -2000. If you had used
%u it would have printed as 4294965296.
#include <stdio.h>
#include <limits.h>

int main()
{
    float d = 4294965296;
    printf("d = %f\n\n", d);
    return 0;
}
When you convert 4294965296 to float, the number is too long to fit into the fraction (significand) part, so some precision is lost. Because of that loss, you get 4294965248.000000, as I did.
The IEEE-754 floating-point standard is a standard for representing
and manipulating floating-point quantities that is followed by all
modern computer systems.
bit  31   30 ........ 23   22 ..................... 0
      S      EEEEEEEE         MMMMMMMMMMMMMMMMMMMMMMM
The bit numbers are counting from the least-significant bit. The first
bit is the sign (0 for positive, 1 for negative). The following
8 bits are the exponent in excess-127 binary notation; this
means that the binary pattern 01111111 = 127 represents an exponent
of 0, 10000000 = 128 represents 1, 01111110 = 126 represents
-1, and so forth. The mantissa (24 bits with its leading 1 stripped off,
as described above) fits in the remaining 23 bits. Source
As you can see, when converting 4294965296 to float, precision is lost in the low bits (00011000):
11111111111111111111100 00011000 0 <-- 4294965296
11111111111111111111100 00000000 0 <-- 4294965248
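A small sketch that reproduces the rounding (assuming a 32-bit unsigned int and an IEEE-754 float with a 24-bit significand):
#include <stdio.h>

int main(void)
{
    unsigned int u = -2000u;      /* wraps to 4294965296 with a 32-bit unsigned int */
    printf("%u\n", u);            /* 4294965296 */
    printf("%f\n", (float)u);     /* 4294965248.000000: rounded to 24 significant bits */
    return 0;
}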
This is because you use - on an unsigned int. Applying - to an unsigned value wraps around modulo 2^32, which produces the same bit pattern as the two's-complement negative number. Let's print some unsigned integers:
printf("Positive: %u\n", 2000);
printf("Negative: %u\n", -2000);
// Output:
// Positive: 2000
// Negative: 4294965296
Lets print the hex values:
printf("Positive: %x\n", 2000);
printf("Negative: %x\n", -2000);
// Output
// Positive: 7d0
// Negative: fffff830
As you can see, the bit pattern is that of the negative value in two's complement. So the problem comes from using - on an unsigned int, not from casting unsigned int to float.
As others have said, the issue is that you are trying to negate an unsigned number. Most of the solutions already given have you do some form of casting to float such that the arithmetic is done on floating point types. An alternate solution would be to cast the results of your arithmetic to int and then negate, that way the arithmetic operations will be done on integral types, which may or may not be preferable, depending on your actual use-case:
#include <stdio.h>

int main(void)
{
    unsigned int a = 10;
    unsigned int b = 20;
    unsigned int c = 30;
    float d = -(int)((a*b)*(c/3));
    printf("d = %f\n", d);
    return 0;
}
Your whole calculation will be done unsigned so it is the same as
float d = -(2000u);
-2000 in unsigned int (assuming a 32-bit int) is 4294965296.
This gets written into your float d, but as a float cannot store this exact number, it gets stored as 4294965248.
As a rule of thumb you can say that a float has a precision of about 7 significant decimal digits.
What is calculated is 2^32 - 2000 and then floating point precision does the rest.
If you instead use 3.0 this changes the types in your calculation as follows
float d = -((a*b)*(c/3.0));
float d = -((unsigned*unsigned)*(unsigned/double));
float d = -((unsigned)*(double));
float d = -(double);
leaving you with the correct negative value.
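A quick sketch of the corrected expression, confirming that dividing by 3.0 makes the chain double arithmetic and yields the expected negative value:
#include <stdio.h>

int main(void)
{
    unsigned int a = 10;
    unsigned int b = 20;
    unsigned int c = 30;
    float d = -((a*b)*(c/3.0));   /* c/3.0 is a double, so the whole product is double */
    printf("d = %f\n", d);        /* d = -2000.000000 */
    return 0;
}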
you need to cast the ints to floats
float d = -((a*b)*(c/3));
to
float d = -(((float)a*(float)b)*((float)c/3.0));
-((a*b)*(c/3)); is all performed in unsigned integer arithmetic, including the unary negation. Unary negation is well-defined for an unsigned type: mathematically the result is modulo 2^N, where N is the number of bits in unsigned int. When you assign that large number to the float, you encounter some loss of precision; the result, due to its binary magnitude, is the nearest number to the unsigned int that is divisible by 2048.
If you change 3 to 3.0, then c / 3.0 has double type, and the result of a * b is therefore converted to a double before being multiplied. This double is then negated and assigned to a float; since -2000 is exactly representable, there is no precision loss this time.

Hexadecimals in C

I'm confused. Why does a give me 0xFFFFFFA0 but b give me 0xA0 in this program? It's weird.
#include <stdio.h>

int main()
{
    char a = 0xA0;
    int b = 0xA0;
    printf("a = %x\n", a);
    printf("b = %x\n", b);
}
In char a = 0xA0; the type of a is signed by default, and with any signed data type, whether char or int, you have to be careful about the sign bit: if the sign bit is set, the number is negative and is stored in two's-complement form.
char a = 0xA0; /* only 1 byte for a but since sign bit is set,
it gets copied into remaining bytes also */
a => 1010 0000
|
this sign bit gets copied
1111 1111 1111 1111 1111 1111 1010 0000
f f f f f f A 0
In the case of int b = 0xA0; the sign bit (bit 31) is 0, so whatever it contains, i.e. 0xA0, will be printed.
Let's take this step-by-step.
char a = 0xA0;
0xA0 is an integer constant with a value of 160 and type int.
In OP's case, a char is encoded like a signed char with an 8-bit range. 160 is more than the maximum value of such a char, and assigning an out-of-range value to a signed integer type is implementation-defined behavior. In OP's case, the value "wrapped around" and a took on the value of 160 - 256, or -96.
// Try
printf("a = %d\n", a);
With printf() (a variadic function), char a is passed to the ... part and so goes through the usual integer promotions to an int, retaining the same value.
printf("a = %x\n", a);
// is just like
printf("a = %x\n", -96);
printf("a = %x\n", a);
With printf(), "%x" expects an unsigned int, or an int with a value in the non-negative range. With an int of value -96 it is neither, and so the output is undefined behavior.
A typical result of the undefined behavior in this case is to interpret the passed bit pattern as an unsigned int. The bit pattern of int -96, as a 32-bit int, is 0xFFFFFFA0.
Moral of the story:
Enable compiler warnings. A good compiler will warn about both char a = 0xA0; and printf("a = %x\n", a);
Do not rely on undefined behavior.
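A well-defined way to get the 0xA0 you were probably after is to convert to unsigned char before printing (shown here as a sketch; the -96 assumes char is signed on your implementation):
#include <stdio.h>

int main(void)
{
    char a = 0xA0;                           /* typically -96 where char is signed */
    printf("a = %x\n", (unsigned char)a);    /* a0: the value is now non-negative */
    printf("a = %hhx\n", (unsigned char)a);  /* a0: C99 hh length modifier */
    return 0;
}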

Variable size allocated during assign operation

#include <stdio.h>

int main()
{
    int a;
    char b, c;
    b = 0x32;
    c = 0x24;
    a = b*256 + c;
    printf("a=%#x\n", a);
    return 0;
}
Output:
a=0x3224
The size of b is 1 byte; b*256 is an overflow for a char variable. Does the compiler allocate 2 different 16-bit registers for this operation? int is 16 bits here.
In the multiplication by the literal 256 on the following line, the char is promoted to an int before multiplication.
a = b*256 + c;
No, it doesn't overflow. Instead, the contents of variable b (as well as c) are promoted to type int.
The C language never performs arithmetic computations within the domain of char, short, or any other type that is smaller than int. Operands of arithmetic operators are promoted to int before the actual computation begins (assuming int can represent all values of the original type). So, your
a = b * 256 + c;
is actually interpreted by the compiler as
a = (int) b * 256 + (int) c;
In other words, the expression b *= 256 would indeed overflow the char variable on the assignment back to b, but the expression b * 256 does not overflow by itself.
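A small sketch that makes the promotion visible: the type of b * 256 is int, which you can see from its size (the exact sizes are platform-dependent; on the OP's platform int is 16 bits, on a typical desktop it is 32):
#include <stdio.h>

int main(void)
{
    char b = 0x32, c = 0x24;
    int a = b*256 + c;
    printf("sizeof b = %zu, sizeof (b*256) = %zu\n",
           sizeof b, sizeof (b*256));   /* e.g. 1 and 4 on a 32-bit-int platform */
    printf("a = %#x\n", a);             /* a = 0x3224 */
    return 0;
}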
