Basically, what happens on a 32-bit system when I do this:
uint32_t test (void)
{
    uint32_t myInt;
    myInt = ((0xFFFFFFFF * 0xFFFFFFFF) % 256u);
    return myInt;
}
Let's assume that int is 32 bits.
0xFFFFFFFF will have type unsigned int. There are special rules that explain this: an unsuffixed hexadecimal constant gets the first type from the list int, unsigned int, long int, unsigned long int (and the long long variants) in which its value fits. It doesn't fit in int but does fit in unsigned int, so it ends up as unsigned int.
0xFFFFFFFF * 0xFFFFFFFF will first go through the usual arithmetic conversions, but since both sides are unsigned int, nothing happens. The result of the multiplication is 0xfffffffe00000001, which is reduced to unsigned int by taking it modulo 2^32, resulting in the value 1 with type unsigned int.
(unsigned int)1 % 256u is equal to 1 and has type unsigned int. Usual arithmetic conversions apply here too, but again, both operands are unsigned int so nothing happens.
The result is converted to uint32_t, but it's already unsigned int which has the same range.
However, let's instead suppose that int is 64 bits.
0xFFFFFFFF will have type int.
0xFFFFFFFF * 0xFFFFFFFF will overflow! This is undefined behavior. At this point we stop trying to figure out what the program does, because it could do anything. Maybe the compiler would decide not to emit code for this function, or something equally absurd.
This would happen in a so-called "ILP64" or "SILP64" architecture. These architectures are rare but they do exist. We can avoid these portability problems by using 0xFFFFFFFFu.
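For instance, a sketch of the same function with the suffix added (using the <stdint.h> types, as in the question):

#include <stdint.h>

uint32_t test (void)
{
    /* 0xFFFFFFFFu is unsigned on every platform. If unsigned int is
       32 bits the product wraps modulo 2^32 to 1; if it is 64 bits the
       full product 0xFFFFFFFE00000001 is computed. Either way the
       final result of % 256u is 1. */
    uint32_t myInt = (0xFFFFFFFFu * 0xFFFFFFFFu) % 256u;
    return myInt;
}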
Unsigned integers cannot overflow in the undefined-behavior sense: a value too large for the type wraps, i.e. it is reduced modulo UINT32_MAX + 1. That is what happens here too, provided the operands are unsigned, which the U or u suffix on the integer literals guarantees. Without the suffix, on a platform where the literals end up signed (you didn't specify anything), the multiplication causes signed integer overflow, which is undefined behavior.
Back to the explanation: when you multiply here (with the operands ensured to be unsigned), the result wraps modulo UINT32_MAX + 1, meaning that a result too big for uint32_t is reduced modulo UINT32_MAX + 1. We then apply the modulo operation with 256u, and that result is stored in uint32_t and returned from the function. (Note that if the multiplication overflows, it is first reduced modulo UINT_MAX + 1, the modulus of the unsigned int type in which the arithmetic is done.)
GCC version 5.4.0
Ubuntu 16.04
I have noticed some weird behavior with the right shift in C, depending on whether or not I store the value in a variable first.
This code snippet prints 0xf0000000, the behavior I expected:
#include <stdio.h>

int main() {
    int x = 0x80000000;
    printf("%x", x >> 3);
}
The following two code snippets print 0x10000000, which is very weird in my opinion: it looks like a logical shift is being performed on a negative number.
1.
#include <stdio.h>

int main() {
    int x = 0x80000000 >> 3;
    printf("%x", x);
}
2.
#include <stdio.h>

int main() {
    printf("%x", (0x80000000 >> 3));
}
Any insight would be really appreciated. I do not know if it is a specific issue with my personal computer, in which case it can't be replicated, or if it is just a behavior of C.
Quoting from https://en.cppreference.com/w/c/language/integer_constant, for a hexadecimal integer constant without a suffix:
The type of the integer constant is the first type in which the value can fit, from the list of types which depends on which numeric base and which integer-suffix was used.
int
unsigned int
long int
unsigned long int
long long int (since C99)
unsigned long long int (since C99)
Also, later
There are no negative integer constants. Expressions such as -1 apply the unary minus operator to the value represented by the constant, which may involve implicit type conversions.
So, if an int has 32 bits on your machine, 0x80000000 has the type unsigned int, as it can't fit in an int and integer constants can't be negative.
The statement
int x = 0x80000000;
converts the unsigned int to an int in an implementation-defined way, but the statement
int x = 0x80000000 >> 3;
performs the right shift on the unsigned int before converting it to an int, so the results you see are different.
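A minimal sketch that makes the order of operations explicit (assuming 32-bit int; the signed shift result is implementation-defined, but a typical two's complement machine prints f0000000 and then 10000000):

#include <stdio.h>

int main(void)
{
    unsigned int u = 0x80000000;            /* the constant's real type and value */
    int x = u;                              /* implementation-defined conversion */
    printf("%x\n", (unsigned int)(x >> 3)); /* shift after the conversion */
    printf("%x\n", u >> 3);                 /* shift before the conversion */
    return 0;
}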
EDIT
Also, as M.M noted, the format specifier %x requires an unsigned integer argument and passing an int instead causes undefined behavior.
Right shift of a negative integer has implementation-defined behavior, so when shifting a negative number right you can't "expect" anything. It is just whatever your implementation does; it is not weird.
6.5.7/5 [...] If E1 has a signed type and a negative value, the resulting value is implementation-defined.
It may also invoke the UB
6.5.7/4 [...] If E1 has a signed type and nonnegative value, and E1 × 2^E2 is representable in the result type, then that is the resulting value; otherwise, the behavior is undefined.
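If you need a right shift that is fully defined for negative values, one well-known trick is to complement first so the shift always operates on a non-negative value. A sketch, assuming two's complement representation (which C23 finally mandates):

/* Arithmetic right shift that is well defined even for negative x:
   for x < 0, ~x is non-negative, so ~x >> n is fully defined, and
   complementing back yields floor(x / 2^n). */
int asr(int x, int n)
{
    return x < 0 ? ~(~x >> n) : x >> n;
}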
As noted by @P__J__, right shift of a signed value is implementation-defined, so you should not rely on it being consistent on different platforms.
As for your specific test, which is on a single platform (possibly 32-bit Intel or another platform that uses two's complement 32-bit representation of integers), but still shows a different behavior:
GCC folds constant expressions at compile time. The statement x = 0x80000000 >> 3 will not be compiled into code that does a right shift at run time; instead, the compiler figures out that both operands are constant and folds the expression into x = 0x10000000. And in that expression the literal 0x80000000 is NOT a negative number: it has type unsigned int, so it is the positive integer 2^31 and the shift is a logical one.
On the other hand, x = 0x80000000 tries to store the value 2^31 into x, but a 32-bit int cannot represent the positive integer 2^31 that you gave as a literal: the value is beyond the range representable by a 32-bit two's complement signed integer, so it is converted in an implementation-defined way. In practice the bit pattern is kept and the high-order bit ends up in the sign bit, though you don't get a warning or error. Then, when you use x >> 3, the operation is performed at run time (not by the compiler) with 32-bit signed arithmetic, which sees x as a negative number.
I have an unsigned int that actually stores a signed int, and the signed int ranges from -128 to 127.
I would like to store this value back in the unsigned int so that I can simply apply a mask 0xFF and get the signed char.
How do I do the conversion ?
i.e.
unsigned int foo = -100;
foo = (char)foo;
char bar = foo & 0xFF;
assert(bar == -100);
The & 0xFF operation will produce a value in the range 0 to 255. It's not possible to get a negative number this way. So, even if you use & 0xFF somewhere, you will still need to apply a conversion later to get to the range -128 to 127.
In your code:
char bar = foo & 0xFF;
there is an implicit conversion to char. This relies on implementation-defined behaviour, but it will work on all but the most esoteric of systems. The most common implementation definition is the inverse of the conversion from char to unsigned char, i.e. the value wraps modulo 256.
(Your previous line foo = (char)foo; should be removed).
However,
char bar = foo;
would produce exactly the same effect (again, except for on those esoteric systems).
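Putting that together, a minimal sketch of the round trip (the conversion back to char is implementation-defined, but on a typical system with a signed 8-bit char the assert holds):

#include <assert.h>

int main(void)
{
    unsigned int foo = -100;  /* well defined: stores UINT_MAX - 99 */
    char bar = foo & 0xFF;    /* 0x9C, converted to char: typically -100 */
    assert(bar == -100);
    return 0;
}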
Since the value stored in unsigned int foo corresponds to a signed value within the boundaries of -128 to 127, the implicit conversion will work for this case. But if unsigned int foo held a bigger value, you would lose the high-order bytes when storing it in a char variable and get unexpected results in your program.
Answering for C,
If you have an unsigned int whose value was set by assignment of a value of type char (where char happens to be a signed type) or of type signed char, where the assigned value was negative, then the stored value is the arithmetic sum of the assigned negative value and one more than UINT_MAX. This will be far beyond the range of values representable by (signed) char on any C system I have ever encountered. If you convert that value back to (signed) char, whether implicitly or via a cast, "either the result is implementation-defined, or an implementation-defined signal is raised" (C2011, 6.3.1.3/3).
Converting back to the original char value in a way that avoids implementation-defined behavior is a bit tricky (but relying on implementation-defined behavior may afford much easier approaches). Certainly, masking off all but the 8 lowest-order value bits does not do the trick, as it always gives you a positive value. Also, it assumes that char is 8 bits wide, which, in principle, is not guaranteed. It does not necessarily even give you the correct bit pattern, as C permits negative integers to be represented in any of three different ways.
Here's an approach that will work on any conforming C system:
unsigned int foo = SOME_SIGNED_CHAR_VALUE;
signed char bar;
/* ... */
if (foo <= SCHAR_MAX) {
    /* foo's value is representable as a signed char */
    bar = foo;
} else {
    /* mask off the highest-order value bits to yield a value that fits in an int */
    int foo2 = foo & INT_MAX;
    /* reverse the conversion to unsigned int, as if unsigned int had the same
       number of value bits as int; the other bits are already accounted for */
    bar = (foo2 - INT_MAX) - 1;
}
That relies only on characteristics of integer representation and conversion that C itself defines.
Don't do it.
Casting to a smaller size may truncate the value. Casting from signed to unsigned or the other way around may produce a wrong value (e.g. 255 -> -1).
If you have to do calculations with different data types, pick one common type, preferably signed and long int (32-bit), and check the boundaries before casting down (to a smaller size).
Signed helps you detect underflow (e.g. when a result goes below 0); long int (or simply int, which means the natural word length) suits 32-bit and 64-bit machines and is big enough for most purposes.
Also try to avoid mixed types in formulas, especially when they contain division (/).
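As a sketch of the kind of boundary check meant above (the function name is just illustrative):

#include <limits.h>

/* Narrow a long to int only after checking that the value fits;
   saturate at the boundaries instead of silently truncating. */
int narrow_to_int(long value)
{
    if (value > INT_MAX) return INT_MAX;
    if (value < INT_MIN) return INT_MIN;
    return (int)value;  /* now guaranteed to be in range */
}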
I was going through some examples and unsigned is used in all of them. Is it safe to use the int type, if used properly? As in the example below, the address 0x12345678 will have 0xFFFFFFFF written to it irrespective of whether ptr is defined as signed or unsigned.
volatile unsigned int* ptr = (volatile unsigned int* ) 0x12345678;
*ptr = 0xFFFFFFFF;
Why is it safe to use unsigned instead of signed for register access?
I think that unsigned C arithmetic is closer to machine-register arithmetic. This is because the C language definition says that if a signed integer arithmetic operation produces a number that does not fit the given type (overflow), the result of the operation is undefined, so it can be anything. Unsigned arithmetic, by contrast:
A computation involving unsigned operands can never overflow, because a result that cannot be represented by the resulting unsigned integer type is reduced modulo the number that is one greater than the largest value that can be represented by the resulting unsigned integer type.
which is the same as happens in machine registers.
Your example is another case, the line:
*ptr = 0xFFFFFFFF;
creates a constant 0xFFFFFFFF of type unsigned int (assuming 32-bit int). The type is unsigned rather than int because the value is bigger than the maximum signed integer value. If the variable ptr were of type int *, the compiler would have to convert the unsigned value to signed before the assignment; since the value is out of range for int, the result of that conversion is implementation-defined. I do not know of a compiler that would generate any extra arithmetic when converting from unsigned to int, though.
If ptr were defined as int * and you wanted to be strictly correct, your line should be:
*ptr = -1;
Most hardware registers are either not arithmetic objects (they may be bitfields rather than semantically numeric), or if they do represent a number it is unusual for them to be semantically signed (a negative value would make no sense for a baud-rate divisor, for example).
So often, while a signed type may make no difference at the low level, it makes no semantic sense at the high level.
Critically perhaps, the >> operator, often used in processing hardware register values, has implementation-defined behaviour for signed types; so to avoid mixed-sign operations and the associated implicit conversions introducing errors, you should use unsigned in most cases.
I would suggest that unless the hardware register is semantically signed numeric, then you should use unsigned consistently for not just the registers, but also operands in expressions with register operands.
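As a sketch (the register address and field layout below are made up for illustration), typical field extraction then stays unsigned throughout:

#include <stdint.h>

#define UART_CTRL (*(volatile uint32_t *)0x40001000u) /* hypothetical register */
#define BAUD_DIV_SHIFT 8u
#define BAUD_DIV_MASK  0xFFFu

uint32_t read_baud_div(void)
{
    /* with unsigned operands, >> is a logical shift and the mask
       cannot trigger any sign-related surprises */
    return (UART_CTRL >> BAUD_DIV_SHIFT) & BAUD_DIV_MASK;
}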
The answer is simple: the value you use is unsigned (meaning that the literal 0xFFFFFFFF is unsigned). You could assign it to any data type that can hold it; even a float or double could be used. But you want to store an unsigned value and see it as unsigned (for example, the %x specifier of printf, which prints a hex value, expects an unsigned argument). So, if your data is unsigned throughout the process, why not make the data type of its container unsigned too?
The processor register you write to doesn't know or care how your compiler defines the value, so long as the correct value is written. I upvoted the question because you tried to find an answer before posting.
As stated before, an unsigned or signed value is stored the same way in the hardware register. It is the higher-level C code that interprets this value as either a signed or an unsigned number.
I am reverse engineering some old C, running under Win95 (yes, in production), which appears to have been compiled with a Borland compiler (I don't have the tool chain).
There is a function which does (among other things) something like this:
static void unknown(int *value)
{
    int v = *value;
    v -= 0x8000;
    *value = v;
}
I can't quite work out what this does. I assume int in this context is signed 32-bit. I think 0x8000 would be an unsigned 32-bit int, and outside the range of a signed 32-bit int. (Edit: this is wrong, it is outside the range of a signed 16-bit int.)
I am not sure if one of these would be converted first, how the conversion would handle overflow, and/or how the subtraction would handle the overflow.
I could try on a modern system, but I am also unsure if the results would be the same.
Edit for clarity:
1: 'v -= 0x8000;' is straight from the original code; this is what makes little sense to me. v is defined as an int.
2: I have the code, this is not from asm.
3: The original code is very, very bad.
Edit: I have the answer! The answer below wasn't quite right, but it got me there (fix up and I'll mark it as the answer).
The data in v comes from an ambiguous source, which actually seems to be sending unsigned 16-bit data, but it is being stored as a signed int. Later on in the program all values are converted to floats and normalised to an average zero point, so the actual values don't matter, only their order. Because we are looking at an unsigned int as a signed one, values over 32767 are incorrectly placed below 0, so this hack leaves the value as signed but swaps the negative and positive halves around (without changing the order within each half). The end result is that all numbers have the same order (but different values) as if they had been unsigned in the first place.
(...and this is not the worst code example in this program)
In Borland C 3.x, int and short were the same: 16 bits. long was 32-bits.
A hex literal has the first type in which the value can be represented: int, unsigned int, long int or unsigned long int.
In the case of Borland C, 0x8000 has the decimal value 32768, which won't fit in an int but will in an unsigned int. So unsigned int it is.
The statement v -= 0x8000 ; is identical to v = v - 0x8000 ;
On the right-hand side, the int value v is implicitly converted to unsigned int, per the rules; the arithmetic operation is performed, yielding an rvalue of type unsigned int. That unsigned int is then, again per the rules, implicitly converted back to the type of the lvalue.
So, by my estimation, the net effect is to toggle the sign bit, something that could be done more easily and clearly via simple bit-twiddling: *value ^= 0x8000 ;.
There is possibly a clue on this page http://www.ousob.com/ng/borcpp/nga0e24.php - Guide to Borland C++ 2.x ( with Turbo C )
There is no such thing as a negative numeric constant. If a minus sign precedes a numeric constant it is treated as the unary minus operator, which, along with the constant, constitutes a numeric expression. This is important with -32768, which, while it can be represented as an int, actually has type long int, since 32768 has type long. To get the desired result, you could use (int) -32768, 0x8000, or 0177777.
This implies the use of two's complement for negative numbers. Interestingly, the two's complement of 0x8000 is 0x8000 itself (as the value +32768 does not fit in the range of a signed 2-byte int).
So what does this mean for your function? Bit wise, this has the effect of toggling the sign bit, here are some examples:
f(0) = f(0x0000) = 0x8000 = -32768
f(1) = f(0x0001) = 0x8001 = -32767
f(0x8000) = 0x0000 = 0
f(0x7fff) = 0xffff = -1
It seems like this could be represented as val ^= 0x8000, but perhaps the XOR operator was not implemented in Borland back then?
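A quick check, as a sketch using the fixed-width types of a modern compiler to stand in for Borland's 16-bit int (the narrowing casts are implementation-defined but wrap on two's complement targets), shows that the subtraction and the XOR agree:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const int16_t inputs[] = { 0, 1, INT16_MIN, INT16_MAX };
    for (size_t i = 0; i < sizeof inputs / sizeof inputs[0]; i++) {
        int16_t sub  = (int16_t)(inputs[i] - 0x8000); /* the original hack */
        int16_t flip = (int16_t)(inputs[i] ^ 0x8000); /* sign-bit toggle */
        printf("%6d -> %6d %6d\n", inputs[i], sub, flip);
    }
    return 0;
}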
My apologies if the question seems weird. I'm debugging my code and this seems to be the problem, but I'm not sure.
Thanks!
It depends on what you want the behaviour to be. An int cannot hold many of the values that an unsigned int can.
You can cast as usual:
int signedInt = (int) myUnsigned;
but this will cause problems if the unsigned value is greater than the maximum value an int can hold. This means half of the possible unsigned values will result in erroneous behaviour unless you specifically watch out for it.
You should probably reexamine how you store values in the first place if you're having to convert for no good reason.
EDIT: As mentioned by ProdigySim in the comments, the maximum value is platform dependent. But you can access it with INT_MAX and UINT_MAX.
For the usual 4-byte types:
4 bytes = (4*8) bits = 32 bits
If all 32 bits are used, as in unsigned, the maximum value will be 2^32 - 1, or 4,294,967,295.
A signed int effectively sacrifices one bit for the sign, so the maximum value will be 2^31 - 1, or 2,147,483,647. Note that this is half of the other value.
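You can let the implementation report its own limits; a minimal sketch:

#include <stdio.h>
#include <limits.h>

int main(void)
{
    printf("INT_MAX  = %d\n", INT_MAX);  /* 2147483647 with 32-bit int */
    printf("UINT_MAX = %u\n", UINT_MAX); /* 4294967295 with 32-bit int */
    return 0;
}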
An unsigned int can be converted to signed (or vice versa) with a simple cast, as shown below:
unsigned int z;
int y = 5;
z = (unsigned int)y;
Though not targeted at the question, you might want to read the following links:
signed to unsigned conversion in C - is it always safe?
performance of unsigned vs signed integers
Unsigned and signed values in C
What type-conversions are happening?
IMHO this question is an evergreen. As stated in various answers, the assignment of an unsigned value that is not in the range [0, INT_MAX] is implementation-defined and might even raise a signal. If the unsigned value is considered to be a two's complement representation of a signed number, the most portable way is probably the one shown in the following code snippet:
#include <limits.h>
unsigned int u;
int i;
if (u <= (unsigned int)INT_MAX)
    i = (int)u;        /*(1)*/
else if (u >= (unsigned int)INT_MIN)
    i = -(int)~u - 1;  /*(2)*/
else
    i = INT_MIN;       /*(3)*/
Branch (1) is obvious and cannot invoke overflow or traps, since it is value-preserving.
Branch (2) goes through some pains to avoid signed integer overflow by taking the one's complement of the value via bit-wise NOT, casting it to int (which cannot overflow now), negating the value and subtracting one, which can also not overflow here.
Branch (3) provides the poison we have to take on one's complement or sign/magnitude targets, because the signed integer representation range is smaller than the two's complement representation range.
This is likely to boil down to a simple move on a two's complement target; at least I've observed such with GCC and CLANG. Also branch (3) is unreachable on such a target -- if one wants to limit the execution to two's complement targets, the code could be condensed to
#include <limits.h>
unsigned int u;
int i;
if (u <= (unsigned int)INT_MAX)
    i = (int)u;        /*(1)*/
else
    i = -(int)~u - 1;  /*(2)*/
The recipe works with any signed/unsigned type pair, and the code is best put into a macro or inline function so the compiler/optimizer can sort it out. (In which case rewriting the recipe with a ternary operator is helpful. But it's less readable and therefore not a good way to explain the strategy.)
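For example, a sketch of the inline-function form of the two's-complement-only variant (the function name is just illustrative):

#include <limits.h>

/* Reinterpret u as the signed value with the same two's complement
   bit pattern, without an implementation-defined conversion. */
static inline int utoi(unsigned int u)
{
    return u <= (unsigned int)INT_MAX ? (int)u : -(int)~u - 1;
}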
And yes, some of the casts to unsigned int are redundant, but they might help the casual reader, and some compilers issue warnings on signed/unsigned compares, because the implicit conversion causes some non-intuitive behavior by language design.
If you have a variable unsigned int x;, you can convert it to an int using (int)x.
It's as simple as this:
unsigned int foo;
int bar = 10;
foo = (unsigned int)bar;
Or vice versa...
If an unsigned int and a (signed) int are used in the same expression, the signed int gets implicitly converted to unsigned. This is a rather dangerous feature of the C language, and one you therefore need to be aware of. It may or may not be the cause of your bug. If you want a more detailed answer, you'll have to post some code.
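The classic symptom is a comparison that silently goes unsigned; a minimal sketch:

#include <stdio.h>

int main(void)
{
    int s = -1;
    unsigned int u = 1;
    /* s is converted to unsigned int and becomes UINT_MAX,
       so the mathematically obvious comparison is false */
    if (s < u)
        printf("-1 < 1u\n");
    else
        printf("-1 >= 1u: the signed operand was converted to unsigned\n");
    return 0;
}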
Some explanation from C++ Primer, 5th Edition, page 35:
If we assign an out-of-range value to an object of unsigned type, the result is the remainder of the value modulo the number of values the target type can hold.
For example, an 8-bit unsigned char can hold values from 0 through 255, inclusive. If we assign a value outside the range, the compiler assigns the remainder of that value modulo 256.
unsigned char c = -1; // assuming 8-bit chars, c has value 255
If we assign an out-of-range value to an object of signed type, the result is undefined. The program might appear to work, it might crash, or it might produce garbage values.
Page 160:
If any operand is an unsigned type, the type to which the operands are converted depends on the relative sizes of the integral types on the machine.
...
When the signedness differs and the type of the unsigned operand is the same as or larger than that of the signed operand, the signed operand is converted to unsigned.
The remaining case is when the signed operand has a larger type than the unsigned operand. In this case, the result is machine dependent. If all values in the unsigned type fit in the larger type, then the unsigned operand is converted to the signed type. If the values don't fit, then the signed operand is converted to the unsigned type.
For example, if the operands are long and unsigned int, and int and long have the same size, the long will be converted to unsigned int. If the long type has more bits, then the unsigned int will be converted to long.
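A small sketch that probes which way the machine-dependent case goes (it prints the first line on LP64 systems, where long is wider than unsigned int, and the second where they have the same size):

#include <stdio.h>

int main(void)
{
    long l = -1;
    unsigned int u = 1;
    if (l < u)
        printf("unsigned int was converted to the wider long\n");
    else
        printf("long was converted to unsigned: -1 became a huge value\n");
    return 0;
}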
I found reading this book very helpful.