Can anyone explain how this answer is calculated? [closed] - c

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 1 year ago.
#include <stdio.h>
int main()
{
    printf("%x", -1 << 1);
    getchar();
    return 0;
}
Output:
Output is dependent on the compiler. For 32 bit compiler it would be fffffffe and for 16 bit it would be fffe.
This is from GeeksforGeeks.

-1 is a signed integer. A left shift on a signed integer with a negative value has undefined behavior according to the formal definition of the C language. This is part of the general rule that signed operations have undefined behavior when they overflow: the sign bit is set and it must shift, but there's no room for it to go anywhere, so it overflows.
In practice, almost all platforms use two's complement representation for signed integers, and a left shift on a signed integer is treated as if the memory contained an unsigned integer. However, beware that compilers sometimes take advantage of the fact that this is undefined behavior to optimize in surprising ways.
-1 is all-bits-one, so a left shift drops the topmost bit and adds a 0 bit to the bottom. The result is 111…1110 in binary. If unsigned int is a 16-bit type, that's fffe in hexadecimal. If unsigned int is a 32-bit type, that's fffffffe. When that memory is read as a signed int, the value is -2 either way.
The %x specifier requires an unsigned int as an argument. Passing the signed version of the type is OK: it is converted to the unsigned value. The result of the conversion is 2^N - 2, where N is the number of bits in an unsigned int: as above, that's 0xfffe if N=16 and 0xfffffffe if N=32.

It seems to me this answer is just your 32-bit compiler (implicitly) treating -1 as a 32-bit signed integer, while the 16-bit compiler treats -1 as a "good old" 16-bit int.
Bad answer. A much better answer is given above by Gilles 'SO- stop being evil'. I am just editing this so it is not absolute nonsense.
As commented above, -1 is a signed integer, and left-shifting (see here) a negative signed integer causes undefined behavior because the sign bit has nowhere to go. On all the systems I have worked with, that behavior is an overflow.
The reason the results differ between 16-bit and 32-bit compilers may well just be that 16-bit compilers use 16-bit integers, hence the 16-bit result fffe. I attached some code for you to try it out if you find it useful.
Attached dirty test.
/* dirty_test.c
 * This program left bit-shifts a signed integer and
 * prints the byte content of the result to stdout.
 *
 * On *nix-like systems, compile with
 *
 *     cc dirty_test.c -o dirty_test.x
 */
#include <stdio.h>

int main(void)
{
    int a;

    // assigning a negative value causes the bit shift
    // to produce an overflow
    a = -1;
    printf("bytes before shift: %08x\n", a);
    printf("bytes after shift:  %08x\n", a << 1);
    return 0;
}

Related

Why can't I use ~0 to initialize signed char? [closed]

Closed. This question is not reproducible or was caused by typos. It is not currently accepting answers.
This question was caused by a typo or a problem that can no longer be reproduced. While similar questions may be on-topic here, this one was resolved in a way less likely to help future readers.
Closed 3 years ago.
I wrote like this:
#include <stdio.h>
#include <limits.h>
int main(void)
{
    signed char i = ~0;
    printf("%zu\n", i);
    return 0;
}
But the result is 4294967295 instead of -1. Why does this happen?
When you use a signed char value in an expression, it is automatically converted to int. Then, when you print a signed int using a format specifier %zu, which is for formatting an unsigned size_t, you get erroneous results because of the mismatch.
To print a signed char correctly, use %hhd.
Additionally, you should understand that while a signed char value of −1 might be represented with the eight bits 11111111 (in eight-bit two’s complement), when it is converted to int, the result is the 32 bits 11111111111111111111111111111111 (which represents −1 using 32-bit two’s complement). When those bits are interpreted as a 32-bit unsigned integer, the result is 4294967295. That explains why “4294967295” may be printed rather than the “255” that would result from interpreting the eight-bit 11111111 as an unsigned integer.
You are printing the value of i using %zu. In unsigned format, -1 = 2^32 - 1, which equals 4294967295. The compiler uses the formula 2^(size of type in bits) - n to convert a signed value -n to unsigned.

Unsigned integer underflow in C

I've seen multiple questions on the site addressing unsigned integer overflow/underflow.
Most of the questions about underflow ask about assigning a negative number to an unsigned integer; what's unclear to me is what happens when an unsigned int is subtracted from another unsigned int e.g. a - b where the result is negative. The relevant part of the standard is:
A computation involving unsigned operands can never overflow, because a result that cannot be represented by the resulting unsigned integer type is reduced modulo the number that is one greater than the largest value that can be represented by the resulting type.
In this context how do you interpret "reduced"? Does it mean that UINT_MAX+1 is added to the negative result until it is >= 0?
I see that the main point is addressed by this question (which basically says that the standard chooses to talk about overflow but the main point about modulo holds for underflow too) but it's still unclear to me:
Say the result of a-b is -1; According to the standard, the operation -1%(UINT_MAX+1) will return -1 (as is explained here); so we're back to where we started.
This may be overly pedantic, but does this modulo mean a mathematical modulo as opposed to C's computational modulo?
Firstly, a result that is below the minimum value of the given integer type is not called "underflow" in C. The term "underflow" is reserved for floating-point types and means something completely different. Going out of range of an integer type is always overflow, regardless of which end of the range you cross. So the fact that you don't see the language specification talking about "underflow" does not really mean anything in this case.
Secondly, you are absolutely right about the meaning of the word "reduced". The final value is defined by adding UINT_MAX+1 to (or subtracting it from) the "mathematical" result until it returns to the range of unsigned int. This is the same as the Euclidean "modulo" operation.
The part of the standard you posted talks about overflow, not underflow.
"Does it mean that UINT_MAX+1 is added to the negative result until it is >= 0?"
You can think that's what happens. Abstractly the result will be the same. A similar question has already been asked about it. Check this link: Question about C behaviour for unsigned integer underflow for more details.
Another way to think of it: -1 is, in principle, of type int (4 bytes, with all bits set to 1). When you tell the program to interpret all those 1 bits as an unsigned int, its value is interpreted as UINT_MAX.
Under the hood, addition and subtraction are bitwise and sign-independent: the generated code can use the same instructions whether the operands are signed or not. It is other operations that interpret the result, for example a > 0. Do the bitwise add or subtract and that tells you the answer: in eight bits, 0 - 1 = 0b11111111, and the bit pattern is the same regardless of sign. Only other operations see the answer as -1 for signed types and 0xFF for unsigned types. The standard describes this behaviour, but I always find it easiest to remember how it works and deduce the consequences for the code I am writing.
signed int adds(signed int a, signed int b)
{
    return a + b;
}
unsigned int addu(unsigned a, unsigned b)
{
    return a + b;
}
int main() {
    return 0;
}
->
adds(int, int):
    lea eax, [rdi+rsi]
    ret
addu(unsigned int, unsigned int):
    lea eax, [rdi+rsi]
    ret
main:
    xor eax, eax
    ret

How does hexadecimal to %x work?

I am learning in C and I got a question regarding this conversion.
short int x = -0x52ea;
printf ( "%x", x );
output:
ffffad16
I would like to know how this conversion works because it's supposed to be on a test and we won't be able to use any compilers. Thank you
I would like to know how this conversion works
It is undefined behavior (UB)
short int x = -0x52ea;
0x52ea is a hexadecimal constant. It has the value 52EA in base 16, or 21,226 in decimal. It has type int as it fits in an int, even if int were 16-bit. OP's int is evidently 32-bit.
- negates the value to -21,226.
The value is assigned to a short int, which can encode -21,226, so there are no special issues with assigning this int to a short int.
printf("%x", x );
short int x is passed to a ... (variadic) function, so it goes through the default argument promotions and becomes an int. So an int with the value -21,226 is passed.
"%x" used with printf(), expects an unsigned argument. Since the type passed is not an unsigned (and not an int with a non-negative value - See exception C11dr §6.5.2.2 6), the result is undefined behavior (UB). Apparently the UB on your machine was to print the hex pattern of a 32-bit 2's complement of -21,226 or FFFFAD16.
If the exam result is anything but UB, just smile and nod and realize the curriculum needs updating.
The point here is that when a number is negative, it is structured in a completely different way.
1 in 16-bit hexadecimal is 0001; -1 is ffff. The most significant bit (8000) indicates that the number is negative (assuming a signed integer), which is why it can only go as high as 32767 (7fff) and as low as -32768 (8000).
Basically, to transform from positive to negative you invert all bits and add 1: 0001 inverted is fffe, +1 = ffff.
This is a convention called Two's complement and it's used because it's quite trivial to do arithmetic using bitwise operations when you use it.

Why does -0x80000000 + -0x80000000 == 0? [duplicate]

This question already has answers here:
Integer overflow concept
(2 answers)
Integer overflow and undefined behavior
(4 answers)
Closed 6 years ago.
While reading a book about programming tricks I saw that -0x80000000 + -0x80000000 = 0. This didn't make sense to me, so I wrote the quick C program below to test, and indeed the answer is 0:
#include <stdio.h>
int main()
{
    int x = -0x80000000;
    int y = -0x80000000;
    int z = x + y;
    printf("Z is: %d", z);
    return 0;
}
Could anyone shed any light as to why? I saw something about an overflow, but I can't see how an overflow causes 0 rather than an exception or other error. I get no warning or anything.
What's happening here is signed integer overflow, which is undefined behavior: the standard does not define the result when a signed computation overflows.
In practice however, most machine use 2's complement representation for signed integers, and this particular program exploits that.
0x80000000 is an unsigned integer constant (it does not fit in a 32-bit int, so it takes type unsigned int). The - negates it, and the result, converted back to int, is the smallest value a signed 32-bit int can hold. The hexadecimal representation of this number happens to be 0x80000000.
When adding numbers in 2's complement representation, it has the feature that you don't need to worry about the sign. They are added exactly the same way as unsigned numbers.
So when we add x and y, we get this:
0x80000000
+ 0x80000000
-------------
0x100000000
Because an int on your system is 32-bit, only the lowest 32 bits are kept. And the value of those bits is 0.
Again note that this is actually undefined behavior. It works because your machine uses 2's complement representation for signed integers and int is 32-bit. This is common for most machines / compilers, but not all.
What you're seeing is a lot of implementation-defined behavior, very likely triggering undefined behavior at runtime. More than that is not possible to know without details about your and the book writer's architectures.
The result isn't meaningful without additional information. If you want a definite answer, consult the type ranges for your architecture and make sure the results of assignments and arithmetic fit into their respective types.

unsigned int T = 0xFFFFFFFF equals -1 in C?? It should not work as two's complement right after unsigned? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking for code must demonstrate a minimal understanding of the problem being solved. Include attempted solutions, why they didn't work, and the expected results. See also: Stack Overflow question checklist
Closed 9 years ago.
unsigned int is the same as uint32_t? Which is 32 bits of unsigned integer and not two's complement, right?
uint32_t y = 0xFFFFFFFF; // gives -1, I don't get why it is negative?
uint8_t x = 0b11111111; // gives 255, I understand this
The bit pattern you are showing represents -1 for a signed integer, and the maximum value if it is representing an unsigned integer.
When you say it "gives -1", you should look at how you check that. If, for example, you print it with printf, you should make sure to use an unsigned format specifier.
uint32_t y = 0xFFFFFFFF; // gives -1 i dont get why is it negative?
The reason why it is negative is that, by definition, any number where the most significant bit is set is interpreted as negative when a signed type is used. In your case the value would normally not be considered negative, as uint32_t is an unsigned type.
Consequently this means that for example:
0xFFFF is -1 when considered as a 16-bit integer, but the same value is not negative when using a 32-bit integer.
The compiler sign-extends if the appropriate conversions are used. So the above value would still become -1 if it is appropriately promoted to a 32-bit int.
