Bitmasking with ~0 bitshifting using variables and constants [duplicate] - c

This question already has answers here:
Shifting a 32 bit integer by 32 bits
(2 answers)
Closed 9 years ago.
I am currently practicing bit shifting to test my knowledge and cement my abilities in C, but I have run into behavior I don't understand. This code demonstrates my problem:
#include <stdio.h>

int main() {
    int p = 32;
    printf("%d", ~0 << 32);  /* shift count is a constant expression */
    printf("%d", ~0 << p);   /* shift count is a variable */
    return 0;
}
~0 << 32 prints 0 (all zero bits), while ~0 << p prints -1 (all one bits). Why does C treat these two statements differently?

Because the literal 0 is of type int. When you perform ~0 you end up with a negative int. Left-shifting a negative int is undefined behavior. And shifting by an amount greater than or equal to the width of the type is also undefined behavior.
So the expected result is: anything.
Why a particular case of undefined behavior causes a certain thing to happen is nothing to dwell on or try to understand. You wrote some bugs and then the program stopped behaving in a predictable manner, simple as that.
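That said, if the goal is a mask of all ones or of the low n bits, you can stay out of undefined-behavior territory by doing the shift on an unsigned value and handling the full-width case explicitly. A minimal sketch (the helper name low_mask is mine, not from the question):

#include <limits.h>
#include <stdio.h>

/* Hypothetical helper: builds a mask of the low n bits without UB.
 * The shift is done on an unsigned value, and the n >= width case
 * is handled separately, because shifting by the full width is UB. */
static unsigned int low_mask(unsigned int n)
{
    unsigned int width = sizeof(unsigned int) * CHAR_BIT;
    return (n >= width) ? ~0u : ~(~0u << n);
}

int main(void)
{
    printf("%#x\n", low_mask(8));   /* 0xff */
    printf("%#x\n", low_mask(32));  /* 0xffffffff on a 32-bit unsigned int */
    return 0;
}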

Related

Bitwise operator gave me a negative number [duplicate]

This question already has answers here:
C bitwise negation creates negative output: [duplicate]
(3 answers)
Closed last year.
I just learned how bitwise operators work, but when I try to use them in C code, it doesn't work as I expect.
#include <stdio.h>

int main()
{
    int a = 7;
    printf("%d\n", (int)~a);
    return 0;
}
I expected the output to be 8, but it comes out as -8. My reasoning:

~0111 = 1000
Assuming int is 32 bits on your machine, 7 is 0b00000000000000000000000000000111 and ~7 becomes 0b11111111111111111111111111111000, which is -8 in two's complement.
Background
For signed values the most significant bit is used to determine if the value is negative or not. When the MSB is set, the value is negative.
In addition, (char)0b10000000 is -128 and (char)0b11111111 is -1 (assuming a signed 8-bit char in two's complement).
So counting works as follows:
0b10000000  0b10000001  [...]  0b11111111  0b00000000  [...]  0b01111111
      -128        -127  [...]          -1           0  [...]         127
That is also the reason why you get -128 when you compute 127 + 1 and store the result in a signed 8-bit char.
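If what you actually wanted was the 4-bit result 1000₂ = 8, mask the complement down to the width you care about. A small sketch, assuming a 32-bit int:

#include <stdio.h>

int main(void)
{
    int a = 7;
    printf("%d\n", ~a);        /* -8: all 32 bits flipped */
    printf("%d\n", ~a & 0xF);  /* 8: only the low 4 bits kept */
    return 0;
}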

Can anyone explain how this answer is calculated? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 1 year ago.
#include <stdio.h>

int main()
{
    printf("%x", -1 << 1);
    getchar();
    return 0;
}
Output:
The output depends on the compiler: with a 32-bit int it would be fffffffe, and with a 16-bit int it would be fffe.
This is from GeeksforGeeks.
-1 is a signed integer. A left shift on a signed integer with a negative value has undefined behavior according to the formal definition of the C language. This is part of the general rule that signed operations have undefined behavior when they overflow: the sign bit is set and must shift, but there's no room for it to go anywhere, so it overflows.
In practice, almost all platforms use two's complement representation for signed integers, and a left shift on a signed integer is treated as if the memory contained an unsigned integer. However, beware that compilers sometimes take advantage of the fact that this is undefined behavior to optimize in surprising ways.
-1 is all-bits-one, so a left shift drops the topmost bit and adds a 0 bit to the bottom. The result is 111…1110 in binary. If unsigned int is a 16-bit type, that's fffe in hexadecimal. If unsigned int is a 32-bit type, that's fffffffe. When that memory is read as a signed int, the value is -2 either way.
The %x specifier requires an unsigned int as an argument. Passing the signed version of the type is OK: it is converted to the unsigned value. The result of the conversion is 2^N - 2, where N is the number of bits in an unsigned int: as above, that's 0xfffe if N = 16 and 0xfffffffe if N = 32.
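If you want the fffffffe result without relying on undefined behavior, one option is to do the shift on an unsigned value, where the wraparound is fully defined. A sketch assuming a 32-bit unsigned int:

#include <stdio.h>

int main(void)
{
    unsigned int u = -1;     /* well-defined conversion: u becomes UINT_MAX */
    printf("%x\n", u << 1);  /* fffffffe with a 32-bit unsigned int */
    return 0;
}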
It seems to me this answer is just your 32-bit compiler (implicitly) casting -1 to a 32-bit signed integer, while the 16-bit compiler casts -1 to a "good old" 16-bit int.
Bad answer. A much better answer is given above by Gilles 'SO- stop being evil'. I am just editing this so it is not absolute nonsense.
As commented above, -1 is a signed integer, and left shifting (see here) a signed integer that is negative causes undefined behavior because the sign bit has nowhere to go. On all the systems I have worked with, that behavior is an overflow.
The reason why the results differ between 16-bit and 32-bit compilers may well just be that 16-bit compilers use 16-bit integers, hence the 16-bit result fffe. I have attached some code for you to try out, in case you find it useful.
Attached dirty test.
/* dirty_test.c
 * This program left bit-shifts a signed integer and
 * prints the byte content of the result to stdout.
 *
 * On *nix-like systems, compile with
 *
 *     cc dirty_test.c -o dirty_test.x
 */
#include <stdio.h>

int main(void)
{
    int a;

    /* Assigning a negative value makes the left shift
     * overflow (undefined behavior in standard C). */
    a = -1;

    printf("bytes before shift: %08x\n", a);
    printf("bytes after shift:  %08x\n", a << 1);

    return 0;
}

why is 00000000 - 00000001 = 11111111 in C unsigned char data type? [duplicate]

This question already has answers here:
Question about C behaviour for unsigned integer underflow
(3 answers)
Closed 3 years ago.
I observed that when an unsigned char variable stores the value 0 (00000000₂) and it gets decremented by 1 (00000001₂), the variable's value turns into 255 (11111111₂), which is the highest value an unsigned char variable can hold.
My question is: why does 00000000₂ - 00000001₂ turn into 11111111₂? (I want to see the arithmetic behind it.)
The C code in which I observed it was this one:
#include <stdio.h>

int main(void)
{
    unsigned char c = 0;
    unsigned char d = c - 1;
    printf("%d\n%d\n", c, d);
    return 0;
}
When it runs, the following output is shown:
0
255
See here:
Unsigned integer arithmetic is always performed modulo 2^n, where n is
the number of bits in that particular integer. E.g. for unsigned int,
adding one to UINT_MAX gives 0, and subtracting one from 0 gives
UINT_MAX.
So in your example, since unsigned char is usually 8 bits, you get 2^8 - 1 = 255.
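A short demonstration of the wraparound in both directions. Note that c - 1 is actually computed in int after the usual promotions; the modulo-2^8 reduction happens when the result is converted back into the unsigned char:

#include <stdio.h>

int main(void)
{
    unsigned char d = 0;
    d = d - 1;          /* 0 - 1 is -1 as int; stored mod 256 -> 255 */
    printf("%d\n", d);  /* 255 */
    d = d + 1;          /* 255 + 1 is 256 as int; stored mod 256 -> 0 */
    printf("%d\n", d);  /* 0 */
    return 0;
}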

Why does -0x80000000 + -0x80000000 == 0? [duplicate]

This question already has answers here:
Integer overflow concept
(2 answers)
Integer overflow and undefined behavior
(4 answers)
Closed 6 years ago.
While reading a book about programming tricks, I saw that -0x80000000 + -0x80000000 = 0. This didn't make sense to me, so I wrote the quick C program below to test it, and indeed the answer is 0:
#include <stdio.h>

int main()
{
    int x = -0x80000000;
    int y = -0x80000000;
    int z = x + y;
    printf("Z is: %d", z);
    return 0;
}
Could anyone shed any light as to why? I saw something about an overflow, but I can't see how an overflow causes 0 rather than an exception or other error. I get no warning or anything.
What's happening here is signed integer overflow, which is undefined behavior because the exact representation of signed integers is not defined.
In practice, however, most machines use 2's complement representation for signed integers, and this particular program exploits that.
0x80000000 is an unsigned integer constant: assuming int is 32-bit on your system, the value does not fit in int, so the constant has type unsigned int. The - negates it, but the result is still unsigned: negating 0x80000000 modulo 2^32 gives back 0x80000000. Converting that out-of-range value to int is implementation-defined; on two's complement machines it typically yields the smallest 32-bit int, whose bit pattern is again 0x80000000.
Addition in 2's complement representation has the feature that you don't need to worry about the sign: values are added exactly the same way as unsigned numbers.
So when we add x and y, we get this:
  0x80000000
+ 0x80000000
------------
 0x100000000
Because an int on your system is 32-bit, only the lowest 32 bits are kept. And the value of those bits is 0.
Again note that this is actually undefined behavior. It works because your machine uses 2's complement representation for signed integers and int is 32-bit. This is common for most machines / compilers, but not all.
What you're seeing is a lot of implementation-defined behavior, very likely triggering undefined behavior at runtime. More than that is impossible to know without details about your architecture and the book writer's.
The result isn't meaningful without additional information. If you want a definite answer, consult the type ranges for your architecture and make sure the results of assignments and arithmetic fit into their respective types.
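If you want this wrap-to-zero result with a guarantee, do the arithmetic in unsigned int, where overflow is defined to wrap modulo 2^N. A minimal sketch assuming a 32-bit unsigned int:

#include <stdio.h>

int main(void)
{
    unsigned int x = 0x80000000u;
    unsigned int z = x + x;   /* 0x100000000 mod 2^32 == 0, by definition */
    printf("Z is: %u\n", z);  /* Z is: 0 */
    return 0;
}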

How can 256 be represented by a char? [duplicate]

This question already has answers here:
what is char i=0x80 and why overflow did not happen in bit shifting
(6 answers)
Closed 9 years ago.
I ran the following code in Xcode and, to my surprise, the answer was 256.
Since char is only 8 bits long, I expected this to be 0, dropping the 1 out of the eighth bit position.
Can someone explain what is going on?
#include <stdio.h>

int main()
{
    unsigned char i = 0x80;
    printf("%d\n", i << 1);
    return 0;
}
It is being promoted to an integer, which can contain the value of 256 just fine. A cast to unsigned char will give you the result you expected:
printf("%d\n", (unsigned char)(i<<1) );
It is being promoted to an int when you do i << 1. For example, on x86, SHL works on 32-bit registers (even though your C type is 8 bits wide), so the shifted bit isn't discarded out of the 32-bit register. When you print that value with %d, you get 256. If you want an 8-bit left shift with unsigned char, you can always do (i << x) & 0xff.
In this case, what is happening is that the intermediate result of i<<1 is an integer, which can represent 256.
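Putting the two suggestions together, a small sketch contrasting the promoted result with the truncated one:

#include <stdio.h>

int main(void)
{
    unsigned char i = 0x80;
    printf("%d\n", i << 1);                   /* 256: shift happens in int */
    printf("%d\n", (unsigned char)(i << 1));  /* 0: truncated back to 8 bits */
    printf("%d\n", (i << 1) & 0xff);          /* 0: same effect via masking */
    return 0;
}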
