I am taking a C final in a few hours, and I am going over past exams trying to make sure I understand problems I previously missed. I had the question below and simply left it blank because I didn't know the answer, and looking at it now I am still not sure what the answer would be. The question is:
signed short int c = 0xff00;
unsigned short int d, e;
c = c + '\xff';
d = c;
e = d >> 2;
printf("%4x, %4x, %4x\n",c,d,e);
We were asked to show what values would be printed. It is the addition of '\xff' that is throwing me off. I have solved similar problems in binary, but this hex representation is confusing me.
Could anyone explain to me what would happen here?
'\xff' is equivalent to all 1s in binary, or -1 as a signed value (on platforms where plain char is signed).
So initially c = 0xff00
c = c + '\xff'
In binary is
c = 1111 1111 0000 0000 + 1111 1111 1111 1111
Which yields signed short int
c = 1111 1110 1111 1111 (0xfeff)
c and d will be equal due to assignment but e is right shifted twice
e = 0011 1111 1011 1111 (0x3fbf)
I took the liberty of testing this. In the code I added a short int f assigned the value of c - 1.
unsigned short int c = 0xff00, f;
unsigned short int d, e;
f = c-1;
c = c + '\xff';
d = c;
e = (d >> 2);
printf("%4x, %4x, %4x, %4x\n",c,d,e,f);
And I get the same result for both c and f. Neither f = c - 1 nor c + '\xff' overflows here (and in any case this would be integer overflow, not buffer overflow).
feff, feff, 3fbf, feff
As noted by Zan Lynx, I was using unsigned short int in my sample code but the original post uses signed short int. With signed short int, the first value will be printed with 4 extra f's (fffffeff), because it is sign-extended when promoted to int for printf.
0xff00 means the binary string 1111 1111 0000 0000.
'\xff' is a character with numeric code of 0xff and thus simply 1111 1111.
signed short int c = 0xff00;
is initializing c with an out-of-range value (0xff00 = 65280 in decimal, which is greater than SHRT_MAX). The conversion to signed short is implementation-defined; on a two's-complement machine c ends up holding -256, whose bit pattern is 0xff00.
The first addition adds the 16-bit number, stored in c:
1111 1111 0000 0000
Plus the number coded as the value of the character constant enclosed in single quotes. In C you can write a character by its hexadecimal code, prefixed with \x, like this: '\xNN', where NN is a two-digit hex number. The value of that character is NN itself, so '\xFF' is a somewhat unusual way to write 0xFF.
The addition is to be performed using a signed short (16 bits, signed) plus a char (8 bits, signed). For it, the compiler promotes that 8-bit value to a 16-bit value, preserving the original sign by doing a sign-extension conversion.
So before the addition, '\xFF' is decoded as the 8-bit signed number 0xFF (1111 1111), which in turn is promoted to the 16-bit number 1111 1111 1111 1111 (the sign must be preserved).
The final addition is
1111 1111 0000 0000
1111 1111 1111 1111
-------------------
1111 1110 1111 1111
Which is the hexadecimal number 0xFEFF. That is the new value in variable c.
Then, there is d = c;. d is unsigned short: it has the same size as a signed short, but the sign is not considered here; the MSb is just another bit. As both variables have the same size, the value in d has exactly the same bit pattern we had in c. That is:
d = 1111 1110 1111 1111
The difference is that any arithmetic or logical operation with this number won't take sign into account. This means, for example, that conversions that change the size of the number won't extend the sign.
e = d >> 2;
e gets the value of d shifted two bits to the right. The >> operator behaves differently depending on whether the left operand is signed or not. If it is signed, the shift typically preserves the sign (bits entering the number from the left have the same value as the sign bit the number had before the shift; strictly speaking this is implementation-defined for negative values). If it is unsigned, zeroes enter from the left.
d is unsigned, so the value e gets is the result of shifting d two bits to the right, entering zeroes from the left:
e = 0011 1111 1011 1111
Which is 0x3FBF.
Finally, values printed are c,d,e:
0xFEFF, 0xFEFF, 0x3FBF
But you may see 0xFFFFFEFF as the first printed number. This is because %x expects an int, not a short. The 4 in "%4x" means: "use at least 4 digits to print the number, but if more digits are needed, use as many as needed". To print 0xFEFF as an int (a 32-bit int here), it must be promoted again, and since it's signed, this is done with sign extension. So 0xFEFF becomes 0xFFFFFEFF, which needs 8 digits to be printed, so 8 digits are printed.
The second and third %4x print unsigned values (d and e). These values are promoted to 32-bit ints, but this time, unsigned. So the second value is promoted to 0x0000FEFF and the third one to 0x00003FBF. These two numbers don't need 8 digits to be printed, only 4, so you see only 4 digits for each number (try changing the last two %4x to %2x and you will see that the numbers are still printed with 4 digits).
Hey, I was trying to figure out why -1 << 4 (left shift) is FFFFFFF0. After reading around the web I came to know that negative numbers have a sign bit of 1. Since writing -1 with an extra leading bit would need 33 bits, which isn't possible, -1 is represented as 1111 1111 1111 1111 1111 1111 1111 1111.
For instance :-
#include<stdio.h>
void main()
{
printf("%x",-1<<4);
}
In this example we know that –
The internal representation of -1 is all 1's: 1111 1111 1111 1111 1111 1111 1111 1111 on a 32-bit compiler.
When we bitwise-shift a negative number left by 4 bits, the least significant 4 bits are filled with 0's.
The format specifier %x prints the given integer value in hexadecimal format.
After shifting 1111 1111 1111 1111 1111 1111 1111 0000 = FFFFFFF0 will be printed.
Source for the above
http://www.c4learn.com/c-programming/c-bitwise-shift-negative-number/
First, according to the C standard, the result of a left-shift on a signed variable with a negative value is undefined. So from a strict language-lawyer perspective, the answer to the question "why does -1 << 4 result in XYZ" is "because the standard does not specify what the result should be."
What your particular compiler is really doing, though, is left-shifting the two's-complement representation of -1 as if that representation were an unsigned value. Since the 32-bit two's-complement representation of -1 is 0xFFFFFFFF (or 11111111 11111111 11111111 11111111 in binary), the result of shifting left 4 bits is 0xFFFFFFF0 or 11111111 11111111 11111111 11110000. This is the result that gets stored back in the (signed) variable, and this value is the two's-complement representation of -16. If you were to print the result as an integer (%d) you'd get -16.
This is what most real-world compilers will do, but do not rely on it, because the C standard does not require it.
First things first: the tutorial uses void main. The comp.lang.c frequently asked question 11.15 should be of interest when assessing the quality of the tutorial:
Q: The book I've been using, C Programing for the Compleat Idiot, always uses void main().
A: Perhaps its author counts himself among the target audience. Many books unaccountably use void main() in examples, and assert that it's correct. They're wrong, or they're assuming that everyone writes code for systems where it happens to work.
That said, the rest of the example is ill-advised. The C standard does not define the behaviour of signed left shift. However, a compiler implementation is allowed to define behaviour for those cases that the standard leaves purposefully open. For example GCC does define that
all signed integers have two's-complement format
<< is well-defined on negative signed numbers and >> works as if by sign extension.
Hence, -1 << 4 on GCC is guaranteed to result in -16; the bit representation of these numbers, given 32 bit int are 1111 1111 1111 1111 1111 1111 1111 1111 and 1111 1111 1111 1111 1111 1111 1111 0000 respectively.
Now, there is another undefined behaviour here: %x expects an argument that is an unsigned int, however you're passing in a signed int, with a value that is not representable in an unsigned int. However, the behaviour on GCC / with common libc's most probably is that the bytes of the signed integer are interpreted as an unsigned integer, 1111 1111 1111 1111 1111 1111 1111 0000 in binary, which in hex is FFFFFFF0.
However, a portable C program should really never
assume two's complement representation - when the representation is of importance, use unsigned int or even uint32_t
assume that the << or >> on negative numbers have a certain behaviour
use %x with signed numbers
write void main.
A portable (C99, C11, C17) program for the same use case, with defined behaviour, would be
#include <stdio.h>
#include <inttypes.h>
int main(void)
{
printf("%" PRIx32, (uint32_t)-1 << 4);
}
I'm struggling with determining the limits (minimum and maximum values) of each integer type. I can't understand the following problem.
To get the maximum value of char, for example, I use: ~0 >> 1
Which should work like this:
convert 0 to binary: 0000 0000 (I assume that char is stored on 8 bits)
negate it: 1111 1111 (now I'm out of char max size)
move one place right: 0111 1111 (I get 127 which seems to be correct)
Now I want to present this result using printf function.
Why exactly do I have to use cast like this:
printf("%d\n", (unsigned char)(~0) >> 1)?
I just don't get it. I assume that it has something to do with point 2 when I get out of char range, but I'm not sure.
I will be grateful if you present me more complex explanation to this problem.
Please don't use these kinds of tricks. They might work on ordinary machines but they are potentially unportable and hard to understand. Instead, use the symbolic constants from the header file limits.h, which contains the limits for each of the basic types. For instance, CHAR_MAX is the upper bound for a char and CHAR_MIN is the lower bound. Limits for the numeric types declared in stddef.h and stdint.h (such as SIZE_MAX) can be found in stdint.h.
Now for your question: arithmetic is done on values of type int by default, unless you cause the operands involved to have a different type. This happens for various reasons, like one of the variables involved having a different type, or you using a literal of a different type (like 1.0 or 1L or 1U). Even more importantly, the type of an arithmetic expression propagates from the inside to the outside. Thus, in the statement
char c = 1 + 2 + 3;
The expression 1 + 2 + 3 is evaluated as type int and only converted to char immediately before assigning. Even more important is that in the C language, you can't do arithmetic on types smaller than int. For instance, in the expression c + 1 where c is of type char, the compiler inserts an implicit conversion from char to int before adding one to c. Thus, a statement like
c = c + 1;
actually behaves like this in C:
c = (char)((int)c + 1);
Thus, ~0 >> 1 actually evaluates to 0xffffffff (-1) on a usual 32-bit architecture, because the type int usually has 32 bits and right-shifting of signed types usually replicates the sign bit, so the most significant bit stays a one. Casting to unsigned char causes truncation, with the result being 0xff (255). All arguments but the first to printf are part of a variable argument list, which is a bit complicated but basically means that all types smaller than int are converted to int, float is converted to double, and all other types are left unchanged.
Now, how can we get this right? On an ordinary machine with two's complement and no padding bits one could use expressions like these to compute the largest and smallest char, assuming sizeof (char) < sizeof (int):
(1 << CHAR_BIT - 1) - 1; /* largest char */
-(1 << CHAR_BIT - 1); /* smallest char */
For other types, this is going to be slightly more difficult since we need to avoid overflow. Here is an expression that works for all signed integer types on an ordinary machine, where type is the type you want to have the limits of:
(type)(((uintmax_t)1 << sizeof (type) * CHAR_BIT - 1) - 1) /* largest */
(type)-((uintmax_t)1 << sizeof (type) * CHAR_BIT - 1) /* smallest */
For an unsigned type type, you could use this to get the maximum:
~(type)0
Please notice that all these tricks should not appear in portable code.
The exact effect of your actions is different from what you assumed.
0 is not 0000 0000. 0 has type int, which means that it is most likely 0000 0000 0000 0000 0000 0000 0000 0000, depending on how many bits int has on your platform. (I will assume 32-bit int.)
Now, ~0 is, expectedly, 1111 1111 1111 1111 1111 1111 1111 1111, which still has type int and is a negative value.
When you shift it to the right, the result is implementation-defined. Right-shifting negative signed integer values in C does not guarantee that you will obtain 0 in the sign bit. Quite the opposite, most platforms will actually replicate the sign bit when right-shifting. Which means that ~0 >> 1 will still give you 1111 1111 1111 1111 1111 1111 1111 1111.
Note, that even if you do this on a platform that shifts-in a 0 into the sign bit when right-shifting negative values, you will still obtain 0111 1111 1111 1111 1111 1111 1111 1111, which is in general case not the maximum value of char you were trying to obtain.
If you want to make sure that a right-shift operation shifts-in 0 bits from the left, you have to either 1) shift an unsigned bit-pattern or 2) shift a signed, but positive bit-pattern. With negative bit patterns you risk running into the sign-extending behavior, meaning that for negative values 1 bits would be shifted-in from the left instead of 0 bits.
Since C language does not have shifts that would work in the domain of [unsigned/signed] char type (the operand is promoted to int anyway before the shift), what you can do is make sure that you are shifting a positive int value and make sure that your initial bit-mask has the correct number of 1s in it. That is exactly what you achieve by using (unsigned char) ~0 as the initial mask. (unsigned char) ~0 will participate in the shift as a value of type int equal to 0000 0000 0000 0000 0000 0000 1111 1111 (assuming 8-bit char). After the shift you will obtain 0000 0000 0000 0000 0000 0000 0111 1111, which is exactly what you were trying to obtain.
That only works for unsigned integers. For signed integers, both right-shifting a negative number and the result of bit-wise inversion are implementation-defined. Not only does it depend on the representation of negative values, but also on what CPU instruction the compiler uses to perform the right shift (some CPUs have no arithmetic right shift, for instance).
So, unless you make additional constraints for your implementation, it is not possible to determine the limits of signed integers. This implies there is no completely portable way (for signed integers).
Note that whether char is signed or unsigned is also implementation-defined, and that (unsigned char)(~0) >> 1 is subject to integer promotions, so it will not yield a character result but an int (which makes the format specifier correct, although presumably unintended).
Use limits.h to get macros for your implementation's integer limits. This file has to be provided by any standard-compliant C compiler.
I have the following code:
unsigned char chr = 234; // 1110 1010
unsigned long result = 0;
result = chr << 24;
And now result will equal 18446744073340452864, which is 1111 1111 1111 1111 1111 1111 1111 1111 1110 1010 0000 0000 0000 0000 0000 0000 in binary.
Why is there sign extension being done, when chr is unsigned?
Also, if I change the shift from 24 to 8 then result is 59904, which is 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 1110 1010 0000 0000 in binary. Why is there no extension done here? (Any shift of 23 or less doesn't have sign extension done to it.)
Also on my current platform sizeof(long) is 8.
What are the rules for automatically converting to larger types when shifting? It seems to me that if the shift is 23 or less then chr gets converted to an unsigned type, and if it's 24 or more it gets converted to a signed type? (And why is sign extension even being done at all with a left shift?)
With chr = 234, the expression chr << 24 is evaluated in isolation: chr is promoted to (a 32-bit signed) int and shifted left 24 bits, yielding a negative int value. When you assign to a 64-bit unsigned long, the sign-bit is propagated through the most significant 32 bits of the 64-bit value. Note that the method of calculating chr << 24 is not itself affected by what the value is assigned to.
When the shift is just 8 bits, the result is a positive (32-bit signed) integer, and that sign bit (0) is propagated through the most significant 32-bits of the unsigned long.
To understand this it's easiest to think in terms of values.
Each integral type has a fixed range of representable values. For example, unsigned char usually ranges from 0 to 255; other ranges are possible, and you can find your compiler's choice by checking UCHAR_MAX in limits.h.
When doing a conversion between integral types; if the value is representable in the destination type, then the result of the conversion is that value. (This may be a different bit-pattern, e.g. sign extension).
If the value is not representable in the destination type then:
for signed destinations, the behaviour is implementation-defined (which may include raising a signal).
for unsigned destinations, the value is adjusted modulo the maximum value representable in the type, plus one.
Modern systems typically handle an out-of-range signed assignment by truncating the excess high bits: the destination keeps the low-order bit pattern, and its value is whatever that bit pattern represents in the destination type.
Moving onto your actual example.
In C, there is something called the integral promotions. With <<, this happens to the left-hand operand; with the arithmetic operators it happens to all operands. The effect of integral promotions is that any value of a type smaller than int is converted to the same value with type int.
Further, the definition of << 24 is multiplication by 2^24 (where this has the type of the promoted left operand), with undefined behaviour if this overflows. (Informally: shifting into the sign bit causes UB).
So, putting all the conversions explicitly, your code is
result = (unsigned long) ( ((int)chr) * 16777216 )
Now, the result of this calculation is 3925868544 , which if you are on a typical system with 32-bit int, is greater than INT_MAX which is 2147483647, so the behaviour is undefined.
If we want to explore results of this undefined behaviour on typical systems: what may happen is the same procedure I outlined earlier for out-of-range assignment. The bit-pattern of 3925868544 is of course 1110 1010 0000 0000 0000 0000 0000 0000. Treating this as the pattern of an int using 2's complement gives the int -369098752.
Finally we have the conversion of this value to unsigned long. -369098752 is out of range for unsigned long; and the rule for an unsigned destination is to adjust the value modulo ULONG_MAX+1. So the value you are seeing is 18446744073709551615 + 1 - 369098752.
If your intent was to do the calculation in unsigned long precision, you need to make one of the operands unsigned long; e.g. do ((unsigned long)chr) << 24. (Note: 24ul won't work, the type of the right-hand operand of << or >> does not affect the left-hand operand).
I'm right-shifting -109 by 5 bits, and I expect -3, because
-109 = -1101101 (binary)
shift right by 5 bits
-1101101 >>5 = -11 (binary) = -3
But, I am getting -4 instead.
Could someone explain what's wrong?
Code I used:
int16_t a = -109;
int16_t b = a >> 5;
printf("%d %d\n", a,b);
I used GCC on linux, and clang on osx, same result.
The thing is that you are not considering the representation of negative numbers correctly. With right shifting, the kind of shift (arithmetic or logical) depends on the type of the value being shifted. If you cast your value to an unsigned 16-bit value first, the shift becomes logical:
int16_t b = ((uint16_t)a) >> 5;
You are using -109 (16 bits) in your example. 109 in bits is:
00000000 01101101
If you take 109's 2's complement you get:
11111111 10010011
Then, you are right shifting by 5 the number 11111111 10010011:
__int16_t a = -109;
__int16_t b = a >> 5; // arithmetic shifting
__int16_t c = ((__uint16_t)a) >> 5; // logical shifting
printf("%d %d %d\n", a,b,c);
Will yield:
-109 -4 2044
The result of right shifting a negative value is implementation defined behavior, from the C99 draft standard section 6.5.7 Bitwise shift operators paragraph 5 which says (emphasis mine going forward):
The result of E1 >> E2 is E1 right-shifted E2 bit positions. If E1 has an unsigned type
or if E1 has a signed type and a nonnegative value, the value of the result is the integral
part of the quotient of E1 / 2^E2. If E1 has a signed type and a negative value, the
resulting value is implementation-defined.
If we look at gcc C Implementation-defined behavior documents under the Integers section it says:
The results of some bitwise operations on signed integers (C90 6.3, C99 and C11 6.5).
Bitwise operators act on the representation of the value including both the sign and value bits, where the sign bit is considered immediately above the highest-value value bit. Signed ‘>>’ acts on negative numbers by sign extension.
That's pretty clear about what's happening: when right-shifting signed integers, negative values undergo sign extension, and the leftmost (most significant) bit is the sign bit.
So, 1000 ... 0000 (32 bits) is the most negative number that you can represent with 32 bits.
Because of this, when you have a negative number and you shift right, a thing called sign extension happens, which means that the left most significant bit is extended, in simple terms it means that, for a number like -109 this is what happens:
Before shifting you have (16bit):
1111 1111 1001 0011
Then you shift 5 bits right (after the pipe are the discarded bits):
XXXX X111 1111 1100 | 1 0011
The X's are the new spaces that appear in your integer bit representation, that due to the sign extension, are filled with 1's, which give you:
1111 1111 1111 1100 | 1 0011
So by shifting: -109 >> 5, you get -4 (1111 .... 1100) and not -3.
Confirming the results with the 2's complement:
+3 = 0... 0000 0011
-3 = ~(0... 0000 0011) + 1 = 1... 1111 1100 + 1 = 1... 1111 1101
+4 = 0... 0000 0100
-4 = ~(0... 0000 0100) + 1 = 1... 1111 1011 + 1 = 1... 1111 1100
Note: remember that taking a number's 2's complement means you first negate the bits of the positive number and only then add 1.
Pablo's answer is essentially correct, but there are two small bits (no pun intended!) that may help you see what's going on.
C (like pretty much every other language) uses what's called two's complement, which is simply a particular way of representing negative numbers (it's used to avoid the problems that come up with other ways of handling negative numbers in binary with a fixed number of digits). A positive number in two's complement looks just like any other binary number, except that the leftmost bit must be 0; it's basically the sign placeholder. The conversion process to turn a positive number into its negative counterpart is reasonably simple computationally:
Take your number
00000000 01101101 (It has 0s padding it to the left because it's 16 bits. If it were a wider type, it'd be padded with more zeros, etc.)
Flip the bits
11111111 10010010
Add one.
11111111 10010011.
This is the two's complement number that Pablo was referring to. It's how C holds -109, bitwise.
When you logically shift it to the right by five bits you would APPEAR to get
00000111 11111100.
This number is most definitely not -4. (It doesn't have a 1 in the first bit, so it's not negative, and it's way too large to be 4 in magnitude.) Why is C giving you negative 4 then?
The reason is basically that the ISO C standard doesn't specify how a given compiler must treat bit-shifting of negative numbers (the result is implementation-defined). GCC does what's called sign extension: the idea is to pad the left bits with 1s (if the initial number was negative before shifting), or with 0s (if the initial number was positive before shifting).
So instead of the 5 zeros that happened in the above bit-shift, you instead get:
11111111 11111100. That number is in fact negative 4! (Which is what you were consistently getting as a result.)
To see that that is in fact -4, you can just convert it back to a positive number using the two's complement method again:
00000000 00000011 (bits flipped)
00000000 00000100 (add one).
That's four alright, so your original number (11111111 11111100) was -4.
I am trying to understand why my program
#include<stdio.h>
void main()
{
printf("%x",-1<<4);
}
prints fffffff0.
What is this program doing, and what does the << operator do?
The << operator is the left shift operator; the result of a << b is a shifted to the left by b bits.
The problem with your code is that you are left-shifting a negative integer, and this results in undefined behavior (although your compiler may place some guarantees on this operation); also, %x is used to print in hexadecimal an unsigned integer, and you are feeding to it a signed integer - again undefined behavior.
As for why you are seeing what you are seeing: on 2's complement architectures -1 is represented as "all ones"; so, on a computer with 32-bit int you'll have:
11111111111111111111111111111111 = -1 (if interpreted as a signed integer)
now, if you shift this to the left of 4 positions, you get:
11111111111111111111111111110000
The %x specifier makes printf interprets this stuff as an unsigned integer, which, in hexadecimal notation, is 0xfffffff0. This is easy to understand, since 4 binary digits equal a single hexadecimal digit; the 1111 groups in binary become f in hex, the last 0000 in binary is that last 0 in hex.
Again, all the behavior explained here is just the way your specific compiler works; as far as the C standard is concerned, this is all UB. This is very intentional: historically, different platforms had different ways to represent negative numbers, and the shift instructions of various processors have different subtleties, so the "defined behavior" we get for shift operators is more or less the "safe subset" common to most "normal" architectures.
It means: take the bit representation of -1 and shift it to the left 4 times.
This means take
11111111 11111111 11111111 11111111 = ffffffff
And shift:
11111111 11111111 11111111 11110000 = fffffff0
The "%x" format specifier means print it out in hexadecimal notation.
It's the left shift binary operator. You're left shifting -1 by 4 bits:
-1 == 1111 1111 1111 1111 1111 1111 1111 1111(2) == FFFFFFFF
-1 << 4 == 1111 1111 1111 1111 1111 1111 1111 0000(2) == FFFFFFF0
"%x" means your integer will be displayed in hexadecimal value.
-1 << 4 means that the binary value of -1 will be shifted by 4 bits.
"<<" is left shift operator. -1<<4 means left shifting -1 by 4. Since -1 is 0xffffffff, you will get 0xfffffff0
More can be found on Wikipedia: http://en.wikipedia.org/wiki/Bitwise_operation