Let's say I have the following code lines:
int a; // 4-byte integer
char b, c, d, e;
b = (char)(a >> 24);
c = (char)(a >> 16);
d = (char)(a >> 8);
e = (char)a;
Let's also assume that the system is storing the bytes in little-endian mode and a = 100.
When using the explicit cast like that, do the left-most bytes disappear?
I guess that after executing the above lines, the variables will hold these values: b=0, c=0, d=0, e=100. Is that right?
You guessed right! But your explanation is not completely correct:
The behavior of the above code does not depend on the endianness of the system: if int is 32 bits and char is 8 bits, a >> 24 is the high-order byte and a & 255 is the low-order byte, regardless of how the bytes are stored in memory.
Explicit casts such as (char) are not needed, because C implicitly converts the expression value to the type of the assignment destination. I suppose the programmer wrote it this way to silence a compiler warning; Microsoft compilers are notoriously vocal about losing precision in assignments.
The leftmost bytes do not disappear: the value is reduced modulo 2^N, where N is the width of char (hopefully 8 bits in your case), so (char)a is essentially the same as a & 255. But if char is signed, the result of the conversion is implementation-defined when the value exceeds CHAR_MAX. It is wise to use unsigned types for this kind of bit manipulation.
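For example, a minimal sketch of the unsigned variant (assuming <stdint.h> types are available):

#include <stdint.h>

uint32_t a = 100;
uint8_t b = (uint8_t)(a >> 24);  /* high-order byte: 0 */
uint8_t c = (uint8_t)(a >> 16);  /* 0 */
uint8_t d = (uint8_t)(a >> 8);   /* 0 */
uint8_t e = (uint8_t)a;          /* low-order byte: 100 */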
I need help understanding this behavior; if anyone has run into a similar problem, your input would help me.
Here's my code:
char c=0xAB;
printf("01:%x\n", c<<2);
printf("02:%x\n", c<<=2);
printf("03:%x\n", c<<=2);
Why does the program print:
01:fffffeac
02:ffffffac
03:ffffffb0
What I expected it to print, that is, what I worked out on paper, is:
01:fffffeac
02:fffffeac
03:fffffab0
I obviously realized I didn't know what the <<= operator was doing; I thought it meant c = c << 2.
If anyone can clarify this, I would be grateful.
You're correct in thinking that
c <<= 2
is equivalent to
c = c << 2
But you have to remember that c is a single byte (on almost all systems): it can only hold eight bits, while a value like 0xeac requires 12 bits.
When the value 0xeac is assigned back to c then the value will be truncated and the top bits will simply be ignored, leaving you with 0xac (which when promoted to an int becomes 0xffffffac).
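A small sketch of what happens (assuming an 8-bit signed char and a 32-bit int; note that, strictly speaking, left-shifting a negative value is undefined behavior, as the next answer explains, so this shows the typical two's-complement outcome):

#include <stdio.h>

int main(void)
{
    char c = 0xAB;             /* with a signed 8-bit char: -85 */
    printf("01:%x\n", c << 2); /* c is promoted to int: prints fffffeac */
    c <<= 2;                   /* the result is truncated back to one byte: 0xac */
    printf("02:%x\n", c);      /* promoted again: prints ffffffac */
    return 0;
}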
<<= means shift and assign. It's the compound assignment version of c = c << 2;.
There are several problems here:
char c=0xAB; is not guaranteed to give a positive result, since char could be an 8-bit signed type. See Is char signed or unsigned by default?. If char is signed, 0xAB will get converted to a negative number in an implementation-defined way. Avoid this bug by always using uint8_t when dealing with raw binary bytes.
c<<2 is subject to implicit type promotion rules - specifically, c will get promoted to a signed int. If the previous issue occurred and your char got a negative value, c now holds a negative int.
Left-shifting negative values in C invokes undefined behavior - it is always a bug. Shifting signed operands in general is almost never correct.
%x isn't a suitable format specifier to print the int you ended up with, nor is it suitable for char.
As for how to fix the code, it depends on what you wish to achieve. A good general approach is to cast to uint32_t before shifting.
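A minimal sketch of a fixed version, assuming the goal is simply to shift the byte's value left by two:

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    uint8_t c = 0xAB;               /* unsigned, so 0xAB always fits */
    uint32_t r = (uint32_t)c << 2;  /* the shift is done in a wide unsigned type */
    printf("01:%" PRIx32 "\n", r);  /* prints 01:2ac */
    return 0;
}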
Let uint8 and uint16 be data types for 8-bit and 16-bit unsigned integers.
uint8 a = 1;
uint16 b = a << 8;
I tested this program on a 32-bit architecture, with the result
b = 256
Would the same program on a system with 8-bit registers yield the result
b = 0
because all the bits in the register get shifted out to zero by a << 8?
Registers are irrelevant. This is about the width of your types.
When you shift a value by more bits than it possesses, the behaviour is undefined. The compiler, the program, the computer, the tax office may then legally produce any result whatsoever. And, no, that's not just theoretical.
However, operands in C are promoted before interesting things are done on them. So, your uint8_t becomes an int before the left-shift.
Now, what happens depends on your architecture (as determined by your compiler configuration): is int only 8 bits wide on your implementation? It cannot be: the standard requires int to be at least 16 bits. So the result, regardless of any "register size", must abide by the rules of the language and be the mathematically appropriate answer (256). And even if int could be that narrow, you'd hit that undefined behaviour, so the question would be moot.
Under the bonnet, if more than one register is needed to hold a variable, then that's what will and must happen (at whatever performance cost is implied as a result). That's if a register is used at all; remember, you're programming in an abstraction, not hand-crafting machine code. The program snippet you showed can be completely optimised away during compilation and doesn't require any runtime instructions at all.
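A tiny illustration of the promotion (the sizes shown assume a typical 32-bit int):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t a = 1;
    /* the operand is promoted to int, so the result of the shift is int-sized */
    printf("%zu\n", sizeof(a << 8));  /* sizeof(int), e.g. 4 */
    printf("%d\n", a << 8);           /* 256 */
    return 0;
}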
Would the same program on a system with 8-bit registers yield the result b = 0?
No.
In the expression a << 8 the variable a will get promoted to an int before the bit shift. And an int is guaranteed to be at least 16 bits.
b will have the value 256 on all platforms unless there's a bug in the compiler.
However, if you changed the second line to uint32 b = a << 16; you might get strange results. a would still get promoted to an int, but if int is two bytes long, then a << 16 will invoke undefined behavior.
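A sketch of the portable fix (assuming uint32 here means uint32_t from <stdint.h>): cast before shifting, so the shift happens in a 32-bit unsigned type.

#include <stdint.h>

uint8_t a = 1;
uint32_t b = (uint32_t)a << 16;  /* well-defined even where int is 16 bits */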
This is a bit of a general question and not completely related to the C programming language, but it's what I'm studying at the moment.
Why does an integer take up 4 bytes, or however many bytes it is on a given system?
Why does it not take up just 1 byte per integer?
For example why does the following take up 8 bytes:
int a = 1;
int b = 1;
Thanks
I am not sure whether you are asking why int objects have fixed sizes instead of variable sizes or whether you are asking why int objects have the fixed sizes they do. This answers the former.
We do not want the basic types to have variable lengths. That makes it very complicated to work with them.
We want them to have fixed lengths, because then it is much easier to generate instructions to operate on them. Also, the operations will be faster.
If the size of an int were variable, consider what happens when you do:
b = 3;
b += 100000;
scanf("%d", &b);
When b is first assigned, only one byte is needed. Then, when the addition is performed, more space is needed for the value. But b might have neighbors in memory, so the program cannot just grow it in place. It has to release the old memory and allocate new memory somewhere.
Then, when we do the scanf, the program does not know how much data is coming. scanf will have to do some very complicated work to grow b over and over again as it reads more digits. And, when it is done, how does it let you know where the new b is? There has to be some mechanism to update the location of b. This is hard and complicated and will cause additional problems.
In contrast, if b has a fixed size of four bytes, this is easy. For the assignment, write 3 to b. For the addition, add 100000 to the value in b and write the result to b. For the scanf, pass the address of b to scanf and let it write the new value to b. This is easy.
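A tiny illustration of the fixed-size behaviour (assuming a typical 4-byte int):

#include <stdio.h>

int main(void)
{
    int b = 3;
    printf("%zu\n", sizeof b);  /* e.g. 4 */
    b += 100000;
    printf("%zu\n", sizeof b);  /* still 4: the size is fixed at compile time */
    return 0;
}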
The basic integral type int is guaranteed to have at least 16 bits; "at least" means that compilers/architectures may also provide more bits, and on modern 32/64-bit systems int will most likely comprise 32 bits, i.e. 4 bytes (cf., for example, cppreference.com):
Integer types
... int (also accessible as signed int): This is the most optimal
integer type for the platform, and is guaranteed to be at least 16
bits. Most current systems use 32 bits (see Data models below).
If you want an integral type with exactly 8 bits, use int8_t or uint8_t.
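A quick sketch to check this on your own system (the int size shown is typical, not guaranteed):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    printf("int:    %zu bytes\n", sizeof(int));    /* typically 4 */
    printf("int8_t: %zu byte\n", sizeof(int8_t));  /* exactly 1 */
    return 0;
}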
It doesn't; it's implementation-defined. A signed int in gcc on an Atmel 8-bit microcontroller, for example, is a 16-bit integer. An unsigned int is also 16 bits, but ranges from 0 to 65535 since it's unsigned.
The fact that an int uses a fixed number of bytes (such as 4) is a compiler/CPU efficiency and limitation, designed to make common integer operations fast and efficient.
There are types (such as BigInteger in Java) that take a variable amount of space. These types would have two fields: the first being the number of bytes used to represent the integer, and the second being the array of those bytes. You could define your own VarInt type, something like:
typedef struct {
    char length;               /* number of bytes in use */
    char bytes[];              /* flexible array member (C99) */
} VarInt;

/* Note: initializing a flexible array member requires static storage
   and is a GCC extension; this is illustrative only. */
static VarInt one    = {1, {1}};          /* 2 bytes: value 1 */
static VarInt v257   = {2, {1, 1}};       /* 3 bytes: 1 + 1*256 = 257 */
static VarInt v65537 = {4, {1, 0, 1, 0}}; /* 5 bytes: 65537, least significant byte first */
and so on, but this would not be very fast to perform arithmetic on. You'd have to decide how you would want to treat overflow; resizing the storage would require dynamic memory allocation.
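As a sketch, a hypothetical helper (varint_value is my name, not part of the original) to read such a value back, assuming bytes[] stores the least significant byte first as in the examples above:

unsigned long long varint_value(const VarInt *v)
{
    unsigned long long result = 0;
    for (int i = v->length - 1; i >= 0; i--)
        result = (result << 8) | (unsigned char)v->bytes[i];
    return result;  /* e.g. varint_value(&v257) == 257 */
}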
I'm trying to translate this snippet :
ntohs(*(UInt16*)VALUE) / 4.0
and a few other similar ones, from C to Swift.
The problem is, I have very little knowledge of Swift and I just can't understand what this snippet does... Here's all I know:
ntohs swaps network byte order to host byte order
VALUE is a char[32]
I just discovered that in Swift, (UInt(data.0) << 6) + (UInt(data.1) >> 2) does the same thing. Could someone please explain?
I want to end up with a Swift UInt (UInt64).
Thanks!
VALUE is a pointer to 32 bytes (char[32]).
The pointer is cast to UInt16 pointer. That means the first two bytes of VALUE are being interpreted as UInt16 (2 bytes).
* dereferences the pointer, so we get the two bytes of VALUE as a 16-bit number. However, it is in network byte order, so we cannot do integer arithmetic on it directly.
We then swap the endianness to host byte order and get a normal integer.
We divide the integer by 4.0.
To do the same in Swift, let's just compose the byte values to an integer.
let host = (UInt(data.0) << 8) | UInt(data.1)
Note that to divide by 4.0 you will have to convert the integer to Float.
The C you quote is technically incorrect, although it will be compiled as intended by most production C compilers.¹ A better way to achieve the same effect, which should also be easier to translate to Swift, is
unsigned int val = ((((unsigned int)(unsigned char)VALUE[0]) << 8) | // ² ³
                    (((unsigned int)(unsigned char)VALUE[1]) << 0)); // ⁴
double scaledval = ((double)val) / 4.0; // ⁵
The first statement reads the first two bytes of VALUE, interprets them as a 16-bit unsigned number in network byte order, and converts them to host byte order (whether or not those byte orders are different). The second statement converts the number to double and scales it.
¹ Specifically, *(UInt16*)VALUE provokes undefined behavior because it violates the type-based aliasing rules, which are asymmetric: a pointer with character type may be used to access an object with any type, but a pointer with any other type may not be used to access an object with (array-of-)character type.
² In C, a cast to unsigned int here is necessary in order to make the subsequent shifting and or-ing happen in an unsigned type. If you cast to uint16_t, which might seem more appropriate, the "usual arithmetic conversions" would then convert it to int, which is signed, before doing the left shift. This would provoke undefined behavior on a system where int was only 16 bits wide (you're not allowed to shift into the sign bit). The inner cast to unsigned char guards against sign extension in case plain char is signed. Swift almost certainly has completely different rules for arithmetic on types with small ranges; you'll probably need to cast to something before the shift, but I cannot tell you what.
³ I have over-parenthesized this expression so that the order of operations will be clear even if you aren't terribly familiar with C.
⁴ Left shifting by zero bits has no effect; it is only included for parallel structure.
⁵ An explicit conversion to double before the division operation is not necessary in C, but it is in Swift, so I have written it that way here.
It looks like the code is taking the single byte VALUE[0] and dereferencing it, which should retrieve a number from a low memory address, 1 to 127 (possibly 255).
Whatever number is there is then divided by 4.
I genuinely can't believe my interpretation is correct, and I can't check it because I have no laptop. I really think there may be a typo in your code, as this is not a good thing to do for portable, reusable code.
I must stress that the string is not converted to a number before being used.
I'm working with CodeVisionAVR Evaluation V2.05.0, which uses its own C compiler (see its C Compiler Reference). I ran into a problem when I tried this code:
unsigned int n;
long int data;
data|=(1<<n);
The problem is that when n is greater than 15, the value of data does not change, although when I try:
data|=(1<<16);
the result is correct.
Any help, please.
In your code, the type of 1 is (as always) int. So, if sizeof (int) is smaller than sizeof (long), it stands to reason that you can't shift an int over all the bits in a long.
The solution is of course to use an (unsigned) long constant:
data |= 1UL << n;
I made it unsigned since unsigned integers are better suited for bitwise operators. The type of the literal 1UL is unsigned long. I find using suffixes nicer than casting, since it's way shorter and part of the literal itself, rather than having a literal of the wrong type and casting it.
Many people seem to expect that in an expression such as:
unsigned long long veryBig = 1 << 35; /* BAD CODE */
the right-hand side should somehow magically adapt to the "needs" of the left-hand side, and thus become of type unsigned long long (which is guaranteed to be at least 64 bits wide, plenty for 1 << 35). That does not happen; C doesn't work like that. The expression is evaluated first, and then the result is converted (if necessary) to the type on the left for the assignment to work.
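The fix, as above, is to give the literal the right type with a suffix:

unsigned long long veryBig = 1ULL << 35;  /* the ULL suffix makes the shift happen in unsigned long long */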
If sizeof(int)==2, then 1<<n is of type int, and on such a platform 1<<16 comes out as 0. Thus data is not changed.
Probably because int is 16 bits. When you use integer literals, the compiler can work out that the result is larger than fits in an int and will evaluate the whole shift at compile time; but when you use a variable, the compiler can't do that, because it doesn't know the value of n.
I want to thank everyone; from your answers I really understood the problem,
which is that the 1 in (1<<n) is an int with a size of 2 bytes.
I fixed the problem by making the 1 an unsigned long: (1UL<<n).
This is my first question here and you are awesome.
Thank you, and I'm sorry for my bad English.