Let uint8 and uint16 be data types for 8-bit and 16-bit unsigned integers.
uint8 a = 1;
uint16 b = a << 8;
I tested this program on a 32-bit architecture with the result:
b = 256
Would the same program on a system with 8-bit registers yield the result:
b = 0 ?
because all bits in the register get shifted out by a << 8?
Registers are irrelevant. This is about the width of your types.
When you shift a value by more bits than its type possesses, the behaviour is undefined. The compiler, the program, the computer, the tax office can legally manifest any result accordingly. And no, that's not just theoretical.
However, operands in C are promoted before interesting things are done on them. So, your uint8_t becomes an int before the left-shift.
Now it depends on your architecture (as determined by your compiler configuration) as to what happens: is int on your implementation only 8-bit? No, it's not! The result, then — regardless of any "register size" — must abide by the rules of the language, yielding the mathematically appropriate answer (256). And, even if it were, you'd hit that undefined behaviour so the question would be moot.
Under the bonnet, if more than one register is needed to hold a variable, then that's what will and must happen (at whatever performance cost is implied as a result). That's if a register is used at all; remember, you're programming in an abstraction, not hand-crafting machine code. The program snippet you showed can be completely optimised away during compilation and doesn't require any runtime instructions at all.
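For reference, here is a compilable version of the question's snippet, a minimal sketch using the standard stdint.h names (uint8_t, uint16_t) in place of the question's uint8 and uint16:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t a = 1;
    /* a is promoted to int (at least 16 bits) before the shift,
       so no bits are lost regardless of register width */
    uint16_t b = a << 8;
    printf("%u\n", (unsigned)b); /* prints 256 */
    return 0;
}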
Would the same program on a system with 8-bit registers yield the result b = 0?
No.
In the expression a << 8 the variable a will get promoted to an int before the bit shift. And an int is guaranteed to be at least 16 bits.
b will have the value 256 on all platforms unless there's a bug in the compiler.
However, if you changed the second line to uint32 b = a << 16; you might get strange results. a would still get promoted to an int, but if int is two bytes long, then a << 16 will invoke undefined behavior.
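A portable way to avoid that trap is to cast the operand to the destination width before shifting, so the shift never happens in a possibly 16-bit int. A sketch, using the stdint.h spelling of the types:

uint8_t a = 1;
uint32_t b = (uint32_t)a << 16; /* the shift is performed in a 32-bit type: well defined */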
What is the more recommended or advisable way to convert an array of uint8_t at offset i to uint64_t, and why?
uint8_t * bytes = ...
uint64_t const v = ((uint64_t *)(bytes + i))[0];
or
uint64_t const v = ((uint64_t)(bytes[i+7]) << 56)
                 | ((uint64_t)(bytes[i+6]) << 48)
                 | ((uint64_t)(bytes[i+5]) << 40)
                 | ((uint64_t)(bytes[i+4]) << 32)
                 | ((uint64_t)(bytes[i+3]) << 24)
                 | ((uint64_t)(bytes[i+2]) << 16)
                 | ((uint64_t)(bytes[i+1]) << 8)
                 | ((uint64_t)(bytes[i]));
There are two primary differences.
One, the behavior of ((uint64_t *)(bytes + i))[0] is not defined by the C standard (unless certain prerequisites about what bytes point to are met). Generally, an array of bytes should not be accessed using a uint64_t type.
When memory defined as one type is accessed with another type, it is called aliasing, and the C standard only defines certain combinations of aliasing. Some compilers may support some aliasing beyond what the standard requires, but using it is not portable. Additionally, if bytes + i is not suitably aligned for a uint64_t, the access may cause an exception or otherwise malfunction.
Two, loading the bytes through aliasing, if it is defined (by the standard or by compiler extension), interprets the bytes using the memory ordering for the C implementation. Some C implementations store the bytes representing integers in memory from low address to high address for low-position-value bytes to high-position-value bytes, and some store them from high address to low address. (And they can be stored in non-consecutive orders too, although this is rare.) So loading the bytes this way will produce different values from the same bytes in memory based on what order the C implementation uses.
But loading the bytes and using shifts to combine them will always produce the same value from the same bytes in memory regardless of what order the C implementation uses.
The first method should be avoided, because there is no need for it. If one desires to interpret the bytes using the C implementation’s ordering, this can be done with:
uint64_t t;
memcpy(&t, bytes+i, sizeof t);
const uint64_t v = t;
Using memcpy provides a portable way of aliasing the uint64_t to store bytes into it. Good compilers recognize this idiom and will optimize the memcpy to a load from memory, if suitable for the target architecture (and if optimization is enabled).
If one desires to interpret the bytes using little-endian ordering, as shown in the code in the question, then the second method may be used. (Sometimes platforms will have routines that may provide more efficient code for this.)
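As a sketch of what the second method amounts to, the shift/OR chain can also be written as a loop; this form is equivalent and makes the little-endian interpretation explicit (the function name is my own):

#include <stddef.h>
#include <stdint.h>

/* Interpret 8 bytes starting at bytes[i] as a little-endian uint64_t,
   independent of the host's byte order. */
static uint64_t load_le64(const uint8_t *bytes, size_t i)
{
    uint64_t v = 0;
    for (size_t k = 0; k < 8; k++)
        v |= (uint64_t)bytes[i + k] << (8 * k);
    return v;
}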
You can also use memcpy:
uint64_t n;
memcpy(&n, bytes + i, sizeof(uint64_t));
const uint64_t v = n;
The first option has two big problems that qualify as undefined behavior (anything can happen):
A uint8_t* or array of uint8_t is not necessarily aligned the same way as required by a larger type like uint64_t. Simply casting to uint64_t* leads to misaligned access. This can cause hardware exceptions, program crashes, slower code etc, all depending on the alignment requirements of the specific target.
It violates the internal type system of C, where each object in memory known by the compiler has an "effective type" that the compiler keeps track of. Based on this, the compiler is allowed to make certain assumptions during optimization regarding whether a certain memory region has been accessed or not. If your code violates these type rules, as it would in this case, wrong machine code could get generated.
This is most commonly referred to as the strict aliasing rule and your cast followed by dereferencing would be a so-called "strict aliasing violation".
The second option is sound code, because:
When doing shifts or other forms of bitwise arithmetic, a large integer type should be used. That is, unsigned int or larger - depending on system. Using signed types or small integer types can lead to undefined behavior or unexpected results. See Implicit type promotion rules regarding problems with small integer types implicitly changing signedness in some expressions.
If not for the cast to uint64_t, then the bytes[i+7] << 56 shift would involve an implicit promotion of the left operand from uint8_t to int, which would be a bug. Because if the most significant bit (MSB) of the byte is set and we shift into/beyond the sign bit, we invoke undefined behavior - again, anything can happen.
And naturally we need to use a 64 bit type in this specific case or otherwise we wouldn't be able to shift as far as 56 bits. Shifting beyond the range of the type of the left operand is also undefined behavior.
Note that whether to pick the order of bytes[i+7] << 56 versus the alternative bytes[i+0] << 56 depends on the underlying CPU endianness. Bit shifts are nice since the actual shift ignores whether the destination type is big or little endian. But in this case you must know in advance which byte in the source array you want to correspond to the most significant one. The code you have here will work if the array was built based on little-endian formatting, since the last byte of the array is shifted to the most significant position.
As for the uint64_t const v = , the const qualifier is a bit strange to have at local scope like that. It's harmless but confusing and doesn't really add anything of value inside a local scope. I would just drop it.
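To make the promotion hazard described above concrete, here is a minimal sketch (the value 0x80 is chosen for illustration):

#include <stdint.h>

void promotion_hazard(void)
{
    uint8_t byte = 0x80u;
    uint64_t bad = byte << 24;            /* byte is promoted to (signed) int;
                                             0x80 << 24 shifts into the sign bit
                                             of a 32-bit int: undefined behavior */
    uint64_t good = (uint64_t)byte << 24; /* well defined: 0x80000000 */
    (void)bad; (void)good;
}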
When writing to a GPIO output register (BCM2711 chip on the Raspberry Pi 4), the location of my clear_register function call changes the result completely. The code is below.
void clear_register(long reg) {
    *(unsigned int*)reg = (unsigned int)0;
}

int set_pin(unsigned int pin_number) {
    long reg = SET_BASE; //0xFE200001c
    clear_register(SET_BASE);
    unsigned int curval = *(unsigned int*)reg;
    unsigned int value = (1 << pin_number);
    value |= curval;
    gpio_write(reg, value);
    return 1;
}
When written like this, the register gets cleared. However, if I call clear_register(SET_BASE) before long reg = SET_BASE, the registers don't get cleared. I can't tell why this would make a difference; any help is appreciated.
I can't reproduce it since I don't have the target hardware which allows writing to that address but you have several bugs and dangerous style issues:
0xFE200001c (9 hex digits, so a 64-bit signed type) does not necessarily fit inside a long (possibly 32 bits) and certainly does not fit inside an unsigned int (16 or 32 bits).
Even if that was a typo and you meant to write 0xFE20001c (32 bits), your program still suffers from "sloppy typing", which is when the programmer just types out long, int etc. without giving it any deeper thought. The outcome is strange, subtle and intermittent bugs. I think everyone can relate to the situation "just cast it to unsigned int and it works", but you have no idea why. Bugs like that are always caused by "sloppy typing".
You should be using the stdint.h types instead. uint32_t and so on.
Also related to "sloppy typing": there exist very few cases where you actually want to use signed numbers in embedded systems - certainly not when dealing with addresses and bits. Negative addresses in Linux are a thing - kernel space - but accidentally changing an address to a negative value is a bug.
Signed types only cause problems when doing bitwise arithmetic, and they come with undefined overflow behaviour. So you need to nuke the presence of every signed variable in your code unless it explicitly needs to be signed.
This means no sloppy long but the correct type, which is either uint32_t or uintptr_t if it should hold an address, as is the case in your example. It also means no sloppy hex constants: they must always have the u suffix (0xFE20001Cu) or they easily boil down to the wrong type.
For these reasons 1 << is always a bug in a C program, because 1 is of type signed int and you may end up shifting into the sign bit of that signed int, which is undefined behavior. There exists no reason why you shouldn't write 1u << , always.
Whenever you are dealing with hardware registers, you must always use volatile qualified pointers or you might end up with very strange bugs caused by the optimizer. No exceptions here either.
(unsigned int)0 is a pointless cast. If you wish to be explicit you could write 0u, but that doesn't change anything in case of assignment, since an "lvalue conversion" to the type of the left operand always happens upon assignment. 0u does make some static analysers happy though, MISRA C checkers etc.
With all bug fixes, your clear function should for example look something like this:
void clear_register (uintptr_t reg) {
    *(volatile uint32_t*)reg = 0;
}
Or better yet, don't write functions for such very basic things! *(volatile uint32_t*)SET_BASE = 0; is perfectly clear, readable and self-documented code. While clear_register(SET_BASE) suggests that something more advanced than just a simple write is taking place.
However, you could have placed the cast inside the SET_BASE macro. For details and examples see How to access a hardware register from firmware?
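For illustration, such a macro could look like the sketch below, using the corrected 32-bit address from earlier in this answer (verify it against your datasheet):

#include <stdint.h>

/* The cast lives inside the macro, so every access through SET_BASE
   is a volatile 32-bit access. */
#define SET_BASE (*(volatile uint32_t *)0xFE20001Cu)

void clear_set_register(void)
{
    SET_BASE = 0; /* replaces clear_register(SET_BASE) */
}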
I have a uint8_t that should contain the result of a bitwise calculation. The debugger says the variable is set correctly, but when I check the memory, the var is always 0. The code proceeds like the var is 0, no matter what the debugger tells me. Here's the code:
temp = (path_table & (1 << current_bit)) >> current_bit;
// temp is always 0, debugger shows correct value
if (temp > 0) {
    DS18B20_send_bit(pin, 0x01);
} else {
    DS18B20_send_bit(pin, 0x00);
}
Temp's a uint8_t, path_table's a uint64_t and current_bit's a uint8_t. I've tried to make them all uint64_t but nothing changed. I've also tried using unsigned long long int instead. Nothing again.
The code always enters the else clause.
Chip's Atmega4809, and uses uint64_t in other parts of the code with no issues.
Note - If anyone knows a more efficient/compact way to extract a single bit from a variable, I would really appreciate it if you could share ^^
1 is an integer constant of type int. The expression 1 << current_bit also has type int, but for 16-bit int the result of that expression is undefined when current_bit is larger than 14. The behavior being undefined in your case, it is plausible that your debugger presents results for the overall expression that seem inconsistent with the observed behavior. Using an unsigned constant instead, i.e. 1u, would at least make 1u << 15 well defined, but shifting by 16 or more positions would still be undefined behavior, because the shift count must be less than the width of the (promoted) left operand.
Solve this problem by performing the computation in a type wide enough to hold the result. Here's a compact, correct, and pretty clear way to correct your code to do that:
DS18B20_send_bit(pin, (path_table & (((uint64_t) 1) << current_bit)) != 0);
Or if path_table has an unsigned type then I prefer this, though it's more of a departure from your original:
DS18B20_send_bit(pin, (path_table >> current_bit) & 1);
Realization #1 here is that the AVR is a 1980s-1990s technology core. It is not an x64 PC that chews 64-bit numbers for breakfast, but an extremely inefficient 8-bit MCU. As such:
It likes 8 bit arithmetic.
It will struggle with 16 bit arithmetic, by doing tricks with 16 bit index registers, double accumulators or whatever 8 bit core tricks it prefers to do.
It will literally take ages to execute 32 bit arithmetic, by invoking software libraries inline.
It will probably melt through the floor if attempting 64 bit arithmetic.
Before you do anything else, you need to get rid of all 64 bit arithmetic and radically minimize the use of 32 bit arithmetic. Period. There should be no single variable of uint64_t in your code or you are doing it very very wrong.
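For example, instead of a uint64_t bit table, the same data can be kept as a byte array and indexed directly. A sketch (path_bytes is a hypothetical replacement for the question's path_table):

#include <stdint.h>

uint8_t path_bytes[8]; /* 64 bits' worth of flags, stored as bytes */

/* Extract one bit using nothing but cheap 8-bit operations:
   pick the byte, then shift within it. */
uint8_t get_path_bit(uint8_t current_bit)
{
    return (path_bytes[current_bit / 8u] >> (current_bit % 8u)) & 1u;
}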
With this revelation also comes the fact that on 8-bit MCUs, int is always 16 bits.
In the code 1<<current_bit, the integer constant 1 is of type int. Meaning that if current_bit is 15 or larger, you will shift bits into the sign bit of this temporary int. This is always a bug; strictly speaking it is undefined behavior, and in practice you might end up with a random change of sign of your numbers.
To avoid this, never use any form of bitwise operators on signed numbers. When mixing integer constants such as 1 with bitwise operators, change them to 1u to avoid bugs like the one mentioned.
If anyone knows a more efficient/compact way to extract a single bit from a variable i would really appreciate if you could share
The most efficient way in C is: uint8_t variable; ... if(variable & (1u << bits)). This should translate to the relevant "branch if bit set" instruction.
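A compilable form of that idiom, as a sketch with placeholder names:

#include <stdbool.h>
#include <stdint.h>

/* For a constant bit number, this can compile down to a single
   bit-test/branch instruction on AVR. */
bool bit_is_set(uint8_t variable, uint8_t bit)
{
    return (variable & (1u << bit)) != 0;
}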
My general advice would be to find your toolchain's disassembler and see what machine code the C code actually generated. You don't have to be an assembler guru to read it; peeking at the instruction set should be enough.
There is some debate between my colleague and me about the U suffix after hexadecimally represented literals. Note, this is not a question about the meaning of this suffix or about what it does. I have found several of those topics here, but I have not found an answer to my question.
Some background information:
We're trying to come to a set of rules that we both agree on, to use that as our style from that point on. We have a copy of the 2004 Misra C rules and decided to use that as a starting point. We're not interested in being fully Misra C compliant; we're cherry picking the rules that we think will most increase efficiency and robustness.
Rule 10.6 from the aforementioned guidelines states:
A “U” suffix shall be applied to all constants of unsigned type.
I personally think this is a good rule. It takes little effort, looks better than explicit casts and more explicitly shows the intention of a constant. To me it makes sense to use it for all unsigned constants, not just numerics, since enforcing a rule doesn't happen by allowing exceptions, especially for a commonly used representation of constants.
My colleague, however, feels that the hexadecimal representation doesn't need the suffix. Mostly because we almost exclusively use it to set micro-controller registers, and signedness doesn't matter when setting registers to hex constants.
My Question
My question is not one about who is right or wrong. It is about finding out whether there are cases where the absence or presence of the suffix changes the outcome of an operation. Are there any such cases, or is it a matter of consistency?
Edit, for clarification: this is specifically about setting micro-controller registers by assigning hexadecimal values to them. Would there be a case where the suffix could make a difference there? I feel like it wouldn't. As an example, the Freescale Processor Expert generates all register assignments as unsigned.
Appending a U suffix to all hexadecimal constants makes them unsigned as you already mentioned. This may have undesirable side-effects when these constants are used in operations along with signed values, especially comparisons.
Here is a pathological example:
#define MY_INT_MAX 0x7FFFFFFFU // blindly applying the rule
if (-1 < MY_INT_MAX) {
    printf("OK\n");
} else {
    printf("OOPS!\n");
}
The C rules for signed/unsigned conversions are precisely specified, but somewhat counter-intuitive: because MY_INT_MAX is unsigned, the -1 is converted to unsigned int and becomes UINT_MAX, so the above code will indeed print OOPS.
The MISRA-C rule is precise as it states A “U” suffix shall be applied to all constants of unsigned type. The word unsigned has far reaching consequences and indeed most constants should not really be considered unsigned.
Furthermore, the C Standard makes a subtle difference between decimal and hexadecimal constants:
A hexadecimal constant is considered unsigned if its value can be represented by the unsigned integer type but not the signed integer type of the same size, for types int and larger.
This means that on 32-bit 2's complement systems, 2147483648 is a long or a long long whereas 0x80000000 is an unsigned int. Appending a U suffix may make this more explicit in this case but the real precaution to avoid potential problems is to mandate the compiler to reject signed/unsigned comparisons altogether: gcc -Wall -Wextra -Werror or clang -Weverything -Werror are life savers.
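A quick way to see this difference, a sketch whose commented results assume 32-bit int and 64-bit long long:

#include <stdio.h>

int main(void)
{
    /* Same value, different types: the decimal constant must be signed,
       so it grows to long long; the hex constant becomes unsigned int. */
    printf("%zu\n", sizeof(2147483648));  /* 8 */
    printf("%zu\n", sizeof(0x80000000));  /* 4 */
    return 0;
}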
Here is how bad it can get:
if (-1 < 0x8000) {
    printf("OK\n");
} else {
    printf("OOPS!\n");
}
The above code should print OK on 32-bit systems and OOPS on 16-bit systems: with 16-bit int, 0x8000 does not fit in int and therefore has type unsigned int, so the -1 is converted to 65535 and the comparison fails. To make things even worse, it is still quite common to see embedded projects use obsolete compilers which do not even implement the Standard semantics for this issue.
For your specific question: the defined values used specifically to set micro-processor registers via assignment (assuming these registers are memory-mapped) need not have the U suffix at all. The register lvalue should have an unsigned type, and the hex value will be signed or unsigned depending on its value, but the operation will proceed the same. The opcode for setting a signed number or an unsigned number is the same on your target architecture and on any architecture I have ever seen.
With all integer-constants
Appending u/U ensures the integer-constant will be some unsigned type.
Without a u/U
For a decimal-constant, the integer-constant will be some signed type.
For a hexadecimal/octal-constant, the integer-constant will be a signed or unsigned type, depending on its value and the integer type ranges.
Note: All integer-constants have positive values.
//      +-------- unary operator
//      |+-+----- integer-constant
int x = -123;
absence or presence of the suffix changes the outcome of an operation?
When is this important?
With various expressions, the signedness and width of the math need to be controlled, and preferably not surprising.
// Examples: assume 32-bit `unsigned` and `long`, 64-bit `long long`

// Bad: signed int overflow (UB)
unsigned a = 4000 * 1000 * 1000;
// OK
unsigned b = 4000u * 1000 * 1000;

// Undefined behavior
unsigned c = 1 << 31;
// OK
unsigned d = 1u << 31;

printf("Size %zu\n", sizeof(4294967295));  // 8: the decimal constant is `long long`
printf("Size %zu\n", sizeof(0xFFFFFFFF));  // 4: the hexadecimal constant is `unsigned`

// 2 ** 63
long long e = -9223372036854775808;     // C99: bad, "9223372036854775808" not representable
long long f = -9223372036854775807 - 1; // OK
long long g = -9223372036854775808u;    // implementation-defined behavior **

some_unsigned_type h_max = -1;  // OK, max value for the target type
some_unsigned_type i_max = -1u; // OK, but not the max value for wide unsigned types

// When negating a negative `int`
unsigned j = 0 - INT_MIN;  // typically int overflow, UB
unsigned k = 0u - INT_MIN; // never UB
** or an implementation-defined signal is raised.
For the specific question, which was loading register(s): the U makes it an unsigned value, but whether the compiler treats the n-bit word pattern as a signed or unsigned value, it will move the same bit pattern, assuming there isn't any size extension that would propagate an MSB.

The difference that might matter is whether the register load operation sets any processor condition flags based on a signed or unsigned load. As an overall guide, if the processor supports storing a constant to a configuration register or a memory address, then loading a peripheral register is unlikely to set the processor's NEG condition flag. Loading a general-purpose register connected to an ALU, one that can be the target of an arithmetic operation like add, increment or decrement, might set a negative flag on loading, so that e.g. a trailing "branch (if) negative" opcode would execute the branch. You would want to check the processor's references to be sure. Small-instruction-set processors tend to have only a load-register instruction, while larger instruction sets are more likely to have a "load unsigned" variant that doesn't set the NEG bit in the processor's flags; but again, check the processor's references, and its errata (the boo-boo list) if you need a specific flag state.

All of this only tends to come up when an optimizing compiler rearranges code around an inline assembly instruction, and in other uncommon situations. Examine the generated assembly code, turn off some or all compiler optimizations for the module when needed, etc.
When I write the following program and use the GNU C++ compiler, the output is 1 which I think is due to the rotation operation performed by the compiler.
#include <iostream>

int main()
{
    int a = 1;
    std::cout << (a << 32) << std::endl;
    return 0;
}
But logically, as it's said that the bits are lost if they overflow the bit width, the output should be 0. What is happening?
The code is on ideone, http://ideone.com/VPTwj.
This is caused by a combination of undefined behaviour in C and the fact that code generated for IA-32 processors has a 5-bit mask applied to the shift count. This means that on IA-32 processors, the range of a shift count is 0-31 only. [1]
From The C Programming Language [2]:
The result is undefined if the right operand is negative, or greater than or equal to the number of bits in the left expression’s type.
From the IA-32 Intel Architecture Software Developer's Manual [3]:
The 8086 does not mask the shift count. However, all other IA-32 processors (starting with the Intel 286 processor) do mask the shift count to 5 bits, resulting in a maximum count of 31. This masking is done in all operating modes (including the virtual-8086 mode) to reduce the maximum execution time of the instructions.
[1] http://codeyarns.com/2004/12/20/c-shift-operator-mayhem/
[2] A7.8 Shift Operators, Appendix A. Reference Manual, The C Programming Language
[3] SAL/SAR/SHL/SHR – Shift, Chapter 4. Instruction Set Reference, IA-32 Intel Architecture Software Developer's Manual
In C++, a shift is only well-defined if you shift a value by fewer steps than the width of the type. If int is 32 bits, then only 0 up to and including 31 steps is well-defined.
So, why is this?
If you take a look at the underlying hardware that performs the shift: if it only has to look at the lower five bits of a value (in the 32-bit case), it can be implemented using fewer logic gates than if it has to inspect every bit of the value.
Answer to question in comment
C and C++ are designed to run as fast as possible, on any available hardware. Today, the generated code is simply a "shift" instruction, regardless of how the underlying hardware handles values outside the specified range. If the languages had specified how shift should behave, the generated code would have to check that the shift count is in range before performing the shift. Typically, this would yield three instructions (compare, branch, shift). (Admittedly, in this case it would not be necessary as the shift count is known.)
It's undefined behaviour according to the C++ standard:
The value of E1 << E2 is E1 left-shifted E2 bit positions; vacated bits are zero-filled. If E1 has an unsigned type, the value of the result is E1 × 2^E2, reduced modulo one more than the maximum value representable in the result type. Otherwise, if E1 has a signed type and non-negative value, and E1 × 2^E2 is representable in the result type, then that is the resulting value; otherwise, the behavior is undefined.
The answers of Lindydancer and 6502 explain why (on some machines) it happens to be a 1 that is being printed (although the behavior of the operation is undefined). I am adding the details in case they aren't obvious.
I am assuming that (like me) you are running the program on an Intel processor. GCC generates these assembly instructions for the shift operation:
movl $32, %ecx
sall %cl, %eax
On the topic of sall and other shift operations, page 624 in the Instruction Set Reference Manual says:
The 8086 does not mask the shift count. However, all other Intel Architecture processors (starting with the Intel 286 processor) do mask the shift count to five bits, resulting in a maximum count of 31. This masking is done in all operating modes (including the virtual-8086 mode) to reduce the maximum execution time of the instructions.
Since the lower 5 bits of 32 are zero, 1 << 32 is equivalent to 1 << 0, which is 1.
Experimenting with larger numbers, we would predict that
cout << (a << 32) << " " << (a << 33) << " " << (a << 34) << "\n";
would print 1 2 4, and indeed that is what is happening on my machine.
It doesn't work as expected because you're expecting too much.
In the case of x86, the hardware doesn't care about shift operations where the count is bigger than the size of the register (see for example the SHL instruction description in the x86 reference documentation for an explanation).
The C++ standard didn't want to impose an extra cost by specifying what to do in these cases, because the generated code would have been forced to add extra checks and logic for every parametric shift.
With this freedom implementers of compilers can generate just one assembly instruction without any test or branch.
A more "useful" and "logical" approach would have been for example to have (x << y) equivalent to (x >> -y) and also the handling of high counters with a logical and consistent behavior.
However this would have required a much slower handling for bit shifting so the choice was to do what the hardware does, leaving to the programmers the need to write their own functions for side cases.
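Such a function might look like this sketch, assuming 32-bit unsigned (the name safe_shl is my own):

/* A "total" left shift that defines the cases the standard leaves
   undefined: out-of-range counts yield 0, negative counts shift right. */
unsigned safe_shl(unsigned x, int n)
{
    if (n <= -32 || n >= 32)
        return 0;       /* every bit would be shifted out */
    if (n < 0)
        return x >> -n; /* the (x << y) == (x >> -y) convention above */
    return x << n;
}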
Given that different hardware does different things in these cases, what the standard says is basically "whatever happens when you do strange things, just don't blame C++, it's your fault", translated into legalese.
Shifting a 32 bit variable by 32 or more bits is undefined behavior and may cause the compiler to make daemons fly out of your nose.
Seriously, most of the time the output will be 0 (if int is 32 bits or less) since you're shifting the 1 until it drops off again and nothing but 0 is left. But the compiler may optimize it to do whatever it likes.
See the excellent LLVM blog entry What Every C Programmer Should Know About Undefined Behavior, a must-read for every C developer.
Since you are bit-shifting an int by 32 bits, you'll get warning C4293: '<<' : shift count negative or too big, undefined behavior in VS. This means that you're shifting beyond the width of the integer and the answer could be ANYTHING, because it is undefined behavior.
You could try the following, which actually gives the output as 0 after 32 single-bit left shifts on common platforms. (Strictly, shifting a 1 into and past the sign bit of a signed int is still not well-defined; in practice, each shift simply moves the bit out.)
#include <iostream>
#include <cstdio>
using namespace std;

int main()
{
    int a = 1;
    a <<= 31;
    cout << (a <<= 1);
    return 0;
}
I had the same problem and this worked for me:
f = ((long long)1 << (i-1));
where i can be any value bigger than 32. The 1 has to be a 64-bit integer for the shift to work.
Try using 1LL << 60. Here LL is for long long. You can now shift by up to 62 bits (shifting a 1 into or past the sign bit of a long long is still undefined).