Negative and oversized bit shifts are undefined behaviour, but I need to do positive/negative shifts in many places in my code, so I wrote a generic function for that:
uint64_t shift_bit(uint64_t b, int i)
{
    // detect oversized shift
    assert(-63 <= i && i <= 63);
    return i >= 0 ? b << i : b >> -i;
}
That takes care of negative shifts. But what about oversized shifts?
Is a C compiler allowed to optimize away the assert()? (Or to replace it with assert(true), which has the same effect).
Is there a way to code around it?
PS: Please base your answers only on what the standard guarantees, not on what your specific compiler does. My program needs to be 100% portable, as it gets compiled for many different platforms, and with many different compilers.
assert() expands to nothing in a release build, i.e. when NDEBUG is defined:
#ifdef NDEBUG
#define assert(exp) ((void)0)
#else
#define assert(exp) /*implementation*/
#endif
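If the check has to survive a release build, one way to code around it is to make the guard ordinary control flow instead of an assert. A minimal sketch, assuming that returning 0 is an acceptable result for an oversized shift (the function name and that policy are mine, not from the question):

#include <stdint.h>

/* Fully defined for every int i: the guard is real code, so the
   compiler cannot drop it the way NDEBUG drops assert(). */
static uint64_t shift_bit_checked(uint64_t b, int i)
{
    if (i <= -64 || i >= 64)
        return 0; /* treat oversized shifts as shifting everything out */
    return i >= 0 ? b << i : b >> -i;
}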
I found that the following code (a C file) compiles successfully on x86_64 with gcc 10.1.0.
#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>
typedef union {
    __m64 x;
#if defined(__arm__) || defined(__aarch64__)
    int32x2_t d[1];
#endif
    uint8_t i8u[8];
} u_m64;

int main()
{
    u_m64 a, b, c;
    c.x = a.x + b.x;
    return 0;
}
But there are lots of add functions for __m64, like _mm_add_pi16, _mm_hadd_pi16, _mm_add_si64 and so on (the same applies to __m128, __m256...). So which one is called by the operator '+'? And how can 'operator overloading' be used in a C file?
Yeah, gcc and clang provide basic operators for builtin SIMD types, which is frankly so beyond stupid that it's not even remotely funny :(
Anyhow, this mechanism doesn't work the same way as operator overloading in C++. What it's actually doing is promoting __m64 to a true intrinsic type (such as int/float), meaning the operators work at the language level rather than the overload level. (That's why it works in C.)
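For reference, here is a minimal sketch of that language-level mechanism, GCC/clang's vector_size extension (the typedef name v4hi is hypothetical, not from any header):

/* The element type is fixed by the typedef, so '+' has exactly one
   meaning for this type: an element-wise 16-bit add. */
typedef short v4hi __attribute__ ((vector_size (8))); /* 4 x int16 */

v4hi add4(v4hi a, v4hi b)
{
    return a + b;
}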
In this case I would assume it is calling add (rather than horizontal add).
However, we now hit the biggest problem: the contents of __m64 are NOT known at compile time!
Within any given __m64, we could be storing any permutation of:
8 x int8
4 x int16
2 x int32
8 x uint8
4 x uint16
2 x uint32
For addition (ignoring the saturated variants) that means the addition operator could be calling any one of these perfectly valid choices:
_mm_add_pi8
_mm_add_pi16
_mm_add_pi32
I don't know which of those instructions gcc/clang ends up calling in this context, but I do know that it's always going to be the wrong instruction 66.66% of the time :(
I am seeing strange behaviour from a small piece of C code.
I want to store the result of a boolean expression in a variable, but it does not seem to work.
Here is the code:
#define rtCP_Constant_Value_fklq (uint8_t) 1; //Simulink const
#define rtCP_Constant_Value (uint8_t) 0; //Simulink const
uint16_t rtb_tobit;
volatile unsigned char rtb_y;
uint8_t asr_ena_=14;
rtb_tobit = (1 << rtCP_Constant_Value_fklq);
uint8_t temp = ((uint8_t)rtb_tobit) & asr_ena_;
rtb_y = (temp !=(rtCP_Constant_Value));
I have tested this snippet of code with two compilers, Renesas SH 9_4_1 and gcc-arm-none-eabi on a Nucleo eval board.
In both of them the variable rtb_y is always zero.
The debugger shows that the expression (temp != (rtCP_Constant_Value)) is true, but I cannot understand why the variable rtb_y is always equal to zero.
Could someone explain to me why? Is this strange behaviour because of the C standard I used?
It is a really bad idea to use macros in the way you are using them. You need to be very careful, generally, to use parentheses in the right places. Also you should not include the ; in the macro. For example, this is better:
#define rtCP_Constant_Value_fklq ((uint8_t) 1) //Simulink const
However, it is not really possible to offer any more help for your question than this, because your example will not compile, due to the inclusion of the ;. If you update the question with code that compiles, it may be possible to help further.
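For what it's worth, here is a minimal compilable version of the snippet, with the semicolons removed from the macros and the parentheses added (the main() wrapper and the printf are mine). With these fixes rtb_y comes out as 1:

#include <stdint.h>
#include <stdio.h>

#define rtCP_Constant_Value_fklq ((uint8_t) 1) // Simulink const
#define rtCP_Constant_Value      ((uint8_t) 0) // Simulink const

int main(void)
{
    uint16_t rtb_tobit;
    volatile unsigned char rtb_y;
    uint8_t asr_ena_ = 14;

    rtb_tobit = (1 << rtCP_Constant_Value_fklq);    /* 1 << 1 == 2 */
    uint8_t temp = ((uint8_t)rtb_tobit) & asr_ena_; /* 2 & 14 == 2 */
    rtb_y = (temp != (rtCP_Constant_Value));        /* 2 != 0, so 1 */

    printf("rtb_y = %u\n", (unsigned)rtb_y);
    return 0;
}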
I would like to do the following, but the compiler doesn't like it:
unsigned short foo = 1;
// do something with foo
#if sizeof(short) * CHAR_BIT > 16
foo &= 0xffff;
#endif
I know this expression can always be fully evaluated at compile time, but maybe it's only evaluated after the preprocessor does its thing? Is this possible in ANSI C or do I just have to do the check at run time?
You can't use sizeof in a preprocessor expression. You might want to do something like this instead:
#include <limits.h>
#if SHRT_MAX > 32767
/* do something */
#endif
If your aim is to stop compilation when a data type is the wrong size, the following technique is useful:
struct _check_type_sizes
{
    int int_is_4_bytes[(sizeof(int) == 4) ? 1 : -1];
    int short_is_2_bytes[(sizeof(short) == 2) ? 1 : -1];
};
(sizeof is an operator evaluated here by the compiler, not the preprocessor.)
The main disadvantage of this method is that the compiler error isn't very obvious. Make sure you write a very clear comment.
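If a C11 compiler is available, _Static_assert achieves the same compile-time stop with a readable message; a sketch of the same two checks (an alternative to, not a replacement for, the pre-C11 struct trick above):

_Static_assert(sizeof(int) == 4, "int is not 4 bytes");
_Static_assert(sizeof(short) == 2, "short is not 2 bytes");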
As the question title says, can the C preprocessor do arithmetic?
E.g.:
#define PI 3.1416
#define OP PI/100
#define OP2 PI%100
Is there any way to have OP and/or OP2 calculated in the preprocessing phase?
Integer arithmetic? Run the following program to find out:
#include "stdio.h"
int main() {
#if 1 + 1 == 2
printf("1+1==2\n");
#endif
#if 1 + 1 == 3
printf("1+1==3\n");
#endif
}
The answer is "yes": there is a way to make the preprocessor perform integer arithmetic, which is to use it in a preprocessor condition.
Note however that your examples are not integer arithmetic. I just checked, and gcc's preprocessor fails if you try to make it do float comparisons. (The standard only allows integer constant expressions in #if, so this is not a gcc quirk.)
Regular macro expansion does not evaluate integer expressions; it leaves that to the compiler, as can be seen by preprocessing (-E in gcc) the following:
#define ONEPLUSONE (1 + 1)
#if ONEPLUSONE == 2
int i = ONEPLUSONE;
#endif
Result is int i = (1 + 1); (plus probably some stuff to indicate source file names and line numbers and such).
The code you wrote doesn't actually make the preprocessor do any calculation. A #define does simple text replacement, so with this defined:
#define PI 3.1416
#define OP PI/100
This code:
if (OP == x) { ... }
becomes
if (3.1416/100 == x) { ... }
and then it gets compiled. The compiler in turn may choose to take such an expression and calculate it at compile time and produce code equivalent to this:
if (0.031416 == x) { ... }
But this is the compiler, not the preprocessor.
To answer your question, yes, the preprocessor CAN do some arithmetic, as long as it is integer arithmetic (the floating-point form of your example is not accepted in #if). This can be seen when you write something like this:
#if (3141/100 == 31)
    printf("yo");
#elif (3+3 == 6)
    printf("hey");
#endif
YES, I mean: it can do arithmetic :)
As demonstrated in 99 bottles of beer.
Yes, it can be done with the Boost Preprocessor. And it is compatible with pure C, so you can use it in C programs with C-only compilation. Your code involves floating point numbers though, so I think that needs to be done indirectly.
#include <boost/preprocessor/arithmetic/div.hpp>
#include <boost/preprocessor/arithmetic/mul.hpp>
#include <boost/preprocessor/arithmetic/sub.hpp>

BOOST_PP_DIV(11, 5) // expands to 2

#define KB 1024
#define HKB BOOST_PP_DIV(KB, 2)
#define REM(A, B) BOOST_PP_SUB(A, BOOST_PP_MUL(B, BOOST_PP_DIV(A, B)))
#define RKB REM(KB, 2)

int div = HKB;
int rem = RKB;
This preprocesses to (check with gcc -E)
int div = 512;
int rem = 0;
Thanks to this thread.
Yes.
I can't believe that no one has yet linked to a certain obfuscated C contest winner. The guy implemented an ALU in the preprocessor via recursive includes. Here is the implementation, and here is something of an explanation.
Now, that said, you don't want to do what that guy did. It's fun and all, but look at the compile times in his hint file (not to mention the fact that the resulting code is unmaintainable). More commonly, people use the pre-processor strictly for text replacement, and evaluation of constant integer arithmetic happens either at compile time or run time.
As others noted however, you can do some arithmetic in #if statements.
Be careful when doing arithmetic: add parentheses.
#define SIZE4 4
#define SIZE8 8
#define TOTALSIZE SIZE4 + SIZE8
If you ever use something like:
unsigned int i = TOTALSIZE/4;
and expect i to be 3, you would get 4 + 2 = 6 instead.
Add parentheses:
#define TOTALSIZE (SIZE4 + SIZE8)
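A complete demonstration of the difference (the _BAD/_GOOD names are mine):

#define SIZE4 4
#define SIZE8 8
#define TOTALSIZE_BAD  SIZE4 + SIZE8    /* expands to 4 + 8   */
#define TOTALSIZE_GOOD (SIZE4 + SIZE8)  /* expands to (4 + 8) */

unsigned int bad  = TOTALSIZE_BAD / 4;  /* 4 + 8/4 == 6 */
unsigned int good = TOTALSIZE_GOOD / 4; /* (4 + 8)/4 == 3 */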
As an exercise, I'd like to write a macro which tells me if an integer variable is signed. This is what I have so far and I get the results I expect if I try this on a char variable with gcc -fsigned-char or -funsigned-char.
#define ISVARSIGNED(V) (V = -1, (V < 0) ? 1 : 0)
Is this portable? Is there a way to do this without destroying the value of the variable?
#define ISVARSIGNED(V) ((V)<0 || (-(V))<0 || ((V)-1)<0)
doesn't change the value of V. The third test handles the case where V == 0.
On my compiler (gcc/cygwin) this works for int and long but not for char or short (char and short are promoted to int in these expressions, so they always test as signed).
#define ISVARSIGNED(V) ((V)-1<0 || -(V)-1<0)
also does the job in two tests.
If you're using GCC you can use the typeof keyword to not overwrite the value:
#define ISVARSIGNED(V) ({ typeof (V) _V = -1; _V < 0 ? 1 : 0; })
This creates a temporary variable, _V, that has the same type as V.
As for portability, I don't know. It will work on a two's complement machine (a.k.a. everything your code will ever run on in all probability), and I believe it will work on one's complement and sign-and-magnitude machines as well. As a side note, if you use typeof, you may want to cast -1 to typeof (V) to make it safer (i.e. less likely to trigger warnings).
#define ISVARSIGNED(V) ((-(V) < 0) != ((V) < 0))
Without destroying the variable's value. But doesn't work for 0 values.
What about:
#define ISVARSIGNED(V) (((V)-(V)-1) < 0)
This simple solution has no side effects, including the benefit of only referring to v once (which is important in a macro). We use the gcc extension "typeof" to get the type of v, and then cast -1 to this type:
#define IS_SIGNED_TYPE(v) ((typeof(v))-1 <= 0)
It's <= rather than just < to avoid compiler warnings for some cases (when enabled).
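A hypothetical usage sketch, assuming both GCC's typeof and C11's _Static_assert are available (the cast of -1 is a constant expression, so it works in static assertions too):

#include <stdint.h>

#define IS_SIGNED_TYPE(v) ((typeof(v))-1 <= 0)

int main(void)
{
    int32_t s = 0;
    uint32_t u = 0;
    _Static_assert(IS_SIGNED_TYPE(s), "expected a signed type");
    _Static_assert(!IS_SIGNED_TYPE(u), "expected an unsigned type");
    return 0;
}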
A different approach to all the "make it negative" answers:
#define ISVARSIGNED(V) (~(V^V)<0)
That way there's no need to have special cases for different values of V, since ∀ V ∈ ℤ, V^V = 0.
A distinguishing characteristic of signed/unsigned math is that when you right shift a signed number, the most significant bit is copied. When you shift an unsigned number, the new bits are 0.
#define HIGH_BIT(n) ((n) & (1 << (sizeof(n) * CHAR_BIT - 1)))
#define IS_SIGNED(n) (HIGH_BIT(n) ? HIGH_BIT((n) >> 1) != 0 : HIGH_BIT(~(n) >> 1) != 0)
So basically, this macro uses a conditional expression to determine whether the high bit of a number is set. If it's not, the macro sets it by bitwise negating the number. We can't do an arithmetic negation because -0 == 0. We then shift right by 1 bit and test whether sign extension occurred.
This assumes 2's complement arithmetic, but that's usually a safe assumption.
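A quick sketch of that distinction. Note the standard leaves right-shifting a negative value implementation-defined; on the usual two's-complement targets it sign-extends, which is what this macro relies on:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    int s = -1;            /* all bits set, signed   */
    unsigned u = UINT_MAX; /* all bits set, unsigned */

    /* Implementation-defined, but sign-extending targets copy the
       top bit back in, so this prints -1. */
    printf("signed:   %d\n", s >> 1);

    /* Well-defined for unsigned: the vacated bit is 0, so the
       result is UINT_MAX / 2, no longer all-ones. */
    printf("unsigned: %u\n", u >> 1);
    return 0;
}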
Why on earth do you need it to be a macro? Templates are great for this:
#include <limits>

template <typename T>
bool is_signed(T) {
    static_assert(std::numeric_limits<T>::is_specialized, "Specialize std::numeric_limits<T>");
    return std::numeric_limits<T>::is_signed;
}
Which will work out-of-the-box for all fundamental integral types. It will also fail at compile-time on pointers, which the version using only subtraction and comparison probably won't.
EDIT: Oops, the question requires C. Still, templates are the nice way :P