How to convert a log2(n) based value to n shifts in a #define statement in C?

I have a definition of the following type in C:
#define NUM_OF_CHANNELS 8
I want to refer to this definition and use it also for shift operations, such as
a = b >> 3
The value 3 comes from log2(8) = 3.
So I wish there was something like
#define NUM_OF_CHANNELS_SHIFT LOG2(NUM_OF_CHANNELS)
a = b >> NUM_OF_CHANNELS_SHIFT
But obviously the above definition doesn't work. Is there a nifty way to get this accomplished?

Most commonly, you would just do the defines the other way around:
#define NUM_OF_CHANNELS_SHIFT 3
#define NUM_OF_CHANNELS (1 << NUM_OF_CHANNELS_SHIFT)
This forces you to keep the number of channels a power of two.
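If you really do want to derive the shift count from the channel count instead, a chain of conditional operators also works, because it is still an integer constant expression. A minimal sketch for small powers of two (this LOG2 macro is illustrative, not a standard one):
#define LOG2(n) ((n) == 1 ? 0 : \
                 (n) == 2 ? 1 : \
                 (n) == 4 ? 2 : \
                 (n) == 8 ? 3 : \
                 (n) == 16 ? 4 : \
                 (n) == 32 ? 5 : -1)

#define NUM_OF_CHANNELS 8
#define NUM_OF_CHANNELS_SHIFT LOG2(NUM_OF_CHANNELS)  /* expands to 3 at compile time */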

Answered by #EricPostpischil in a comment:
If b is known to be nonnegative, simply use b / NUM_OF_CHANNELS. Any
decent compiler will optimize it to a shift.
The compiler will translate the following C code into assembly that performs a right shift by 3 bits.
#define NUM_OF_CHANNELS 8
a = b / NUM_OF_CHANNELS;
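The "nonnegative" caveat matters: for signed values, division rounds toward zero while an arithmetic right shift rounds toward negative infinity, so they disagree for negative b. A small illustration (assuming a typical implementation where >> on a negative int is an arithmetic shift):
int b = -1;
int q = b / NUM_OF_CHANNELS;   /* 0: division rounds toward zero */
int s = b >> 3;                /* usually -1: arithmetic shift rounds toward minus infinity */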

Incorrect multiplication of integers in C

A C-coded S-function in Simulink was showing incorrect behaviour and I have managed to narrow down the problem to an incorrect multiplication of integers.
At the start of the code, I have something like:
#define NRBF 21
#define NRBF1 NRBF+1
Then, in a function in the script I have:
void function_name(SimStruct *S, const int_T a)
{
    ...
    int_T base;
    base = a*NRBF1;
    printf("%i\t", a);
    printf("%i\t", NRBF1);
    printf("%i\n", base);
    ...
}
Now, if a=0, NRBF=21, I have (instead of base=0)
0 22 1
If a=1, NRBF=21, I have (as expected base=22)
1 22 22
If a=2, NRBF=21, I have (instead of base=44)
2 22 43
Now, I must say I am a bit baffled. I tried to change the line of the multiplication to
base = a* (int_T)NRBF1;
but it does not solve the problem.
Any help would be greatly appreciated! Thank you!
The problem is here:
You define your macros like this:
#define NRBF 21
#define NRBF1 NRBF+1
When you write this:
base = a*NRBF1;
The preprocessor replaces NRBF1 textually with 21+1 which results in this:
base = a*21+1;
but you intended this:
base = a*(21+1);
Therefore you need to define your macro like this:
#define NRBF1 (NRBF+1)
With the macro expanded, the line looks like:
base = a*NRBF+1;
For a equal to 0, the expression is 0 * 21 + 1 which is 1.
For a equal to 2, the expression is 2 * 21 + 1 which is 43.
The solution is to put parentheses in the macro definition:
#define NRBF1 (NRBF + 1)
This is a good rule for any macro with an expression as its right-hand side.
Remember that macros are just text-substituted into the code.
The calculation is actually correct for what the macro expands to:
0*21+1 = 1
The macro is expanded textually, and * has higher precedence than +. That's why this happens.
A more detailed explanation:
#define NRBF 21
#define NRBF1 NRBF+1
So what is going on:
base = a*NRBF1;
expands to
base = a*NRBF+1
Now when a = 0, base = 1; when a = 1, base = 21+1 = 22, and so on.
The correct way is to wrap it in parentheses:
#define NRBF1 (NRBF+1)
Some more pitfalls:
Suppose you define a macro like this: #define SQR(X) X*X
An expression where operators of the same precedence sit right next to the expansion then becomes problematic.
int i = 100/SQR(10);
Then it will be expanded to
int i = 100/10*10
Operators of the same precedence are evaluated left to right, so it results in i=100.
The solution is the same: #define SQR(X) (X*X)
Also, when passing an expression like SQR(i+1), it expands to i+1*i+1 = 2*i+1 rather than (i+1)*(i+1). So a bit more correct would be
#define SQR(X) ((X)*(X))
Even with that you can't avoid every pitfall if you forget one thing: a macro just expands, it does nothing more.
You can't use the macro like this:
SQR(i++), which expands to ((i++)*(i++)). You are incrementing i twice, which is not what you meant. Moreover, this results in undefined behavior.
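A safer alternative to a function-like macro is a small function (static inline in C99 and later), because its argument is evaluated exactly once; a quick sketch:
static inline int sqr(int x) { return x * x; }

int i = 100 / sqr(10);   /* 1, as expected */
/* sqr(i++) increments i exactly once and has no undefined behavior */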
The define doesn't create a single value 22 but the expression 21 + 1. I wonder whether your problems go away if you change your second #define to
#define NRBF1 (NRBF + 1)

How do you use bitwise operators, masks, to find if a number is a multiple of another number?

So I have been told that this can be done and that bitwise operations and masks can be very useful but I must be missing something in how they work.
I am trying to calculate whether a number, say x, is a multiple of y. If x is a multiple of y great end of story, otherwise I want to increase x to reach the closest multiple of y that is greater than x (so that all of x fits in the result). I have just started learning C and am having difficulty understanding some of these tasks.
Here is what I have tried but when I input numbers such as 5, 9, or 24 I get the following respectively: 0, 4, 4.
if(x&(y-1)){ //if not 0 then x is not a multiple of y
    x = x&~(y-1) + y;
}
Any explanations, examples of the math that is occurring behind the scenes, are greatly appreciated.
EDIT: So to clarify, I somewhat understand the shifting of bits to get whether an item is a multiple. (As was explained in a reply 10100 is a multiple of 101 as it is just shifted over). If I have the number 16, which is 10000, its complement is 01111. How would I use this complement to see if an item is a multiple of 16? Also can someone give a numerical explanation of the code given above? Showing this may help me understand why it does not work. Once I understand why it does not work I will be able to problem solve on my own I believe.
Why would you even think about using bit-wise operations for this? They certainly have their place but this isn't it.
A better method is to simply use something like:
unsigned multGreaterOrEqual(unsigned x, unsigned y) {
    if ((x % y) == 0)
        return x;
    return (x / y + 1) * y;
}
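For example, with y = 16 this behaves as follows (a quick usage sketch of the function above):
unsigned a = multGreaterOrEqual(5, 16);   /* 16: rounded up to the next multiple */
unsigned b = multGreaterOrEqual(32, 16);  /* 32: already a multiple, returned as is */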
In the trivial cases, every number that is a power-of-2 multiple of another number is just that number shifted to the left (this doesn't apply when the shift would alter the sign bit).
For example
10100
is 4 times
101
and
10100
is 2 times
1010
As for other multiples, they would have to be found by combining the outputs of several shifts. You might want to look up some primitive methods of computer division, where division looks roughly like
x = a / b
implemented like
x = 0
remainder = a
while remainder is at least b:
    subtract b from remainder
    add 1 to x
Faster routines try to figure out the higher place values first, skipping lots of subtractions. All of these routines can be done bitwise, but it is a big pain. In the ALU these routines are done bitwise. You might want to look up a digital logic design book for more ideas.
Ok, so I have discovered what the error was in my code and since the majority say that it is impossible to calculate whether a number is a multiple of another number using masks I figured I would share what I have learned.
It is possible! - if you are using the correct data types that is.
The code given above works if y is declared as a const unsigned long, since x, which was being passed in, was also an unsigned long. The key point is not the long or const part but that the numbers are unsigned. A sign bit causes miscalculation, because the highest bit then indicates the sign, and signs can get muddled when performing bitwise operations.
So here is my code if we are looking for multiples of 16:
const unsigned long y = 16; //declared globally in my case
Then an unsigned long is passed to the function which runs the following code:
if(x&(y-1)){ //if not 0 then x is not a multiple of y
    x = (x&~(y-1)) + y; //parentheses matter: + binds tighter than &
}
x will now be rounded up to the nearest multiple of 16.
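Wrapped up as a self-contained function, the same idea looks like the sketch below; the name is just illustrative, and it only works when y is a power of two:
unsigned long round_up_pow2(unsigned long x, unsigned long y) {
    if (x & (y - 1))                 /* nonzero: x is not a multiple of y */
        x = (x & ~(y - 1)) + y;      /* clear the low bits, then add y   */
    return x;
}
/* round_up_pow2(5, 16) == 16, round_up_pow2(32, 16) == 32 */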

Notation for fixed point representation

I'm looking for a commonly understandable notation to define a fixed point number representation.
The notation should be able to define both a power-of-two factor (using fractional bits) and a generic factor (sometimes I'm forced to use this, though less efficient). And also an optional offset should be defined.
I already know some possible notations, but all of them seem to be constrained to specific applications.
For example the Simulink notation would perfectly fit my needs, but it's known only in the Simulink world. Furthermore the overloaded usage of the fixdt() function is not so readable.
TI defines the really compact Q format, but the sign is implicit, and it doesn't handle a generic factor (i.e. one that is not a power of two).
ASAM uses a generic 6-coefficient rational function with 2nd-degree numerator and denominator polynomials (COMPU_METHOD). Very generic, but not so friendly.
See also the Wikipedia discussion.
The question is only about the notation (not efficiency of the representation nor fixed-point manipulation). So it's a matter of code readability, maintainability and testability.
Ah, yes. Having good naming annotations is absolutely critical to avoiding bugs with fixed point arithmetic. I use an explicit version of the Q notation which handles any split between M integer bits and N fractional bits by appending _Q<M>_<N> to the name of the variable. This also makes it possible to include the signedness. There are no run-time performance penalties for this. Example:
uint8_t length_Q2_6; // unsigned, 2 bit integer, 6 bit fraction
int32_t sensor_calibration_Q10_21; // signed (1 bit), 10 bit integer, 21 bit fraction.
/*
* Calculations with the bc program (with '-l' argument):
*
* sqrt(3)
* 1.73205080756887729352
*
* obase=16
* sqrt(3)
* 1.BB67AE8584CAA73B0
*/
const uint32_t SQRT_3_Q7_25 = 1 << 25 | 0xBB67AE85U >> 7; /* Unsigned shift super important here! */
In case someone has not fully understood why such annotation is extremely important:
can you spot the bug, if there is one, in the following two examples?
Example 1:
speed_fraction = fix32_udiv(25, speed_percent << 25, 100 << 25);
squared_speed = fix32_umul(25, speed_fraction, speed_fraction);
tmp1 = fix32_umul(25, squared_speed, SQRT_3);
tmp2 = fix32_umul(12, tmp1 >> (25-12), motor_volt << 12);
Example 2:
speed_fraction_Q7_25 = fix32_udiv(25, speed_percent << 25, 100 << 25);
squared_speed_Q7_25 = fix32_umul(25, speed_fraction_Q7_25, speed_fraction_Q7_25);
tmp1_Q7_25 = fix32_umul(25, squared_speed_Q7_25, SQRT_3_Q1_31);
tmp2_Q20_12 = fix32_umul(12, tmp1_Q7_25 >> (25-12), motor_volt << 12);
Imagine if one file contained #define SQRT_3 (1 << 25 | 0xBB67AE85U >> 7) and another file contained #define SQRT_3 (1 << 31 | 0xBB67AE85U >> 1) and code was moved between those files. For example 1 this has a high chance of going unnoticed and introduce the bug that is present in example 2 which here is done deliberately and has a zero chance of being done accidentally.
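The fix32_umul and fix32_udiv helpers are not shown here; one plausible shape, assuming the first argument is the number of fraction bits of the operands and of the result, would be this sketch:
#include <stdint.h>

static inline uint32_t fix32_umul(unsigned frac_bits, uint32_t a, uint32_t b)
{
    return (uint32_t)(((uint64_t)a * b) >> frac_bits);  /* widen, multiply, rescale */
}

static inline uint32_t fix32_udiv(unsigned frac_bits, uint32_t a, uint32_t b)
{
    return (uint32_t)(((uint64_t)a << frac_bits) / b);  /* pre-scale, then divide */
}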
Actually the Q format is the most used representation in commercial applications: you use it when you need to deal with fractional numbers FAST and your processor does not have an FPU (floating point unit), so it cannot use the float and double data types natively - it has to emulate instructions for them, which is very expensive.
Usually you use the Q format to represent only the fractional part; though this is not a must, you get more precision for your representation that way. Here's what you need to consider:
the number of bits you use (Q15 uses 16-bit data types, usually short int)
the first bit is the sign bit (out of 16 bits you are left with 15 for the data value)
the rest of the bits are used to store the fractional part of your number.
since you are representing fractional numbers, your value is somewhere in [0,1)
you can choose to use some bits for the integer part as well, but you would lose precision - e.g. if you wanted to represent 3.3 in Q format, you would need 1 bit for the sign, 2 bits for the integer part, and would be left with 13 bits for the fractional part (assuming a 16-bit representation) -> this format is called 2Q13
Example: Say you want to represent 0.3 in Q15 format; you apply the Rule of Three:
1 = 2^15 = 32768 = 0x8000
0.3 = X
-------------
X = 0.3*32768 = 9830 = 0x2666
You lost precision by doing this but at least the computation is fast now.
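In C, the same conversion can be written as a pair of tiny helpers; a sketch with illustrative names:
#include <stdint.h>

static inline int16_t float_to_q15(double v)  { return (int16_t)(v * 32768.0); }
static inline double  q15_to_float(int16_t q) { return q / 32768.0; }

/* float_to_q15(0.3) == 9830 (0x2666); q15_to_float(9830) is about 0.29999 */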
In C, you can't use a user defined type like a builtin one. If you want to do that, you need to use C++. In that language you can define a class for your fixed point type, overload all the arithmetic operators (+, -, *, /, %, +=, -=, *=, /=, %=, --, ++, cast to other types), so that usage of the instances of this class really behave like the builtin types.
In C, you need to do what you want explicitly. There are two basic approaches.
Approach 1: Do the fixed point adjustments in the user code.
This is overhead-free, but you need to remember to do the correct adjustments. I believe it is easiest to just add the number of past-point bits to the end of the variable name, because the type system won't do you much good, even if you typedef'd all the point positions you use. Here is an example:
int64_t a_7 = (int64_t)(7.3*(1<<7)); //a variable with 7 past point bits
int64_t b_5 = (int64_t)(3.78*(1<<5)); //a variable with 5 past point bits
int64_t sum_7 = a_7 + (b_5 << 2); //to add those two variables, we need to adjust the point position in b
int64_t product_12 = a_7 * b_5; //the product produces a number with 12 past point bits
Of course, this is a lot of hassle, but at least you can easily check at every point whether the point adjustment is correct.
Approach 2: Define a struct for your fixed point numbers and encapsulate the arithmetic on it in a bunch of functions. Like this:
typedef struct FixedPoint {
    int64_t data;
    uint8_t pointPosition;
} FixedPoint;

FixedPoint fixed_add(FixedPoint a, FixedPoint b) {
    if (a.pointPosition >= b.pointPosition) {
        return (FixedPoint){
            .data = a.data + (b.data << (a.pointPosition - b.pointPosition)),
            .pointPosition = a.pointPosition
        };
    } else {
        return (FixedPoint){
            .data = (a.data << (b.pointPosition - a.pointPosition)) + b.data,
            .pointPosition = b.pointPosition
        };
    }
}
This approach is a bit cleaner in the usage, however, it introduces significant overhead. That overhead consists of:
The function calls.
The copying of the structs for parameter and result passing, or the pointer dereferences if you use pointers.
The need to calculate the point adjustments at runtime.
This is pretty much similar to the overhead of a C++ class without templates. Using templates would move some decisions back to compile time, at the cost of losing flexibility.
This object based approach is probably the most flexible one, and it allows you to add support for non-binary point positions in a transparent way.
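For completeness, a multiplication routine in the same style could look like the sketch below; the fraction bits of the factors simply add up, and overflow handling is deliberately omitted:
FixedPoint fixed_mul(FixedPoint a, FixedPoint b) {
    return (FixedPoint){
        .data = a.data * b.data,                                     /* may overflow int64_t */
        .pointPosition = (uint8_t)(a.pointPosition + b.pointPosition)
    };
}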

What is the '^' operator used for in C other than to check if two numbers are equal?

What are the purposes of the ^ operator in C other than to check if two numbers are equal? Also, why is it used for equality instead of == in the first place?
The ^ operator is the bitwise XOR operator, although I have never seen it used for checking equality.
x ^ y will evaluate to 0 exactly when x == y.
The XOR operator is used in cryptography (en- and decrypting text with a pseudo-random bit stream), in random number generators (like the Mersenne Twister), and in the in-place XOR swap and other bit twiddling hacks:
int a = ...;
int b = ...;
// swap a and b
a ^= b;
b ^= a;
a ^= b;
(useful if you don't have space for another variable like on CPUs with few registers).
^ is the Bitwise XOR.
A bitwise operation operates on one or more bit patterns or binary numerals at the level of their individual bits. It is a fast, primitive action directly supported by the processor, and is used to manipulate values for comparisons and calculations. (source: Bitwise Operation)
The XOR operator takes two operands and returns 1 if exactly one of the operands is 1.
So a bitwise XOR of two numbers is the result of these bit-by-bit operations.
For example:
00000110 // A = 6
00001010 // B = 10
00001100 // A ^ B = 12
^ is the bitwise XOR operator in C. It can be used for bit toggling and to swap two numbers:
x^=y, y^=x, x^=y;
and can be used to find max of two numbers;
int max(int x, int y)
{
return x ^ ((x ^ y) & -(x < y));
}
It can be used to selectively flip bits. For example, to toggle the value of bit #3 in an integer, you can say x = x ^ (1<<3) or, more compactly, x = x^0x08 or even x^=8 (although now that I look at it, the last form looks like some sort of obscene emoticon and should probably be avoided :)
It should never be used in a test for equality (in C), except in tricky code meant to test undergrads' understanding of the ^ operator. (In assembly, there may be speed advantages on some architectures.)
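A small sketch of the bit-flipping use described above (the values are only illustrative):
unsigned flags = 0x05;   /* 0000 0101 */
flags ^= 1u << 3;        /* toggle bit 3: 0000 1101 */
flags ^= 1u << 3;        /* toggle again: back to 0000 0101 */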
It's the exclusive or operator. It will do a bitwise exclusive or of the two arguments. If the numbers are equal, the result will be 0; if they're not equal, the bits that differ between the two arguments will be set.
You generally wouldn't use it instead of ==; you would use it only when you need to know which bits are different.
Two real usage examples from an embedded system I worked on:
In a status message generating function, where one of the words was supposed to be a passthrough of an external device's status word. There was a disconnect between the device behavior and the message spec - one thought bit0 meant 'error' while the other thought it meant 'OK'.
statuswords[3] = devicestatus ^ 1; //invert B0
The 16-bit target processor was terribly slow to branch, so in an inner loop if (sign(A)!=sign(B)) B=0; was coded as:
B*=~(A^B)>>15;
which took 4 cycles rather than 8, and does the same thing: sets B to 0 iff the sign bits are different.
In many general cases we might use '^' as a replacement for '==', but that doesn't directly give a result for being equal or not. Instead, it compares the given variables bit by bit, sets a result for each bit individually, and finally yields a result made up of those resulting bits.

Can the C preprocessor perform integer arithmetic?

As the question says, is the C preprocessor able to do it?
E.g.:
#define PI 3.1416
#define OP PI/100
#define OP2 PI%100
Is there any way OP and/or OP2 get calculated in the preprocessing phase?
Integer arithmetic? Run the following program to find out:
#include <stdio.h>
int main() {
#if 1 + 1 == 2
    printf("1+1==2\n");
#endif
#if 1 + 1 == 3
    printf("1+1==3\n");
#endif
}
Answer is "yes", there is a way to make the preprocessor perform integer arithmetic, which is to use it in a preprocessor condition.
Note however that your examples are not integer arithmetic. I just checked, and gcc's preprocessor rejects float comparisons in #if. The standard requires the controlling expression of #if to be an integer constant expression, so floating-point arithmetic is not allowed there.
Regular macro expansion does not evaluate integer expressions, it leaves it to the compiler, as can be seen by preprocessing (-E in gcc) the following:
#define ONEPLUSONE (1 + 1)
#if ONEPLUSONE == 2
int i = ONEPLUSONE;
#endif
Result is int i = (1 + 1); (plus probably some stuff to indicate source file names and line numbers and such).
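If you want the preprocessor itself to check the result rather than hand the expression to the compiler, you can combine #if with #error; a minimal sketch:
#define ONEPLUSONE (1 + 1)
#if ONEPLUSONE != 2
#error "preprocessor arithmetic went wrong"
#endif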
The code you wrote doesn't actually make the preprocessor do any calculation. A #define does simple text replacement, so with this defined:
#define PI 3.1416
#define OP PI/100
This code:
if (OP == x) { ... }
becomes
if (3.1416/100 == x) { ... }
and then it gets compiled. The compiler in turn may choose to take such an expression and calculate it at compile time and produce a code equivalent to this:
if (0.031416 == x) { ... }
But this is the compiler, not the preprocessor.
To answer your question: yes, the preprocessor CAN do some arithmetic, as long as it is integer arithmetic inside a conditional directive. This can be seen when you write something like this:
#if (314/100 == 3)
printf("yo");
#elif (3+3 == 6)
printf("hey");
#endif
YES, I mean: it can do arithmetic :)
As demonstrated in 99 bottles of beer.
Yes, it can be done with the Boost Preprocessor. And it is compatible with pure C, so you can use it in C programs with C-only compilation. Your code involves floating point numbers though, so I think that needs to be done indirectly.
#include <boost/preprocessor/arithmetic/div.hpp>
BOOST_PP_DIV(11, 5) // expands to 2
#define KB 1024
#define HKB BOOST_PP_DIV(KB,2)
#define REM(A,B) BOOST_PP_SUB(A, BOOST_PP_MUL(B, BOOST_PP_DIV(A,B)))
#define RKB REM(KB,2)
int div = HKB;
int rem = RKB;
This preprocesses to (check with gcc -E)
int div = 512;
int rem = 0;
Thanks to this thread.
Yes.
I can't believe that no one has yet linked to a certain obfuscated C contest winner. The guy implemented an ALU in the preprocessor via recursive includes. Here is the implementation, and here is something of an explanation.
Now, that said, you don't want to do what that guy did. It's fun and all, but look at the compile times in his hint file (not to mention the fact that the resulting code is unmaintainable). More commonly, people use the pre-processor strictly for text replacement, and evaluation of constant integer arithmetic happens either at compile time or run time.
As others noted however, you can do some arithmetic in #if statements.
Be careful when doing arithmetic: add parentheses.
#define SIZE4 4
#define SIZE8 8
#define TOTALSIZE SIZE4 + SIZE8
If you ever use something like:
unsigned int i = TOTALSIZE/4;
and expect i to be 3, you would get 4 + 2 = 6 instead.
Add parentheses:
#define TOTALSIZE (SIZE4 + SIZE8)
