What is the proper way to divide and round down to the lower number in a macro?
I am trying to do this:
#define TOTAL_NUM_FFTS (int) NO_SAMPLES / FFT_SIZE
but I am getting a warning about an incompatible redefinition of that macro, and the compiler restates the line as:
#define TOTAL_NUM_FFTS(int) NO_SAMPLES / FFT_SIZE
i.e. without the space between TOTAL_NUM_FFTS and (int).
Thanks for your help!
#define TOTAL_NUM_FFTS ((int) (NO_SAMPLES) / (FFT_SIZE))
The preprocessor thinks (int) is a parameter to the macro.
When defining macros, use as many parentheses as you can. For example, think about what will happen if someone defines FFT_SIZE as 2+3. Instead of dividing by 5, you'd be dividing by 2 and then adding 3.
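To see why the parentheses matter, here is a minimal sketch of the difference (the values of NO_SAMPLES and FFT_SIZE are made up purely for illustration):
#define NO_SAMPLES 1024
#define FFT_SIZE   2+3   /* hypothetical, badly written definition */

/* Unparenthesized: expands to (int) 1024 / 2+3, i.e. 512 + 3 = 515 */
#define TOTAL_NUM_FFTS_BAD  (int) NO_SAMPLES / FFT_SIZE

/* Parenthesized: expands to ((int) (1024) / (2+3)), i.e. 1024 / 5 = 204 */
#define TOTAL_NUM_FFTS_GOOD ((int) (NO_SAMPLES) / (FFT_SIZE))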
Several things to check:
always properly parenthesize your macros (and macro arguments), as Ilya mentioned
make sure there isn't a duplicate (or near-duplicate) definition of the macro somewhere else. The error message should tell you exactly where; if it doesn't, grep or similar will help (maybe there's an older version of your header hiding in some other directory of the include path?).
make sure your header file is protected against multiple inclusion with include guards. I don't think this is what's happening to you, since an identical macro redefinition is supposed to be accepted by a C or C++ compiler, but you should still make sure your header has them.
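For reference, a minimal include-guard sketch (the guard name and header contents are just examples, not taken from the question):
/* my_header.h */
#ifndef MY_HEADER_H
#define MY_HEADER_H

#define TOTAL_NUM_FFTS ((int) (NO_SAMPLES) / (FFT_SIZE))

#endif /* MY_HEADER_H */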
For dividing, you can use the pow(base, exponent) function available in the math.h library.
Suppose you have to implement something like
#define C A/B //B!=0
then instead of it use the following:
#define C A*pow(B,-1)
Related
When I compile my C project with the make command, I get this error:
Error: #47-D: incompatible redefinition of macro "MACRO_NAME"
It looks like MACRO_NAME is already defined in one of the header files, but I want to redefine it or hardcode a new value for MACRO_NAME.
How to remove this error?
There is no clean way to define this macro so that it reliably and predictably avoids being used with one meaning where the other meaning was intended.
If you use a mechanism with #undef, you run the risk of undefining the other meaning, defining it to your meaning, and then ending up with code that expects the other meaning but sees and uses yours.
The only way to achieve reliability and predictability is to make sure that code which expects one meaning does not include (directly or indirectly) the header which defines the other meaning.
You can do so by
a) defining the macro in such a way that neither definition can be made when the other one is already visible. To do so, in both places:
#ifdef MACRO_NAME
#error Separation of the two meanings of MACRO_NAME failed!
/* the other definition of MACRO_NAME is already visible */
#endif
#define MACRO_NAME MyMeaning
b) make sure that no code includes both definitions
Actually, a) is only a technical aid to ensure b). If you can be sure that no code file ever includes both definitions, then you have no problem: you will not get the #error from a), and you will not run into code using the wrong definition. Good. But how do you know? How can you be sure you do not have the problem, even as you change code, even as your colleagues change code? With a), you are told clearly whenever you get caught by the redefinition trap. If you use #undef instead, you have not prevented the problem, only hidden it and made it harder to debug.
c) in the case that you can only influence one of the two definitions, i.e. the other one is from another supplier, the best way is to change the name of your own definition. Whatever effort that causes in your code, it will be less than getting caught by unintended redefinition problems.
d) in the case that you cannot influence either of the two definitions (which is of course NOT the case you are asking about), you have to separate the code files into two groups: those that use one definition and not the other, and those that use the other definition and only that one.
Use #undef to redefine MACRO_NAME in the header file where it is needed:
#ifdef MACRO_NAME
#undef MACRO_NAME
#endif
#define MACRO_NAME 100
I am trying to implement the standard xor swap algorithm as a C macro.
I have two versions of the macro. One that doesn't worry about the types and one that attempts to cast everything to an integer.
Here are the macros:
#define XOR_SWAP(a,b) ((a)^=(b),(b)^=(a),(a)^=(b))
#define LVALUE_CAST(type,value) (*((type)*)&(value))
#define XOR_CAST_SWAP(type,a,b) (LVALUE_CAST((type),(a))=(type)(a)^(type)(b),LVALUE_CAST((type),(b))=(type)(b)^(type)(a),LVALUE_CAST((type),(a))=(type)(a)^(type)(b))
I know it's a pain to read the one with a cast, but your efforts are appreciated.
The error that I'm getting is:
some_file.c(260,3): expected expression before ')' token
Now, I'm looking at it but I still can't figure out where my problem lies.
I've even used the -save-temps option to capture the preprocessor output and the line looks like this:
((*(((intptr_t))*)&((Block1)))=(intptr_t)(Block1)^(intptr_t)(Block2),(*(((intptr_t))*)&((Block2)))=(intptr_t)(Block2)^(intptr_t)(Block1),(*(((intptr_t))*)&((Block1)))=(intptr_t)(Block1)^(intptr_t)(Block2));
Before anybody mentions it, I've since realized that I should probably make this a function instead of a macro. Or even better, just use that extra variable to do the swap, it isn't hard.
But I want to know why this macro doesn't work. The brackets seem to match exactly as I wanted them to, so why is it complaining?
The LVALUE_CAST is something I took from Jens Gustedt's answer to this SO question.
Update:
The macro call that produces that preprocessor output looks like this:
XOR_CAST_SWAP(intptr_t, Block1, Block2);
I don't believe you can wrap types in arbitrary levels of parentheses.* So this compiles fine:
((*(intptr_t*)&((Block1)))=(intptr_t)(Block1)^(intptr_t)(Block2),(*(intptr_t*)&((Block2)))=(intptr_t)(Block2)^(intptr_t)(Block1),(*(intptr_t*)&((Block1)))=(intptr_t)(Block1)^(intptr_t)(Block2));
* Disclaimer: this is purely empirical! I don't intend to peruse the standard to figure out what the details are...
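Based on that observation, one possible fix (a sketch under the assumption that the extra parentheses around the type are the only problem, not tested against the original code) is to pass the type name bare, both in the helper and in the swap macro:
#include <stdint.h>

/* A type name inside a cast cannot carry extra parentheses, so take it unwrapped. */
#define LVALUE_CAST(type, value)  (*(type *)&(value))
#define XOR_CAST_SWAP(type, a, b)                     \
    (LVALUE_CAST(type, a) = (type)(a) ^ (type)(b),    \
     LVALUE_CAST(type, b) = (type)(b) ^ (type)(a),    \
     LVALUE_CAST(type, a) = (type)(a) ^ (type)(b))

/* Usage as in the question: XOR_CAST_SWAP(intptr_t, Block1, Block2); */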
I have some experience in programming in C but I would not dare to call myself proficient.
Recently, I encountered the following macro:
#define CONST(x) (x)
I find it typically used in expressions like for instance:
double x, y;
x = CONST(2.0)*y;
Completely baffled by the point of this macro, I extensively researched the advantages/disadvantages and properties of macros, but I still cannot figure out what the use of this particular macro would be. Am I missing something?
As presented in the question, you are right that the macro does nothing.
This looks like some artificial structure imposed by whoever wrote that code, maybe to make it abundantly clear where the constants are, and be able to search for them? I could see the advantage in having searchable constants, but this is not the best way to achieve that goal.
It's also possible that this was part of some other macro scheme that either never got implemented or was only partially removed.
Some (old) C compilers do not support the const keyword, and this macro is most probably a remnant of a more elaborate set of macros that handled different compilers. Used as in x = CONST(2.0)*y;, though, it makes no sense.
You can check this section from the Autoconf documentation for more details.
EDIT: Another purpose of this macro might be custom preprocessing (for extracting and/or replacing certain constants for example), like Qt Framework's Meta Object Compiler does.
There is absolutely no benefit of that macro and whoever wrote it must be confused. The code is completely equivalent to x = 2.0*y;.
Well, this kind of macro can actually be useful when there is a need to work around macro expansion.
A typical example of such need is the stringification macro. Refer to the following question for an example : C Preprocessor, Stringify the result of a macro
Now, in your specific case, I don't see the benefit apart from extreme documentation or code-parsing purposes.
Another use could be to reserve those values for future function invocations, something like this:
/* #define CONST(x) (x) */
#define CONST(x) some_function(x)
// ...
double x, y;
x = CONST(2.0)*y; // x = some_function(2.0)*y;
Another benefit of this macro shows up in something like this:
result=CONST(number+number)*2;
or something related to comparisons
result=CONST(number>0)*2;
If there is a problem with this macro, it is probably the name. This "CONST" thing isn't related to constants but to something else. It would be worth looking at the rest of the code to find out why the author called it CONST.
This macro does have the effect of wrapping parentheses around x during macro expansion.
I'm guessing someone is trying to allow for something along the lines of
CONST(3+2)*y
which, without the parens, would become
3+2*y
but with the parens becomes
(3+2)*y
I seem to recall that we had the need for something like this in a previous development lifetime.
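A small sketch of that difference (the unparenthesized variant CONST_BARE is hypothetical, added only for contrast):
#include <stdio.h>

#define CONST(x)       (x)   /* as in the question: parenthesizes its argument */
#define CONST_BARE(x)  x     /* hypothetical variant without the parentheses */

int main(void)
{
    double y = 10.0;
    printf("%f\n", CONST(3 + 2) * y);       /* (3+2)*y -> 50.0 */
    printf("%f\n", CONST_BARE(3 + 2) * y);  /* 3 + 2*y -> 23.0 */
    return 0;
}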
I wrote the following macro:
#define m[a,b] m.values[m.rows*(a)+(b)]
However gcc gives me this error:
error: missing whitespace after the macro name
What is wrong and how do I fix it?
You cannot use [ and ] as delimiters for macro arguments; you must use ( and ). Try this:
#define m(a,b) m.values[m.rows*(a)+(b)]
But note that defining the name of a macro as the name of an existing variable may be confusing. You should avoid shadowing names like this.
I'm not familiar with any C preprocessor syntax that uses square brackets. Change
#define m[a,b] m.values[m.rows*(a)+(b)]
to
#define m(a,b) m.values[m.rows*(a)+(b)]
And it should work.
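A hedged usage sketch of that form (the struct layout is assumed, not taken from the question, and the macro is renamed so it does not shadow the variable m):
#include <stdio.h>

struct matrix {
    int rows;
    double values[9];
};

/* Row-major element access, passing the matrix as a macro argument. */
#define MAT_AT(m, a, b)  ((m).values[(m).rows * (a) + (b)])

int main(void)
{
    struct matrix m = { .rows = 3 };
    MAT_AT(m, 1, 2) = 42.0;           /* write element (1,2) */
    printf("%f\n", MAT_AT(m, 1, 2));  /* read it back */
    return 0;
}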
You cannot have such a macro that expands when you supply arguments in square brackets. Wherever you got the idea that macros are a smart text-substituting tool, it's just the other way round: macros are an extremely obtuse and stupid text-substitution mechanism. What you're trying to do with a macro is absolutely unwarranted - just write a named function.
In a previous question, what I thought was a good answer was voted down for its suggested use of macros:
#define radian2degree(a) (a * 57.295779513082)
#define degree2radian(a) (a * 0.017453292519)
instead of inline functions. Please excuse the newbie question, but what is so evil about macros in this case?
Most of the other answers discuss why macros are evil including how your example has a common macro use flaw. Here's Stroustrup's take: http://www.research.att.com/~bs/bs_faq2.html#macro
But your question was asking what macros are still good for. There are some things where macros are better than inline functions, and that's where you're doing things that simply can't be done with inline functions, such as:
token pasting
dealing with line numbers or such (as for creating error messages in assert())
dealing with things that aren't expressions (for example, many implementations of offsetof() use a type name to create a cast)
the macro to get a count of array elements (can't do it with a function, as the array name decays to a pointer too easily; see the sketch after this list)
creating 'type polymorphic' function-like things in C where templates aren't available
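For instance, the array-element-count macro mentioned above is commonly written along these lines (a well-known idiom, shown here as a sketch rather than anything from the original answer):
#include <stdio.h>

#define ARRAY_COUNT(a)  (sizeof(a) / sizeof((a)[0]))

int main(void)
{
    int samples[10];
    printf("%zu\n", ARRAY_COUNT(samples));  /* prints 10 */
    return 0;
}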
But with a language that has inline functions, the more common uses of macros shouldn't be necessary. I'm even reluctant to use macros when I'm dealing with a C compiler that doesn't support inline functions. And I try not to use them to create type-agnostic functions if at all possible (creating several functions with a type indicator as a part of the name instead).
I've also moved to using enums for named numeric constants instead of #define.
There are a couple of strictly evil things about macros.
They're text processing, and aren't scoped. If you #define foo 1, then any subsequent use of foo as an identifier will fail. This can lead to odd compilation errors and hard-to-find runtime bugs.
They don't take arguments in the normal sense. You can write a function that will take two int values and return the maximum, because the arguments will be evaluated once and the values used thereafter. You can't write a macro to do that, because it will evaluate at least one argument twice, and fail with something like max(x++, --y).
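A small illustration of that double-evaluation trap (the macro name and values are just examples):
#include <stdio.h>

#define MAX(a, b)  ((a) > (b) ? (a) : (b))

int main(void)
{
    int x = 1, y = 5;
    int m = MAX(x++, --y);  /* expands to ((x++) > (--y) ? (x++) : (--y)) */
    /* y is decremented twice: once in the comparison and once in the result. */
    printf("m=%d x=%d y=%d\n", m, x, y);  /* prints m=3 x=2 y=3 */
    return 0;
}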
There are also common pitfalls. It's hard to get multiple statements right in them, and they require a lot of possibly superfluous parentheses.
In your case, you need parentheses:
#define radian2degree(a) (a * 57.295779513082)
needs to be
#define radian2degree(a) ((a) * 57.295779513082)
and you're still stepping on anybody who writes a function radian2degree in some inner scope, confident that that definition will work in its own scope.
For this specific macro, if I use it as follows:
int x=1;
x = radian2degree(x);
float y=1;
y = radian2degree(y);
there would be no type checking, and x and y will contain different values.
Furthermore, the following code
float x=1, y=2;
float z = radian2degree(x+y);
will not do what you think, since it will translate to
float z = x+y*0.017453292519;
instead of
float z = (x+y)*0.017453292519;
which is the expected result.
These are just a few examples of the misbehavior and misuse macros can lead to.
Edit
You can see additional discussion about this here.
If possible, always use inline functions. These are type-safe and cannot easily be redefined.
Defines can be redefined or undefined, and there is no type checking.
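For the example from the question, an inline-function alternative might look like this (a sketch, assuming double is the intended type):
/* Type-checked, single-evaluation alternative to the macro. */
static inline double radian2degree(double a)
{
    return a * 57.295779513082;
}

/* radian2degree(x + y) now evaluates its argument exactly once, as expected. */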
Macros are relatively often abused, and one can easily make mistakes using them, as shown by your example. Take the expression radian2degree(1 + 1):
with the macro it will expand to 1 + 1 * 57.29... = 58.29...
with a function it will be what you want it to be, namely (1 + 1) * 57.29... = ...
More generally, macros are evil because they look like functions, so they trick you into using them just like functions, but they have subtle rules of their own. In this case, the correct way to write it would be (notice the parentheses around a):
#define radian2degree(a) ((a) * 57.295779513082)
But you should stick to inline functions. See these links from the C++ FAQ Lite for more examples of evil macros and their subtleties:
inline vs. macros
macros containing if
macros with multiple lines
macros used to paste two tokens together
The compiler's preprocessor is a finicky thing, and therefore a terrible candidate for clever tricks. As others have pointed out, it's easy for the compiler to misunderstand your intention with the macro, and it's easy for you to misunderstand what the macro will actually do, but most importantly, you can't step into macros in the debugger!
Macros are evil because you may end up passing more than a variable or a scalar to them, and this can result in unwanted behavior (define a max macro to determine the max of a and b, but pass a++ and b++ to the macro and see what happens).
If your function is going to be inlined anyway, there is no performance difference between a function and a macro. However, there are several usability differences between a function and a macro, all of which favor using a function.
If you build the macro correctly, there is no problem. But if you use a function, the compiler will do it correctly for you every time. So using a function makes it harder to write bad code.