I am developing OpenCL code on Snow Leopard and understand that the OpenCL just-in-time compilation is done by Clang/LLVM. Is the C preprocessor used at all? Is there a way to set preprocessing definitions with the compiler? What definitions exist?
I would like the code to be aware of whether it is being compiled for the CPU or the GPU so that, for instance, I can use printf statements for debugging.
The clBuildProgram API takes compiler arguments (the const char *options parameter).
-D MYMACRO is understood, as is -D MYMACRO=value.
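For example, a minimal host-side sketch (the program and device handles and the DEBUG_PRINT macro name are my own assumptions for illustration):

const char *options = "-D DEBUG_PRINT -D BLOCK_SIZE=64";
cl_int err = clBuildProgram(program, 1, &device, options, NULL, NULL);

and inside the kernel source:

#ifdef DEBUG_PRINT
printf("kernel running\n");   /* printf availability depends on the device and OpenCL version */
#endif

Since there is no standard predefined macro that distinguishes CPU from GPU, one approach is to query CL_DEVICE_TYPE with clGetDeviceInfo on the host and pass your own flag (e.g. -D ON_CPU) accordingly.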
As for predefined macros, see the OpenCL specification for the full list (Section 6.9). A non-exhaustive selection:
__FILE__
__LINE__
__OPENCL_VERSION__
You can also use the OpenCL "preprocessor" to define macros (as in C):
#define dot3(x1, y1, z1, x2, y2, z2) ((x1)*(x2) + (y1)*(y2) + (z1)*(z2))
(Notice the parentheses; they are important because any expression can be passed in for the parameters and it will still be evaluated correctly.)
This can help improve the speed of your application.
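For instance, a hypothetical usage (vectors a and b are assumed to exist); the parentheses ensure an argument such as a.x + 1.0f is evaluated as a whole:

float d = dot3(a.x + 1.0f, a.y, a.z, b.x, b.y, b.z);
/* expands to ((a.x + 1.0f)*(b.x) + (a.y)*(b.y) + (a.z)*(b.z)) */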
Related
I've taken over a project written in C, and it contains lots of macros.
I want to write a new macro that checks whether a given macro is defined or not.
But the # symbol already has a special meaning inside a macro. How can I fix my code? Thanks :)
#define CHECK_MACRO( macro )\
#ifdef macro
printf("defined "#macro"\n");\
#else
printf("not defined "#macro"\n");\
#endif
You cannot use preprocessor conditional directives inside a macro. Generally speaking, the solution is to turn that inside out: use conditional directives to define the macro differently in different cases. That will not work for a generic macro-test macro such as you propose, however, and it also is limited by the fact that it determines whether the condition holds at the point where the macro is defined, not the point where it is used.
You may perhaps take consolation in the fact that this was never going to work anyway, as a result of the fact that the arguments to a function-like macro are expanded before being substituted into the macro's replacement text (except in a couple of special cases that don't apply to the key part of your code).
There are alternatives that could work if the possible values of all macros of interest are limited to short lists of tokens that may appear as or in identifiers. There are other alternatives that might be adequate if you can choose a small subset of macros that you're interested in testing. There are no alternatives that do what you propose in its full generality, unless you count writing the conditional compilation directives directly, without a macro, which in fact is the usual way of going about it.
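For a concrete (but non-generic) illustration of turning it inside out, assuming the macro of interest is FOO (the REPORT_FOO name is mine):

#ifdef FOO
#define REPORT_FOO() printf("defined FOO\n")
#else
#define REPORT_FOO() printf("not defined FOO\n")
#endif

Note that this records whether FOO was defined at the point where REPORT_FOO was defined, not at the point where it is called.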
Side note: the m4 preprocessor (history/legacy).
In the early days of Unix, the m4 processor was used for code generation. It has enhanced features compared to cpp (or maybe cpp is a scaled-down version of m4). In particular, it has better support for multi-line macros. It continues to be used in various packages.
It is worth mentioning that adding a code-generation step will make your code more complex to maintain and debug.
For example: a.m4
define(`CHECK_MACRO', `
#ifdef $1
printf ("defined #$1\n") ;
#else
printf ("undef #$1\n") ;
#endif
')
#include <stdio.h>
void main(void)
{
CHECK_MACRO(FOO) ;
CHECK_MACRO(BAR) ;
}
Then build/run
m4 a.m4 > a.c
cc a.c
./a.out
undef #FOO
undef #BAR
cc a.c -DFOO
./a.out
defined #FOO
undef #BAR
Usually, the generation was integrated into the Makefile with a rule:
%.c: %.m4
	m4 -s $< > $@
The -s option helps track source line numbers (compile errors will report line numbers matching the a.m4 source file).
I have a long formula, like the following:
float a = sin(b)*cos(c)+sin(c+d)*sin(d)....
Is there a way to use s instead of sin in C, to shorten the formula, without affecting the running time?
There are at least three options for using s for sin:
Use a preprocessor macro:
#define s(x) (sin(x))
#define c(x) (cos(x))
float a = s(b)*c(c)+s(c+d)*c(d)....
#undef c
#undef s
Note that the macro definitions are immediately removed with #undef to prevent them from affecting subsequent code. Also, you should be aware of the basics of preprocessor macro substitution, noting the fact that the first c in c(c) will be expanded but the second c will not, since the function-like macro c(x) is expanded only where c is followed by (.
This solution will have no effect on run time.
Use an inline function:
static inline double s(double x) { return sin(x); }
static inline double c(double x) { return cos(x); }
With a good compiler, this will have no effect on run time, since the compiler should replace a call to s or c with a direct call to sin or cos, having the same result as the original code. Unfortunately, in this case, the c function will conflict with the c object you show in your sample code. You will need to change one of the names.
Use function pointers:
static double (* const s)(double) = sin;
static double (* const c)(double) = cos;
With a good compiler, this also will have no effect on run time, although I suspect a few more compilers might fail to optimize code using this solution than the previous solution. Again, you will have the name conflict with c. Note that using function pointers creates a direct call to the sin and cos functions, bypassing any macros that the C implementation might have defined for them. (C implementations are allowed to implement library functions using macros as well as functions, and they might do so to support optimizations or certain features. With a good quality compiler, this is usually a minor concern; optimization of a direct call should still be good.)
If I use #define, does it affect runtime?
#define works by doing text-based substitution at compile time. If you #define s(x) sin(x) then the C pre-processor will rewrite all the s(x) into sin(x) before the compiler gets a chance to look at it.
BTW, this kind of low-level text-munging is exactly why #define can be dangerous to use for more complex expressions. For example, one classic pitfall is that if you do something like #define times(x, y) x*y then times(1+1,2) rewrites to 1+1*2, which evaluates to 3 instead of the expected 4. For more complex expressions like this, it is often a good idea to use inlineable functions instead.
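The usual remedy (a general C convention, not specific to this question) is to parenthesize both the parameters and the whole replacement text:

#define times(x, y) ((x) * (y))

Now times(1+1, 2) rewrites to ((1+1) * (2)), which evaluates to 4 as expected.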
Don't do this.
Mathematicians have been abbreviating the trigonometric functions to sin, cos, tan, sinh, cosh, and tanh for many, many years now. Even though mathematicians (like me) like to use their favourite and often idiosyncratic notation, puffing up any paper by a number of pages, these abbreviations have emerged as pretty much standard. Even LaTeX has commands like \sin, \cos, and \tan.
The Japanese immortalised the abbreviations when releasing scientific calculators in the 1970s (the shorthand can fit easily on a button), and the C standard library adopted them.
If you deviate from this then your code immediately becomes difficult to read. This can be particularly pernicious with mathematical code where you can't immediately see the effects of a bad implementation.
But if you must, then a simple
static double(*const s)(double) = sin;
will suffice.
I wanted to see how the arduino function digitalWrite actually worked. But when I looked for the source for the function it was full of macros that were themselves defined in terms of other macros. Why would it be built this way instead of just using a function? Is this just poor coding style or the proper way of doing things in C?
For example, digitalWrite contains the macro digitalPinToPort.
#define digitalPinToPort(P) ( pgm_read_byte( digital_pin_to_port_PGM + (P) ) )
And pgm_read_byte is a macro:
#define pgm_read_byte(address_short) pgm_read_byte_near(address_short)
And pgm_read_byte_near is a macro:
#define pgm_read_byte_near(address_short) __LPM((uint16_t)(address_short))
And __LPM is a macro:
#define __LPM(addr) __LPM_classic__(addr)
And __LPM_classic__ is a macro:
#define __LPM_classic__(addr) \
(__extension__({ \
uint16_t __addr16 = (uint16_t)(addr); \
uint8_t __result; \
__asm__ __volatile__ \
( \
"lpm" "\n\t" \
"mov %0, r0" "\n\t" \
: "=r" (__result) \
: "z" (__addr16) \
: "r0" \
); \
__result; \
}))
Not directly related to this, but I was also under the impression that double underscore should only be used by the compiler. Is having LPM prefixed with __ proper?
If your question is "why one would use multiple layers of macros?", then:
why not? Notably in the pre-C99 era without inline. A canonical example was 1980s-era getc, which was (IIRC) at that time (SunOS 3.2, 1987) documented as being a macro, with a NOTE in the man page telling (I forget the details) that with some FILE* filearr[]; a getc(filearr[i++]) was wrong (IIRC, the undefined behavior terminology did not exist at that time). When you looked into some system headers (e.g. <stdio.h> or some header included by it) you could find the definition of such a macro. And at that time (with computers running at only a few MHz, so a thousand times slower than today) getc had to be a macro for efficiency reasons (since inline did not exist, and compilers did not do interprocedural optimizations like they are able to do now). Of course, you could use getc in your own macros.
even today, some standards define macros. In particular, today's waitpid(2) documentation describes WIFEXITED & WEXITSTATUS as macros, and it is sensible to #define some of your own macros mixing both of them (see the sketch just after this list).
the main point is to understand the workings of the C preprocessor, and its profoundly textual (so very brittle) nature. This is explained in all textbooks about C. Hence you need to understand what is happening under the hood.
the rule of thumb with modern era C (that is C99 & C11) is to systematically prefer having some static inline function (defined in some header file) to an equivalent macro. In other words, #define some macro only when you cannot avoid it. And explicitly document that fact.
several layers of macro might (sometimes) increase code readability.
macros can be tested with #ifdef macroname which is sometimes useful.
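As promised above, a small sketch of mixing the standard waitpid macros into one of your own (the EXITED_OK name is mine):

#include <sys/wait.h>
#define EXITED_OK(status) (WIFEXITED(status) && WEXITSTATUS(status) == 0)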
Of course, when you dare to define several layers of macros (I won't call them "recursive"; read about self-referential macros) you need to be very careful and understand what is happening with all of them (combined & separately). Looking into the preprocessed form is helpful.
BTW, to debug complex macros, I sometimes do
gcc -C -E -I/some/directory mysource.c | sed 's:^#://#:' > mysource.i
then I look into the mysource.i and sometimes even have to compile it, perhaps as gcc -c -Wall mysource.i to get warnings which are located in the preprocessed form mysource.i (which I can examine in my emacs editor). The sed command is "commenting" the lines starting with # which are setting source location (à la #line).... Sometimes I even do indent mysource.i
(actually, I have a special rule for that in my Makefile)
Is having LPM prefixed with __ proper?
BTW, names starting with __ (or with _ followed by an uppercase letter) are (by the standard, and conventionally) reserved for the implementation. In principle you are not allowed to use such names yourself, so there cannot be any collision.
Notice that the __LPM_classic__ macro uses the statement-expr extension
(of GCC & Clang)
Look also at other programming languages. Common Lisp has a very different model of macros (and a much more interesting one). Read about hygienic macros. My personal regret is that the C preprocessor has never evolved to be much more powerful (and Scheme-like). Why that did not happen (imagine a C preprocessor able to invoke Guile code to do the macro expansion!) is probably down to sociological & economic reasons.
You still should sometimes consider using some other preprocessor (like m4 or GPP) to generate C code. See autoconf.
Why would it be built this way instead of just using a function?
The purpose is likely to optimize function calls with pre-C99 compilers, which don't support inline functions (via the inline keyword). This way, the whole stack of function-like macros is essentially merged by the preprocessor into a single block of code.
Every time you call a function in C, there is a tiny overhead, because of jumping around program's code, managing the stack frame and passing arguments. This cost is negligible in most applications, but if the function is called very often, then it may become a performance bottleneck.
Is this just poor coding style or the proper way of doing things in C?
It's hard to give a definitive answer, because coding style is a subjective topic. I would say: consider using inline functions, or even better, let the compiler inline them by itself. They are type-safe, more readable, more predictable, and with proper assistance from the compiler, the net result is essentially the same.
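For example (a generic sketch, not the actual Arduino source), a function-like macro and a static inline equivalent:

#define SQUARE_MACRO(x) ((x) * (x))     /* pure text substitution; x is evaluated twice */

static inline int square(int x)          /* type-checked; the argument is evaluated once */
{
    return x * x;
}

With optimization enabled, a decent compiler generates essentially the same code for both.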
Related reference (it's for C++, but the idea is generally the same for C):
Why should I use inline functions instead of plain old #define macros?
I'm currently using the __COUNTER__ macro in my C library code to generate unique integer identifiers. It works nicely, but I see two issues:
It's not part of any C or C++ standard.
Independent code that also uses __COUNTER__ might get confused.
I thus wish to implement an equivalent to __COUNTER__ myself.
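For context, this is the kind of usage in question (a minimal sketch; the CONCAT and UNIQUE_ID helper names are mine):

#define CONCAT_(a, b) a##b
#define CONCAT(a, b)  CONCAT_(a, b)
#define UNIQUE_ID(prefix) CONCAT(prefix, __COUNTER__)

int UNIQUE_ID(tmp_);   /* expands to something like tmp_0 */
int UNIQUE_ID(tmp_);   /* expands to tmp_1 */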
Alternatives that I'm aware of, but do not want to use:
__LINE__ (because multiple macros per line wouldn't get unique ids)
BOOST_PP_COUNTER (because I don't want a boost dependency)
BOOST_PP_COUNTER proves that this can be done, even though other answers claim it is impossible.
In essence, I'm looking for a header file "mycounter.h", such that
#include "mycounter.h"
__MYCOUNTER__
__MYCOUNTER__ __MYCOUNTER__
__MYCOUNTER__
will be preprocessed by gcc -E to
(...)
0
1 2
3
without using the built-in __COUNTER__.
Note: Earlier, this question was marked as a duplicate of this, which deals with using __COUNTER__ rather than avoiding it.
You can't implement __COUNTER__ directly. The preprocessor is purely functional - no state changes. A hidden counter is inherently impossible in such a system. (BOOST_PP_COUNTER does not prove what you want can be done - it relies on #include and is therefore one-per-line only - may as well use __LINE__. That said, the implementation is brilliant, you should read it anyway.)
What you can do is refactor your metaprogram so that the counter could be applied to the input data by a pure function. e.g. using good ol' Order:
#include <order/interpreter.h>
#define ORDER_PP_DEF_8map_count \
ORDER_PP_FN(8fn(8L, 8rec_mc(8L, 8nil, 0)))
#define ORDER_PP_DEF_8rec_mc \
ORDER_PP_FN(8fn(8L, 8R, 8C, \
8if(8is_nil(8L), \
8R, \
8let((8H, 8seq_head(8L)) \
(8T, 8seq_tail(8L)) \
(8D, 8plus(8C, 1)), \
8if(8is_seq(8H), \
8rec_mc(8T, 8seq_append(8R, 8seq_take(1, 8L)), 8C), \
8rec_mc(8T, 8seq_append(8R, 8seq(8C)), 8D) )))))
ORDER_PP (
8map_count(8seq( 8seq(8(A)), 8true, 8seq(8(C)), 8true, 8true )) //((A))(0)((C))(1)(2)
)
(recurses down the list, leaving sublist elements where they are and replacing non-list elements - represented by 8true in the example above - with an incrementing counter variable)
I assume you don't actually want to simply drop __COUNTER__ values at the program toplevel, so if you can place the code into which you need to weave __COUNTER__ values inside a wrapper macro that splits it into some kind of sequence or list, you can then feed the list to a pure function similar to the example.
Of course a metaprogramming library capable of expressing such code is going to be significantly less portable and maintainable than __COUNTER__ anyway. __COUNTER__ is supported by Intel, GCC, Clang and MSVC. (not everyone, e.g. pcc doesn't have it, but does anyone even use that?) Arguably if you demonstrate the feature in use in real code, it makes a stronger case to the standardisation committee that __COUNTER__ should become part of the next C standard.
You are confusing two different things:
1 - the preprocessor, which handles #define, #include, and similar constructs. It works only at the text (i.e. character sequence) level and has very few computing capabilities. It is so limited that it cannot implement __COUNTER__. The preprocessor's work consists only of macro expansion and file inclusion. The crucial point is that it occurs before compilation even starts.
2 - the C++ language, and in particular the template (meta)programming language, which can be used to compute things during the compilation phase. It is indeed Turing complete, but as I already said, compilation starts after preprocessing.
So what you are asking is not doable in standard C or C++. To solve this problem, Boost implements its own preprocessor, which is not standard-compliant and has much more computing power. In particular, it is possible to build an analogue of __COUNTER__ with it.
This small header of mine contains its own implementation of a C preprocessor counter (it uses a slightly different syntax).
I'm trying to pass the value of a variable to a macro in C, but I don't know if this is possible. Example:
#include <stdio.h>
#define CONCVAR(_n) x ## _n
int main () {
int x0, x1, x2, x3, x4, x5, x6, x7, x8, x9;
int i;
for (i = 0; i <= 9; i++) CONCVAR(i) = i*5;
return 0;
}
Here, I'm trying to use a macro to assign a value to all x_ variables, using ## tokens. I know I can easily achieve this with arrays, but this is for learning purposes only.
CONCVAR(i) is substituted to xi, not x1 (if i == 1). I know how #define and macros work (it's all about substitution), but I want to know if it is possible to pass the value of i, rather than the letter i, to a macro.
Substituting the value of i into the macro is impossible, since macro substitution happens before your code is compiled. If you're using GCC, you can see the pre-processor output by adding the -E command line argument. (Note, however, that you'll also see all the #includes expanded into your code.)
C is a static language and you cannot decide symbol names at runtime. However, what you're trying to achieve is possible if you use an array and refer to elements using subscripts. As a rule of thumb, if you have many variables like x0, x1, etc., you should probably be using a container like an array.
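For illustration, a sketch of the array-based equivalent of the example in the question:

#include <stdio.h>

int main(void)
{
    int x[10];
    int i;
    for (i = 0; i <= 9; i++)
        x[i] = i * 5;        /* the subscript i is evaluated at run time */
    printf("%d\n", x[3]);    /* prints 15 */
    return 0;
}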
No, because the value of i only exists at run-time. Macro expansion happens at compile-time.
No, it won't work. The C/C++ pre-processor is just that, a "pre-compile" time text processor. As such, it operates on the text found as-is in your source code.
That's why it takes the literal text "i", passes it into your macro, and expands it into the literal text "xi" in your source code. That then gets passed to the compiler. The compiler starts parsing the post-processed text, finds the literal token "xi" as an undeclared variable, and goes belly up in the process.
You can take your sample source code and pass it to a gcc compiler (I used gcc under cygwin for example, pasting your code into a file I named pimp.c for lack of a better name). Then you'll get the following:
$ gcc pimp.c
pimp.c: In function `main':
pimp.c:9: error: `xi' undeclared (first use in this function)
pimp.c:9: error: (Each undeclared identifier is reported only once
pimp.c:9: error: for each function it appears in.)
In short, no, you cannot do that. To be able to do that, the pre-processor would have to act as an interpreter. C and C++ are (generally) not interpreted languages, and the pre-processor is not an interpreter. My suggestion would be to get very clear on the differences between compilers and interpreters (and between compiled and interpreted languages).