This sounds like something I should be able to Google, but I can't find a good reference. What exactly does __attribute__((force)) do? As in:
return (__attribute__((force)) uint32_t) *p
(This is for an ARM system, cross-compiled with Clang, but I can't find any reference to this attribute anywhere, even on Clang/ARM-specific pages.)
__attribute__((force)) is used to suppress warnings in sparse, the C semantic checker used on the Linux kernel.
The Wikipedia article on sparse lists the following attributes as defined by sparse:
address_space(num)
bitwise
force
context(expression,in_context,out_context)
If you need more information about these, you can take a look at the man page of sparse.
In the Linux kernel, some of these attributes are defined in linux/include/linux/compiler_types.h. For example, __force expands to __attribute__((force)) and __bitwise to __attribute__((bitwise)).
However, the Linux kernel documentation on sparse tells us that GCC ignores these attributes:
And with gcc, all the “__bitwise”/”__force stuff” goes away, and it all ends up looking just like integers to gcc.
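To make the pattern concrete, here is a minimal sketch of how the kernel-style definitions fit together. The names are shortened for illustration and are not the actual kernel macros; under sparse (which defines __CHECKER__) the attributes are real and enforce the type distinction, while under plain GCC/Clang they expand to nothing:

#ifdef __CHECKER__                       /* defined when sparse is checking the code */
# define __my_bitwise __attribute__((bitwise))
# define __my_force   __attribute__((force))
#else                                    /* plain GCC/Clang: the attributes vanish */
# define __my_bitwise
# define __my_force
#endif

typedef unsigned int __my_bitwise le32_example;

static inline unsigned int from_le32_example(le32_example x)
{
        /* without __my_force, sparse would warn about converting a
           bitwise type to a plain integer; for the compiler itself
           this is just an ordinary cast */
        return (__my_force unsigned int)x;
}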
Related
I found the following snippet of code:
#include <stdio.h>
#define f(g,g2) g##g2
int main() {
    int var12 = 100;
    printf("%d", f(var, 12));
    return 0;
}
I understand that this will translate f(var,12) into var12.
My question is: in the macro definition, why didn't they just write the following?
#define f(g,g2) gg2
Why do we need ## to concatenate the tokens, rather than simply writing them next to each other?
If one writes gg2, the preprocessor will perceive it as a single token; it cannot tell that it is meant to be the concatenation of g and g2.
#define f(g,g2) g##g2
My opinion is that this is poor, unreadable code. It needs at least a comment (giving some motivation, explanation, etc.), and a short name like f is meaningless.
My question is in the macro definition, why didn't they just write the following :
#define f(g,g2) gg2
With such a macro definition, f(x,y) would still expand to the token gg2, even if the author wanted the expansion to be xy.
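A small self-contained sketch of the difference (for illustration only):

#include <stdio.h>

#define PASTE(g, g2)    g##g2   /* PASTE(var, 12) expands to var12 */
#define NO_PASTE(g, g2) gg2     /* NO_PASTE(var, 12) expands to gg2; the arguments are ignored */

int main(void)
{
    int var12 = 100;
    printf("%d\n", PASTE(var, 12));        /* prints 100 */
    /* printf("%d\n", NO_PASTE(var, 12));     error: 'gg2' undeclared */
    return 0;
}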
Please take the time to read, e.g., the documentation of GNU cpp (and of your compiler, perhaps GCC) and, later, some C standard such as n1570 or a newer draft.
Consider also designing your software by (in some cases) generating C code (inspired by GNU bison, GNU m4, or GPP). Your build machinery (e.g. your Makefile for GNU make) would process that as you want. In some cases (e.g. programs running for hours of CPU time), you might consider doing some partial evaluation and generating specialized code at runtime (for example, with libgccjit or GNU lightning). Pitrat's book Artificial Beings: The Conscience of a Conscious Machine argues for that idea at book length.
Don't forget to enable all warnings and debug info in your compiler (e.g. with GCC use gcc -Wall -Wextra -g) and learn to use a debugger (like GNU gdb).
On Linux systems I sometimes like to generate some (more or less temporary) C code at runtime (from some kind of abstract syntax tree), then compile that code as a plugin, dlopen(3) that plugin, and dlsym(3) inside it. For a toy example, see my manydl.c program (which demonstrates that you can generate hundreds of thousands of C files and plugins in the same program). For serious examples, read books.
You might also read books about Common Lisp or about Rust; both have a much richer macro system than C provides.
I want to use GCC built-in functions like __sync_fetch_and_add, but I compile my code with Keil; if I use these functions in my code, it shows an error like this:
Error: L6218E: Undefined symbol __sync_fetch_and_add_4 (referred from XXXX.o).
I found some descriptions of GNU atomic memory access functions in Keil's documentation, so I guess Keil may support these functions, but I don't know how to use them. Should I include some header files or add some configuration in Keil?
I'm no expert, but the link seems to be about ARM DS-5, which is a separate toolchain, i.e. not the same as Keil's MDK, so that documentation doesn't apply.
Implementing those functions is not super hard; if all else fails I would look at the compiler output from GCC, and just re-implement it.
Alternatively read up on the LDREX/STREX instructions and those for memory barriers, and have fun! :)
UPDATE: I think __sync_fetch_and_add_4() is new, but Keil only supports GCC's older suite of built-ins. Notice that __sync_fetch_and_add_4 does not appear in the list of functions they say they support. This GCC manual page says:
Prior to GCC 4.7 the older __sync intrinsics were used. An example of an undefined symbol from the use of __sync_fetch_and_add on an unsupported host is a missing reference to __sync_fetch_and_add_4.
So it seems Keil is tracking a pretty old version of GCC? On the other hand, I do see __sync_fetch_and_add() in the list, and I guess that "magically" generates a call to __sync_fetch_and_add_4() if used on a 32-bit quantity. Weird.
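For reference, here is a rough sketch of the LDREX/STREX approach mentioned above, assuming an ARMv7 target and a compiler that accepts GNU-style inline assembly (armclang does; the function name and operand choices here are mine, not Keil's). A drop-in replacement for __sync_fetch_and_add would additionally need memory barriers (DMB) to match the full-barrier semantics GCC documents:

#include <stdint.h>

/* Sketch of an atomic fetch-and-add on a 32-bit value using exclusive
   load/store; returns the old value. */
static uint32_t my_fetch_and_add_4(volatile uint32_t *ptr, uint32_t val)
{
    uint32_t old, sum, failed;
    do {
        __asm__ volatile(
            "ldrex   %0, [%3]\n\t"      /* old = *ptr (exclusive load)           */
            "add     %1, %0, %4\n\t"    /* sum = old + val                       */
            "strex   %2, %1, [%3]"      /* try *ptr = sum; failed = 0 on success */
            : "=&r"(old), "=&r"(sum), "=&r"(failed)
            : "r"(ptr), "r"(val)
            : "memory", "cc");
    } while (failed);                   /* retry if another core intervened */
    return old;
}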
I want to delve into the implementation of the printf function in C on macOS. printf uses the <stdarg.h> header file. I opened <stdarg.h> and found that va_list is just a macro.
So I am really curious: how is __builtin_va_list implemented? I know it is compiler-specific. Where can I find the definition of __builtin_va_list? Should I download the source code of the Clang compiler?
So I am really curious: how is __builtin_va_list implemented?
__builtin_va_list is implemented inside the GCC compiler (or the Clang/LLVM one). So you should study the GCC compiler source code to understand the details.
Look into gcc/builtins.def & gcc/builtins.c for more.
I am less familiar with Clang, which implements the same builtin.
But both GCC & Clang are open-source or free software. They are complex beasts (several million lines of code each), so you could need years of work to understand them.
Be aware that the ABI of your compiler matters. Look, for example, into the X86 psABI for more details.
BTW, Grady Player commented:
Pops the correct number of bytes off of the stack for each of those tokens...
Unfortunately, today it is much more complex than that. On current processors and ABIs the calling conventions do use processor registers to pass some arguments (and the devil is in the details).
Should I download the source code of the Clang compiler?
Yes, and you also need to allocate several years of work to understand the details.
A few years ago, I wrote some tutorial slides with links to external documentation regarding GCC internals; see my GCC MELT documentation page (somewhat bit-rotten by now).
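For orientation, here is an ordinary use of the machinery being discussed; each va_arg below is lowered by the compiler according to the target's calling convention (a generic sketch, nothing macOS-specific):

#include <stdarg.h>
#include <stdio.h>

/* Sums 'count' int arguments; va_list is really __builtin_va_list,
   and va_start/va_arg/va_end map to the corresponding builtins. */
static int sum_ints(int count, ...)
{
    va_list ap;
    int total = 0;

    va_start(ap, count);
    for (int i = 0; i < count; i++)
        total += va_arg(ap, int);
    va_end(ap);
    return total;
}

int main(void)
{
    printf("%d\n", sum_ints(3, 10, 20, 30));   /* prints 60 */
    return 0;
}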
In Clang 9, this is implemented in
clang/lib/AST/ASTContext.cpp
call graph:
getVaListTagDecl
=>getBuiltinVaListDecl
=>CreateVaListDecl
=>Create***BuiltinVaListDecl
for example:
=>CreateCharPtrBuiltinVaListDecl
=>CreateCharPtrNamedVaListDecl
=>buildImplicitTypedef
When __builtin_va_list appears in the preprocessed source, the compiler calls getVaListTagDecl to build a TypedefDecl AST node and insert it into the AST. The typedef does not exist in any source file; it is generated on the fly during compilation, as if the source contained:
typedef *** __builtin_va_list;
//for example
typedef char* __builtin_va_list;
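For comparison, a system <stdarg.h> is typically little more than a thin wrapper over these builtins (an illustrative sketch, not the actual macOS header):

typedef __builtin_va_list va_list;

#define va_start(ap, last) __builtin_va_start(ap, last)
#define va_arg(ap, type)   __builtin_va_arg(ap, type)
#define va_end(ap)         __builtin_va_end(ap)
#define va_copy(dst, src)  __builtin_va_copy(dst, src)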
This answer, for Clang, just shows how I found the implementation of a builtin function.
I'm interested in the implementation of std::atomic<T>. If T is not a trivial type, Clang uses a lock to guard its atomicity. Looking at this answer first, I found a builtin function named __c11_atomic_store. The question is: how is this builtin function implemented in Clang?
Searching for Builtin in the Clang codebase, I find this in clang/Basic/Builtins.def:
// Some of our atomics builtins are handled by AtomicExpr rather than
// as normal builtin CallExprs. This macro is used for such builtins.
#ifndef ATOMIC_BUILTIN
#define ATOMIC_BUILTIN(ID, TYPE, ATTRS) BUILTIN(ID, TYPE, ATTRS)
#endif
// C11 _Atomic operations for <stdatomic.h>.
ATOMIC_BUILTIN(__c11_atomic_init, "v.", "t")
ATOMIC_BUILTIN(__c11_atomic_load, "v.", "t")
ATOMIC_BUILTIN(__c11_atomic_store, "v.", "t")
ATOMIC_BUILTIN(__c11_atomic_exchange, "v.", "t")
...
The keywords are AtomicExpr and CallExpr. I then checked every caller of AtomicExpr's constructor, but didn't find any useful information. So my guess is that, in the parse phase, when the parser matches a call to a builtin function, it constructs a CallExpr in the AST with a builtin flag; in the code generation phase, the implementation is emitted.
Checking CodeGen, I found the answer in lib/CodeGen/CGBuiltin.cpp and CodeGen/CGAtomic.cpp.
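As a user-level illustration (a sketch, not Clang's actual header text): the C11 <stdatomic.h> operations are what end up invoking these builtins, and CGAtomic.cpp then lowers them either to native atomic instructions or to library/lock-based calls, depending on the size and alignment of the object:

#include <stdatomic.h>

_Atomic int counter;

void set_counter(int v)
{
    /* under Clang this expands to a __c11_atomic_store builtin call,
       which CodeGen lowers to an atomic store for this int */
    atomic_store(&counter, v);
}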
You can also check CodeGenFunction::EmitVAArg; I hope that will be useful for you.
I was searching for this in the MPLAB compiler user's guide but haven't found anything. I am asking here to confirm that I am not blind or anything:
The GCC compiler provides some very interesting and useful built-in functions like __builtin_constant_p(x) and similar. I have never found anything like that in the Microchip compilers, and I don't think there is.
So the question: do the Microchip XCxx compilers provide any non-standard built-in functions apart from the device-specific ones (like declaring variables at a given register address or declaring an interrupt function)?
EDIT: To clarify some more: I am mostly interested in retrieving information from the compiler. A good example would be something like __builtin_constant_p, as it makes information available to the program that is normally not accessible. But I am not limiting this question to detecting constant expressions only.
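For reference, this is the kind of thing I mean (GCC/Clang): __builtin_constant_p evaluates to 1 when the compiler can prove its argument is a compile-time constant, which lets the program act on knowledge it otherwise could not obtain:

#include <stdio.h>

int main(void)
{
    int n = 42;
    printf("%d\n", __builtin_constant_p(7));      /* 1: a literal constant */
    printf("%d\n", __builtin_constant_p(n + 1));  /* typically 0 without optimization */
    return 0;
}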
Type "XC16 manual" into Google and out rolls http://ww1.microchip.com/downloads/en/DeviceDoc/50002071E.pdf; see appendix G.
The same document mentioned by @Marco van de Voort has a list of pre-defined macros in section 19.4 that gives you information about the compiler environment and the device.
There is also the somewhat undocumented __DEBUG macro which is defined when running under MPLABX in debug mode (MPLABX defines this in the call to the compiler).
These are the builtins supported by the XC16 compiler
e.g. __builtin_add
For a complete description of the builtins see the MPLAB XC16 compiler user's manual (under "docs" folder of compiler installation) or here: http://www.microchip.com/mymicrochip/filehandler.aspx?ddocname=en559023
This question: "Is there a way to tell whether code is now being compiled as part of a PCH?" led me to thinking about this.
Is there a way, in perhaps only certain compilers, of getting a C/C++ compiler to dump out the defines that it's currently using?
Edit: I know this is technically a preprocessor issue, but let's include that within the term "compiler".
Yes. In GCC:
g++ -E -dM <file>
I would bet it is possible in nearly all compilers.
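A related trick (with GCC or Clang): feed it an empty input to dump only the predefined macros, without needing a source file:
echo | g++ -E -dM -x c++ -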
Boost Wave (a preprocessor library that happens to include a command line driver) includes a tracing capability to trace macro expansions. It's probably a bit more than you're asking for though -- it doesn't just display the final result, but essentially every step of expanding a macro (even a very complex one).
The clang preprocessor is somewhat similar. It's also basically a library that happens to include a command line driver. The preprocessor defines a macro_iterator type and macro_begin/macro_end of that type, that will let you walk the preprocessor symbol table and do pretty much whatever you want with it (including printing out the symbols, of course).