Short version:
I want to be able to define assembler macros in a macros.S and use them from inside asm() statements in GNU C.
I can do this with asm(".include \"macros.S\""); near the top of my C source, but I want macros.S to go through the C preprocessor.
Long version:
With GCC, *.S files (capital S) are run through the C preprocessor, allowing the use of C-style #define, etc.
In GCC C, you can include an asm header file (which may include asm macro definitions, .set declarations, etc), by writing asm(".include \"myasmheader.S\""); near the top of a file.
Including an ASM header file in this manner allows you to use asm macros inside asm blocks.
Unfortunately, doing so does not invoke the C preprocessor on the .S file being included (as the .include is done later in the compilation process), and so #defines are no longer substituted.
So is there any way to properly include a .S file inside of a C file?
Some other compilers support:
#asm
#include "myasmheader.S"
#endasm
which would not exhibit this problem. But alas, GCC requires that all asm inside a C file be written as strings.
Short of not using asm (not an option: this is an embedded DSP project that heavily mixes asm and C), or removing use of the C preprocessor in ASM files, what can be done?
From the comments:
Add preprocessing of the ASM file (via cpp) as a distinct build step into whatever build system you're using.
Credits to arrowd and Ped7g.
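A minimal make sketch of that approach (the file name macros_pp.s and the rule itself are illustrative, not from the original setup):

macros_pp.s: macros.S
	gcc -E -P $< > $@    # the .S suffix implies assembler-with-cpp; -P drops the linemarkers cpp emits

Then the C source includes the preprocessed output instead of the raw file:

asm(".include \"macros_pp.s\"");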
Related
I have a bunch of magic numbers that I would like to include in both a C program and an assembly file to be compiled by nasm or yasm.
In plain C the file would look like a series of defines:
#define BLESS 55378008
#define ANSWER 42
...
In nasm or yasm, the same include could be implemented as:
%define BLESS 55378008
%define ANSWER 42
...
The only difference is the leading character before the define: # for C and % for NASM.
Is there any way to write a polyglot include that allows me to include it in both C and NASM and only list the constants once?
Yes, I'm aware that I could just use sed or whatever to generate one file from the other.
NASM by itself has no way to include C header files in assembly code. This has been brought up in the NASM forum through the years. You will need an external tool to parse the C header files into something usable with NASM assembly syntax.
One such third-party contribution that is supposed to be compatible with NASM is h2incn. I haven't tested it thoroughly enough, so I can't say whether it is stable or usable enough for all use cases.
The alternative is to pre-process the files with other tools like m4 or cpp, or even to translate one form into the other with sed.
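As a minimal sketch of the sed route, assuming the constants live in a #define-only header (constants.h and constants.inc are example names):

sed 's/^#define/%define/' constants.h > constants.inc

The C sources #include "constants.h" directly, while the NASM sources do %include "constants.inc".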
I would like to execute some assembly instructions based on a define from a header file.
Let's say in test.h I have #define DEBUG.
In test.asm I want to check somehow like #ifdef DEBUG do something...
Is such thing possible? I was not able to find something helpful in the similar questions or online.
Yes, you can run the C preprocessor on your asm file. How to do this depends on your build environment; gcc, for example, automatically runs it for files with the extension .S (capital). Note that whatever you include should be asm-compatible. It is common practice to conditionally include part of the header, using #ifndef ASSEMBLY or similar constructs, so you can have C and ASM parts in the same header.
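A small sketch of that construct (the file names, the ASSEMBLY macro name, and the debug_print prototype are all illustrative):

File test.h

#define DEBUG 1

#ifndef ASSEMBLY
/* C-only part; the assembler never sees this */
void debug_print(const char *msg);
#endif

File test.S

#define ASSEMBLY
#include "test.h"

#ifdef DEBUG
    /* debug-only instructions would go here */
#endif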
The C preprocessor is just a program that inputs data (C source files), transforms it, and outputs data again (translation units).
You can run it manually like so:
gcc -E - < input > output
which means you can run the C preprocessor over .txt files, or LaTeX files, if you want to.
The difficult bit, of course, is how you integrate that in your build system. This very much depends on the build system you're using. If that involves makefiles, you create a target for your assembler file:
assembler_file: input_1 input_2
	gcc -E $< > $@
and then you compile "assembler_file" in whatever way you normally compile it.
Sure, but then it is no longer plain assembly language. You would need to feed it through a C preprocessor that knows it is dealing with a hybrid C/asm file, performs the C preprocessing without trying to compile, and then hands the result to the assembler (or has its own assembler built in).
Possible, and it depends heavily on your toolchain (it's either supported or it isn't), but IMO it leaves a very bad taste. YMMV.
If I define a constant in my C .h file:
#define constant 1
How do I access it in my assembly .s file?
If you use the GNU toolchain, gcc will by default run the preprocessor on files with the .S extension (uppercase 'S'). So you can use all cpp features in your assembly file.
There are some caveats:
there might be differences in the way the assembler and the preprocessor tokenize the input.
If you #include header files, they should only contain preprocessor directives, not C stuff like function prototypes.
You shouldn't use # comments, as they would be interpreted by the preprocessor.
Example:
File definitions.h
#define REGPARM 1
File asm.S
#include "definitions.h"
.text
.globl relocate
.align 16
.type relocate,@function
relocate:
#if !REGPARM
movl 4(%esp),%eax
#endif
subl %ecx,%ecx
...
Even if you don't use gcc, you might be able to use the same approach, as long as the syntax of your assembler is reasonably compatible with the C preprocessor (see caveats above). Most C compilers have an option to only preprocess the input file (e.g. -E in gcc) or you might have the preprocessor as a separate executable. You can probably include this preprocessing prior to assembly in your build tool.
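For example, with GCC as the preprocessor and some other assembler as the back end (your-assembler is a placeholder name), a manual preprocessing step might look like:

gcc -E -P -x assembler-with-cpp input.asm > output.asm
your-assembler output.asm

The -P flag suppresses the linemarker lines cpp normally emits, which most assemblers would reject.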
You can't, unless a specific development toolchain allows it. But in 20 years or so of embedded programming, I never saw one.
Usually, the only way for assembly and C to communicate is the linker, i.e. labels defined in C/C++ are accessible from within assembly (and vice versa).
When I had to share definitions between C/C++ and asm, I usually did it with a custom code generator.
Since high-level data are rarely exchanged with assembly, a few defines and maybe some external references are usually enough, and thus the code generator is really easy to make.
You can, for instance, use perl or awk to parse a very simple list of common constants and produce a pair of files: one with #defines and the other with the equivalent EQU directives.
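A minimal awk sketch of such a generator (the constants.txt format and all file names are my own, for illustration):

File constants.txt

BLESS 55378008
ANSWER 42

Generation step, e.g. from a makefile:

awk '{print "#define " $1 " " $2}' constants.txt > constants.h
awk '{print $1 " EQU " $2}' constants.txt > constants.inc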
Sometimes I see someone compile a C program like this:
gcc -o hello hello.c hello.h
As I know, we just need to put the header files into the C program like:
#include "somefile"
and compile the C program: gcc -o hello hello.c.
When do we need to compile the header files or why?
Firstly, in general:
If these .h files are indeed typical C-style header files (as opposed to being something completely different that just happens to be named with .h extension), then no, there's no reason to "compile" these header files independently. Header files are intended to be included into implementation files, not fed to the compiler as independent translation units.
Since a typical header file usually contains only declarations that can be safely repeated in each translation unit, it is perfectly expected that "compiling" a header file will have no harmful consequences. But at the same time it will not achieve anything useful.
Basically, compiling hello.h as a standalone translation unit is equivalent to creating a degenerate dummy.c file consisting only of an #include "hello.h" directive, and feeding that dummy.c file to the compiler. It will compile, but it will serve no meaningful purpose.
Secondly, specifically for GCC:
Many compilers will treat files differently depending on the file name extension. GCC has special treatment for files with .h extension when they are supplied to the compiler as command-line arguments. Instead of treating it as a regular translation unit, GCC creates a precompiled header file for that .h file.
You can read about it here: http://gcc.gnu.org/onlinedocs/gcc/Precompiled-Headers.html
So, this is the reason you might see .h files being fed directly to GCC.
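A quick demonstration, assuming hello.c does #include "hello.h":

gcc hello.h              # produces hello.h.gch, a precompiled header
gcc -o hello hello.c     # GCC picks up hello.h.gch automatically when it sees the #include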
Okay, let's understand the difference between active and passive code.
The active code is the implementation of functions, procedures, and methods, i.e. the pieces of code that are compiled to executable machine code. We store it in .c files, and of course we need to compile it.
The passive code is not executed itself, but it is needed to tell the different modules how to communicate with each other. Usually, .h files contain only prototypes (function headers) and structures.
An exception is macros, which formally can contain active pieces, but you should understand that they are applied at a very early stage of building (preprocessing) by simple substitution. By compile time, macros have already been substituted into your .c file.
Another exception is C++ templates, which should be implemented in .h files. The story here is similar to macros: they are substituted at an early stage (instantiation), and formally, each distinct instantiation is a distinct type.
In conclusion, I think that if the modules are formed properly, we should never compile the header files.
When we include a header file like this: #include <header.h> or #include "header.h", the preprocessor takes it as input and splices the entire file into the source code: the preprocessor replaces the #include directive with the contents of the specified file.
You can check this by passing the -E flag to GCC, which stops after preprocessing and writes out the preprocessed source (conventionally a .i file); alternatively, you can invoke the standalone cpp program (on Linux), which the compiler driver runs automatically when you execute GCC.
So the header's contents are compiled along with your source code; there is no need to compile it separately.
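To see it for yourself (hello.c is a stand-in name):

gcc -E hello.c -o hello.i    # hello.i contains the header contents spliced into the source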
In some systems, attempts to speed up the assembly of fully resolved .c files call the pre-assembly of include files "compiling header files". However, it is an optimization technique that is not necessary for actual C development.
Such a technique basically precomputed the include statements and kept a cache of the flattened result. Normally the C toolchain pastes the included files in recursively, and then passes the entire item off to the compiler. With a precompiled header cache, the toolchain checks whether any of the inputs (defines, headers, etc.) have changed; if not, it hands the already-flattened text snippets to the compiler.
Such systems were intended to speed up development; however, many of them were quite brittle. As computers sped up and source code management techniques changed, header pre-compilers are used in fewer and fewer projects.
Until you actually need compilation optimization, I highly recommend you avoid pre-compiling headers.
I think we do need to preprocess (though maybe NOT compile) the header file, because from my understanding, during the compile stage the header file is included in the .c file. For example, in test.h we have
typedef enum {
    a,
    b,
    c
} test_t;
and in test.c we have
void foo()
{
test_t test;
...
}
During compilation, I think the compiler puts the code in the header file and the .c file together: the code in the header file is preprocessed and substituted into the .c file. Meanwhile, we'd better define the include path in the makefile.
You don't need to compile header files. Doing so doesn't actually produce anything useful, so there's no point in making it part of a normal build. However, it is a great way to check a header for typos, mistakes, and bugs in isolation, which makes problems easier to find later.
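With GCC, for instance, you can run such a check without producing any output file:

gcc -fsyntax-only hello.h    # parses the header and reports errors, generates nothing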
Say I have a constant:
#define PI 3.14
Say I have a static library with multiple header and source files. If I declare this in the header file, will its scope apply to all of the source files? Or do the source files need to include the header with the declaration of PI?
They will need to include the file which contains #define PI 3.14; otherwise the preprocessor will not read the #define line, and compilation will subsequently fail.
In C++, a good way to think of the compile process is that each individual C++ file is first run through a preprocessor, which takes all the #define, #include, and other preprocessor statements and replaces them throughout the code. The file is then compiled; at this point, the C++ file and anything brought in via #include are treated almost as if they were one very large single file. After that, a linker takes the final output of the preprocess/compile stage for all of the C++ files and assembles them into one final output file. The preprocessor (which handles the defines) works before the compile stage, not during linkage.
The definition has to be included in each module.
Technically, it has no "scope". It is only a text replacement operation that happens prior to compilation. You could also look into your compiler settings for a way to specify pre-processor definitions. This is often a project setting available easily through your IDE.
They will need to include the define; however, if you need a define across all files, you can pass it as a compiler-level switch.
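With GCC, for example, such a switch looks like this (the name and value are taken from the question):

gcc -DPI=3.14 -c file.c    # every file compiled with this flag sees PI defined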