How do you add/specify predefined macros for GCC? [closed]

Important: -D does not apply here.
Is it possible to declare macros that appear in every compilation (much like predefined macros) in some dynamic manner (meaning I'd rather not recompile GCC)? Or do I have to recompile my GCC? If I do have to recompile, how do I specify my predefined macros?

You might consider providing your own spec file (or improving the existing one).
You could patch the gcc/c-family/c-cppbuiltin.c file of the source code of GCC.
You could write and then use a GCC plugin that defines additional predefined macros.
But I am sure it is a very bad idea; I recommend instead explicitly passing some -D flag to your compiler. Your question smells like an XY problem; you need to motivate your question.
You could instead adjust your PATH variable and add a gcc shell script that adds that -DMACRO option and explicitly invokes e.g. /usr/bin/gcc with it, as sketched below.
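For the spec-file route, a minimal sketch (MYMACRO and my.specs are placeholder names; the '+' prefix appends to the existing cpp spec rather than replacing it):

*cpp:
+ -DMYMACRO=1

You would then compile with gcc -specs=my.specs foo.c; installing the file as the compiler's default specs file would make it apply to every compilation, but that affects every user of that GCC installation.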

On Linux you can use an alias:
alias gcc="gcc -DMACRO1 -DMACRO2"
Or copy the old /usr/bin/gcc to /usr/bin/gcc.original, then write your own shell script and name it /usr/bin/gcc, containing
exec /usr/bin/gcc.original -DMACRO1 -DMACRO2 "$@"
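A complete sketch of that wrapper, assuming the rename above (MACRO1 and MACRO2 are placeholder names):

#!/bin/sh
# Installed as /usr/bin/gcc; the real compiler was moved to gcc.original.
# "$@" forwards all original arguments, properly quoted.
exec /usr/bin/gcc.original -DMACRO1 -DMACRO2 "$@"

Remember to make the script executable with chmod +x /usr/bin/gcc.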

Related

Configure $(CC) to warn when inclusion is safe to be removed [closed]

Consider this source code:
// foo.c
#include <stdint.h>
main(){}
I can do this:
$ gcc -Wno-implicit-int foo.c
$ ./a.out
$ gcc -dumpversion
6.3.0
GCC compiles without warnings.
Let's modify the source:
// foo.c
main(){}
But the same happens:
$ gcc -Wno-implicit-int foo.c
$ ./a.out
$ gcc -dumpversion
6.3.0
The output is the same. I want to believe that this means the inclusion can be removed safely.
Can I configure GCC to warn that such an inclusion can be safely removed?
What about the same for LLVM?
Is it costly for the compiler to figure out?
Would you ever consider activating the feature?
Expanding on Peter's comment, I'm going to address the third question, regarding the cost of this. TL;DR: this is not functionality that would be trivial to add to a compiler.
Currently, the compiler simply processes the source, line by line, respecting #includes as a means to go and fetch a different source, and insert it at the appropriate place in the input stream. This is all handled by the preprocessor.
It goes as far as to add some special directives (typically #line), so that error messages match up with where they actually happen, but that's about it.
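You can watch this happen by running only the preprocessor, e.g. gcc -E foo.c: the output interleaves the included text with linemarkers such as # 1 "/usr/include/stdint.h" recording where each chunk came from.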
What would be needed to do what the OP is asking for is for every single declaration to have metadata added to it specifying which file it was found in. Then, as the source is being processed, it would be necessary to mark every declaration that gets used. Finally, at the end of compilation, the compiler would have to run over the entire symbol table to see whether any file has the condition that none of the symbols in it were ever used.
That's not a "five lines of code" fix; it's going to be a fair-sized investment.
And what I've just outlined doesn't begin to deal with nested #includes. Suppose outer.c includes middle.h. Now middle.h doesn't have any symbols in it that are used in outer.c, but it does include inner.h, which is used. So without saving the "route" to each symbol, you risk throwing away middle.h and thus losing inner.h.
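A minimal illustration of that hazard (hypothetical files; helper's definition lives in some other translation unit):

/* inner.h */
int helper(void);

/* middle.h: defines nothing that outer.c uses directly,
   but pulls in inner.h, which outer.c does use */
#include "inner.h"

/* outer.c: naively flagging middle.h as unused would lose inner.h */
#include "middle.h"
int main(void) { return helper(); }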

implicit dynamic linking vs explicit dynamic linking - which is more effective? [closed]

There are two ways to link a shared library:
one named implicit dynamic linking and one named explicit dynamic linking.
I have googled for documentation but found none that explains the difference in efficiency between the two.
Take a Linux .so file as an example. My doubt is: compared with the implicit way, will the explicit way cost more I/O, CPU, or memory somehow?
I'm wondering which way is more efficient, and why.
Thanks a lot!
From what I understand, implicit dynamic linking means declaring that your program needs the library in order to run, by listing the library in the dependency section of your program. If the library isn't found when the program starts, the program simply won't be executed.
Explicit dynamic linking is using a function like "LoadLibrary" (Windows) or "dlopen" (Linux) in order to load a library at runtime. It's exactly what a plugin is, and how you can code one.
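A minimal sketch of the explicit route on Linux, loading the math library at runtime and looking up cos by name (link with -ldl on older glibc):

#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    /* Load the shared object at runtime instead of at program start. */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }
    /* Resolve the symbol by name. */
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (cosine)
        printf("cos(0) = %f\n", cosine(0.0));
    dlclose(handle);
    return 0;
}

The implicit equivalent is just calling cos() and linking with -lm; the dynamic loader then maps libm before main() runs.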
Now, explicit dynamic linking is going to add work and complexity, and I don't see any reason for it to be more efficient than implicit dynamic linking. You use explicit dynamic linking only when you cannot do otherwise, such as loading a library chosen by some runtime value.

What is the purpose of a makefile? [closed]

What does it do? Do you just run make on the command line? Is the makefile just like a list of commands to execute and at the end of the make command you have a bunch of executable files?
The other answer is pretty correct, but misses an important point: makefiles are intelligent. They are meant to run only the needed commands, not all of them. This is done with the concept of dependencies between items, like:
to generate A from B, it is necessary to run (for example) "cc -o A B".
These rules/dependencies can be cascaded (to have A you must use B; to have B, you must use C+D+E, to have D you must do ...)
If your project is structured in many files, a makefile will (normally) recreate the objects whose dependencies are changed, not all the objects.
Think of a C project split in 10 files. You modify one, say "main.c" and then want to test the project. With Makefile, only "main.c" gets recompiled and then the final executable gets linked. Without a Makefile, perhaps all the 10 files would get recompiled.
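A minimal sketch of such a Makefile (hypothetical file names; note that each recipe line must begin with a tab):

prog: main.o util.o
	cc -o prog main.o util.o

main.o: main.c
	cc -c main.c

util.o: util.c
	cc -c util.c

After editing only main.c, running make recompiles main.o and relinks prog; util.o is left alone because nothing it depends on changed.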
Yes, it is essentially just a list of instructions. The purpose of a makefile is to make it easy to build an executable that might take many commands to create (which would be a pain to type over and over again manually).

php extension code must be c89 style [closed]

I wrote a PHP extension: https://github.com/binpack/binpack-php. It works great and I want to submit it to PECL.
But they said that my code is C99 style and PHP expects C89 style. I read some things about C99 and C89
and figured out some differences:
stdbool.h
inline vs __inline__
I think there are some problems in these 2 files:
https://github.com/binpack/binpack-php/blob/master/bin_pack.c
https://github.com/binpack/binpack-php/blob/master/bin_pack.h
I modified some of my code and used -std=gnu89 to test it, but I am not sure whether there are still some problems.
My questions are:
How can I test whether my code is C89 style?
If anyone can point out the problems in my code, that would be great.
It won't warn about every feature not in C89, but
gcc -Wall -pedantic -std=c89 ...
is a good place to start.
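For example, a declaration appearing after a statement is a C99 feature; given a hypothetical file like the one below, that command line makes GCC warn along the lines of "ISO C90 forbids mixed declarations and code" (exact wording varies by version):

/* c99ism.c */
int main(void) {
    int x = 1;
    x = x + 1;
    int y = x; /* declaration after a statement: valid C99, not C89 */
    return y;
}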

hard to understand this macro [closed]

#define __HAVE_ARCH_STRCPY
What's the meaning of __HAVE_ARCH? I'm not a native speaker and I failed to find its meaning via Google... (maybe this question is quite silly)
Defining the __HAVE_ARCH_XXXX pre-processor tokens allows other places in the OS kernel to test whether the current hardware platform supports strcpy, memset, etc. You'll notice that on some platforms this token is defined, and then a basic implementation of these functions is provided as inline functions along with the token, since on those platforms the functionality is not provided by some other kernel library or kernel code module. On other platforms, the functions are defined in some other code module, and may simply be declared as extern just after the pre-processor token.
Keep in mind that the Linux kernel itself does not have access to the standard libc library, so these functions have to be defined separately from what you would typically use in a user-land application linked against libc. Thus it's important to define which standard functions are present and which are not, as this may vary from platform to platform.
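A simplified sketch of the pattern (paths and exact code vary by architecture and kernel version):

/* arch-specific header, e.g. arch/<arch>/include/asm/string.h */
#define __HAVE_ARCH_STRCPY
extern char *strcpy(char *dest, const char *src);

/* generic lib/string.c: this fallback is compiled only when the
   architecture did not announce an optimized strcpy of its own */
#ifndef __HAVE_ARCH_STRCPY
char *strcpy(char *dest, const char *src)
{
	char *tmp = dest;
	while ((*dest++ = *src++) != '\0')
		;
	return tmp;
}
#endif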
"This architecture has strcpy()".
