I would like to know how to link pgc++-compiled code (blabla.a) with a main program compiled with the GNU c++ or g++ compiler.
For the moment, linking with the default GNU C++ linker gives errors like:
undefined reference to `__pgio_initu'
As the previous person already pointed out, PGI supports g++ name mangling when using the pgc++ command. Judging from this output, I'm guessing that you're linking with g++ rather than pgc++. I've had the most success when using pgc++ as the linker, so that it finds the PGI libraries. If that's not an option, you can link an executable with pgc++ -dryrun to get the full link line and paste the -L and -l options from there to get the same libraries.
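For example, a minimal sketch (main.cpp and blabla.a stand in for your own files, myapp is a placeholder name):
g++ -c main.cpp -o main.o
pgc++ main.o blabla.a -o myapp
If you must link with g++ instead, pgc++ -dryrun main.o blabla.a -o myapp prints the full link line without executing it, and you can copy the PGI -L and -l options from there onto your g++ command.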
Different C++ compilers use different name-mangling conventions to generate the names that they expose to the linker, so the member function name int A::foo(int) will be emitted to the linker by compiler A as one string of gobbledygook, and by compiler B as quite a different string of gobbledygook, and the linker has no way of knowing they refer to the same function. Hence you can't link object files produced by different C++ compilers unless they employ the same name-mangling convention (and quite possibly not even then: name-mangling is just one aspect of ABI compatibility).
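For example, under the Itanium C++ ABI that g++ follows, int A::foo(int) is emitted as _ZN1A3fooEi, which you can decode with c++filt:
echo _ZN1A3fooEi | c++filt
A::foo(int)
A compiler using a different convention would emit a different string for the very same function.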
That being said, according to this document,
PGC++ supported name-mangling compatibility with g++ as of three and a half years ago, provided that the PGI C++ compiler was invoked with precisely the command pgc++ or pgcpp --gnu. It may be that the library you are dealing with was not built in that specific way, or perhaps was built with an older PGI C++ compiler.
Anyhow, if g++ compiles the headers of your blabla.a and emits symbols different from the ones in blabla.a, you can't link g++ code with blabla.a. You'd need to rebuild blabla.a with g++, which perhaps is not an option.
Related
I need to include all symbols from a static library. "-force_load" is good when compiling with Xcode. But, for example, when using it under Ubuntu with gcc, "-force_load" is not recognized. I'm looking for alternative options that can be used under other operating systems. Thanks.
The GNU linker's option is called --whole-archive, but while -force_load applies to one library, --whole-archive applies to all libraries after it on the command line. So the usual thing is to do --whole-archive somelib.a --no-whole-archive.
Usually you don't use ld directly but instead invoke it via GCC, in which case you have to tell GCC to pass the options on to the linker: -Wl,--whole-archive,somelib.a,--no-whole-archive
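A complete link line might look like this (main.o, somelib.a, and myapp stand in for your own files):
gcc main.o -Wl,--whole-archive somelib.a -Wl,--no-whole-archive -o myapp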
Is there any situation in which flags such as -ansi, -Wall, and -pedantic might be relevant during the linking part of the process?
What about the -O optimization flags? Are they only relevant during the compile steps or are they also relevant during linking?
Thanks!
In practice, no - but in theory, -ansi is a dialect option, so it could conceivably affect linking. I've seen similar behaviour with older versions of clang that use libc++ or libstdc++, when using C++11 or C++03 respectively. I find it easier to put these flags in the CC variable: CC = gcc -std=c99 or CC = gcc -std=c90 (ansi).
I just invoke C++ (or C) with $CXX or $CC out of habit. And they are passed by default to configure scripts.
I'm not aware of this being an issue with C, as long as the ABI and calling conventions haven't changed. C++, on the other hand, requires changes to the C++ runtime to support new language features. In either case, it's the compiler that invokes the linker with the relevant libraries.
There is link-time optimization in gcc:
-flto[=n]
This option runs the standard link-time optimizer. When invoked
with source code, it generates GIMPLE (one of GCC's internal
representations) and writes it to special ELF sections in the
object file. When the object files are linked together, all the
function bodies are read from these ELF sections and instantiated
as if they had been part of the same translation unit.
To use the link-time optimizer, -flto needs to be specified at
compile time and during the final link.
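A typical invocation, assuming two placeholder sources a.c and b.c, passes -flto (and the optimization level) both when compiling and when linking:
gcc -O2 -flto -c a.c
gcc -O2 -flto -c b.c
gcc -O2 -flto a.o b.o -o app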
My native gcc says that its triplet is the following.
> gcc -dumpmachine
x86_64-suse-linux
Here cpu-vendor-os are correspondingly x86_64, suse, linux. The latter means that glibc is in use(?). When I am cross-compiling a busybox-based system, the compiler triplet is something like avr32-linux-uclibc, where the os part is 'linux-uclibc', meaning that uClibc is used.
The difference between 'linux-glibc' and 'linux-uclibc' is (AFAIU) in collect2 behavior and libgcc.a content. Either glibc or uClibc is silently linked into the target binary.
The question is: how is the Linux kernel compiled by the same compilers? Since the kernel runs on bare metal, it must not be linked against any kind of user-space libc, and should use an appropriate libgcc.a.
gcc has all kinds of options to control how it works.
Here are a few relevant ones:
-nostdlib to omit linking to the standard libraries and startup code.
-nostdinc to omit searching for header files in the standard locations.
-ffreestanding to compile for a freestanding environment (such as a kernel).
You also do not need to use gcc for linking. You can invoke the linker directly, supply it with your own linker map, startup object code, and anything else you need.
The Linux kernel build seems, for arbitrary reasons, not to use -ffreestanding; it does control the linking stage, though, and ensures the kernel gets linked without pulling in any userspace code.
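As a rough sketch, a freestanding build looks like this (kernel.c, start.o, and kernel.ld are hypothetical names for your code, startup object, and linker script; the -isystem line picks up the compiler's own freestanding headers):
gcc -ffreestanding -nostdinc -isystem $(gcc -print-file-name=include) -c kernel.c -o kernel.o
ld -T kernel.ld -o kernel.elf start.o kernel.o
Nothing from the host's libc is pulled in; only what you link explicitly ends up in the image.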
For some rather complicated reasons, I have a set of files which I would like to compile separately and then link, but in such a way that the functions in one are inlined into the second. This is because I would like them to be compiled with different flags in GCC. I know I could sidestep the problem by looking into how to get around that, but I would like to know if this is possible.
EDIT 1:
If not, is it possible to compile the 'external' functions into a form of assembly that I could include in the other file? Yes, crazy, but also cool...
Having a quick look, this could well be an option. I guess it would be impossible to compile it in automatically, so could someone please give me a bit of information about assembly? I've only used basic ARM assembly. I've compiled two toy functions with the -S flag in GCC. How do I link registers with variables? Will they always be in the same order? The function will be highly optimised. Where should the extract start and end? Should I include .cfi_startproc at the start and .cfi_def_cfa 7, 8 at the end?
EDIT 2:
This post details how gcc can do link-time optimisations like this with -flto. Sadly this is only available from version 4.5, which I do not have, nor have the ability to install, since I do not have root access on the machine I need to compile this on. Another possible solution would be to explain how I could install a different version of GCC into a folder on a Unix machine.
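(For the record, GCC can be installed into an arbitrary prefix without root access, roughly like this; the version and paths are only examples, and GCC's own prerequisites such as GMP and MPFR must also be available:)
./configure --prefix=$HOME/opt/gcc-4.5
make
make install
export PATH=$HOME/opt/gcc-4.5/bin:$PATH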
As far as I know, gcc doesn't do link-time optimizations (inlining in particular), at least with the standard ld linker (it could be that the new gold linker does it, but I really don't think so). Clang in principle should be capable of doing it, since it is built on LLVM, which supports link-time optimization (it seems that your question is gcc specific, though).
From your question, though, it seems you are looking for a way to merge object files after compilation, not necessarily by inlining their contained functions. This can be done in multiple ways:
Archiving them into a static library with ar, e.g. ar rcs libfoo.a obj1.o obj2.o.
Combining them into a third relocatable object with ld's -r/--relocatable option, e.g. ld -r -o obj3.o obj1.o obj2.o.
Putting them into a shared library (beware that this requires compiling the objects with -fPIC), e.g. gcc -shared -o libfoo.so obj1.o obj2.o.
You could compile with the -c option to create a set of .o files, or even make a .so file. Then use the sequence you like in the linking phase of gcc.
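For instance, with two placeholder sources compiled under different flags and then linked together:
gcc -O3 -c fast.c
gcc -O0 -g -c debug.c
gcc fast.o debug.o -o app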
When writing a basic C program:
#include <stdio.h>

int main(void) {
    printf("program");
    return 0;
}
Is the definition of printf in "stdio.h" or is the printf function automatically linked?
Usually, in stdio.h there's only the prototype; the definition is inside a library that your object module is automatically linked against (the various msvcrt libraries for VC++ on Windows, libc-something for gcc on Linux).
By the way, it's <stdio.h>, not "stdio.h".
Usually they are automatically linked, but the compiler is allowed to implement them as it pleases (even by compiler magic).
The #include is still necessary, because it brings the declarations of the standard functions into scope.
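For example, the only thing stdio.h contributes for printf is a declaration along these lines; the compiled code lives in the C library:
int printf(const char *format, ...);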
Stricto sensu, the compiler and the linker are different things (and I am not sure that the C standard speaks of compilation and linking; it speaks more abstractly of translation and implementation issues).
For instance, on Linux, you often use gcc to translate your hello.c source file, and gcc is a "driving program" which runs the compiler cc1, the assembler as, the linker ld etc.
On Linux, the <stdio.h> header is an ordinary file. Run gcc -v -Wall -H hello.c -o hello to understand what is happening. The -v option asks gcc to show you the actual programs (cc1 and others) that are used. The -Wall flag asks for all warnings (don't ignore them!). The -H flag asks the compiler to show you the header files which are included.
The header file /usr/include/stdio.h is itself #include-ing other headers. At some point, the declaration of printf is seen, and the compiler parses it and adjusts its state accordingly.
Later, the gcc command would run the linker ld and ask it to link the standard C library (on my system /usr/lib/x86_64-linux-gnu/libc.so). This library contains the [object] code of printf.
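You can see it there yourself; on a glibc system, something like the following lists printf as a defined (T) symbol (the exact path of the shared object varies, since /usr/lib/x86_64-linux-gnu/libc.so is actually a linker script pointing at the real libc.so.6):
nm -D /lib/x86_64-linux-gnu/libc.so.6 | grep ' printf$'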
I am not sure I understand your question. Reading Wikipedia's pages about compilers, linkers, the Linux kernel, and system calls should be useful.
You should not want gcc to automagically link your own additional libraries; that would be confusing. (But if you really wanted to do that with GCC, read about GCC spec files.)