GCC m68k pc-relative (C)

I was using a Microtec toolchain to generate an executable binary with relocatable code (pc-relative) and data at a fixed address (absolute data). Today, this toolchain does not work on 64-bit Windows 7. The idea is to replace the Microtec toolchain for the 68000 with the GNU toolchain (GCC 4.8.0).
But I cannot find equivalent options in the gcc compiler:
Microtec compiler "MCC68K" with:
"-Mcp": Directs the compiler to use PC-relative addressing for all code references.
"-Mda": Directs the compiler to use absolute addressing for all data references.
Gcc (m68k-elf-gcc) with:
-mpcrel
With gcc I am unable to build relocatable code with non-relocatable data, as the Microtec compiler does. With "-mpcrel", everything (code and data) is relocatable.
Do you have an idea?
Sorry for my bad English.
Thanks.

As far as I know, there is no way to achieve the same result with the GNU m68k toolchain.
-mpcrel will generate fully position-independent code with
pc-relative addressing for code as well as for data, resulting in a
limited program/data size (pc-relative offsets cannot exceed 16 bits).
-fpic and -fPIC will generate position-independent code with
relocatable binaries, but will require a special loader that performs the in-place relocation.

From the gcc docs:
-fpic Generate position-independent code (PIC) suitable for use in a shared library,...
-fPIC If supported for the target machine, emit position-independent code, suitable for dynamic linking and avoiding any limit on the size
of the global offset table.
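
For illustration, here is a minimal sketch (the file and symbol names are hypothetical) of what -mpcrel changes; compiling it with m68k-elf-gcc -mpcrel -S and reading the generated assembly shows pc-relative addressing for both the call and the data access:

/* example.c -- a hypothetical sketch, not from the question
 *
 *   m68k-elf-gcc -mpcrel -S example.c
 */
int counter;              /* a data reference         */
int increment(void);      /* a code (call) reference  */

int step(void)
{
    /* With -mpcrel, both the access to counter and the call to
     * increment() are emitted pc-relative; without it, the data
     * access uses an absolute address. */
    counter += increment();
    return counter;
}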


Why is my executable bigger after linking?

I have a small C program which I need to run on different chips.
The executable should be smaller than 32 kB.
For this I have several toolchains with different compilers for ARM, MIPS, etc.
The program consists of several files, each of which is compiled to an object file; these are then linked together into an executable.
When I use the system gcc (x86), my executable is 15 kB.
With the ARM toolchain the executable is 65 kB.
With another toolchain it is 47 kB.
For ARM, for example, all the objects included in the executable total 14 kB.
The objects are compiled with the following options:
-march=armv7-m -mtune=cortex-m3 -mthumb -msoft-float -Os
For linking the following options are used:
-s -specs=nosys.specs -march=armv7-m
The nosys.specs library is 274 bytes.
Why is my executable still so much bigger (65 kB) when my code is only 14 kB and the library is 274 bytes?
Update:
After the suggestions from the answer I removed all malloc and printf calls from my code and removed the unused includes. I also added the compile flags -ffunction-sections -fdata-sections and the link flag --gc-sections, but the executable is still too big.
For experimenting I created a dummy program:
int main()
{
    return 1;
}
When I compile the program with different compilers I get very different executable sizes:
8.3 KB : gcc -Os
22 KB : r2-gcc -Os
40 KB : arm-gcc --specs=nosys.specs -Os
1.1 KB : avr-gcc -Os
So why is my arm-gcc executable so much bigger?
The avr-gcc executable does static linking as well, I guess.
Your x86 executable is probably being dynamically linked, so any standard library functions you use -- malloc, printf, string and math functions, etc. -- are not included in the binary.
The ARM executable is being statically linked, so those functions must be included in your binary. This is why it's larger. To make it smaller, you may want to consider compiling with -ffunction-sections -fdata-sections, then linking with --gc-sections to discard any unused functions or data from your binary.
(The "nosys.specs library" is not a library. It's a configuration file. The real library files are elsewhere.)
Porting embedded software depends on the target hardware and software platform.
Hardware platforms are split into MCUs and CPUs that can run a Linux OS.
The software platform includes the compiler toolchain and libraries.
It is meaningless to compare program image sizes between an MCU and an x86 hardware platform, but it is worth comparing image sizes on the same type of CPU using different toolchains.

Code::Blocks + MinGW: minimize the size of a static library

I've tried passing -ffunction-sections -fdata-sections to the compiler, but that doesn't seem to have the desired effect. As far as I understand, I also have to pass -Wl,--gc-sections to the linker, but I'm not linking the files at this point. I just want a .a library file that is as small as possible, with minimal redundant code/data.
The compiler performs optimization based on the knowledge it has of the program. Optimization levels -O2 and above, in particular, enable unit-at-a-time mode, which allows the compiler to consider information gained from later functions in the file when compiling a function. Compiling multiple files at once to a single output file in unit-at-a-time mode allows the compiler to use information gained from all of the files when compiling each of them.
Not all optimizations are controlled directly by a flag.
-ffunction-sections
-fdata-sections
Place each function or data item into its own section in the output file if the target supports arbitrary sections. The name of the function or the name of the data item determines the section's name in the output file.
Use these options on systems where the linker can perform optimizations to improve locality of reference in the instruction space. Most systems using the ELF object format and SPARC processors running Solaris 2 have linkers with such optimizations. AIX may have these optimizations in the future.
Only use these options when there are significant benefits from doing so. When you specify these options, the assembler and linker will create larger object and executable files and will also be slower. You will not be able to use gprof on all systems if you specify this option and you may have problems with debugging if you specify both this option and -g.
You can use the following link for more details:
http://gcc.gnu.org/onlinedocs/gcc-4.0.4/gcc/Optimize-Options.html
The following will reduce the size of your compiled objects (and thus the static library):
-Os -g0 -fvisibility=hidden -fomit-frame-pointer -mno-accumulate-outgoing-args -finline-small-functions -fno-unwind-tables -fno-asynchronous-unwind-tables -s
The following will increase the size of objects (though they may make the final binary smaller):
-ffunction-sections -fdata-sections -flto -g -g1 -g2 -g3
The only way to really make a static library smaller is to remove the unneeded code by compiling the static library with -ffunction-sections -fdata-sections and linking the final product with -Wl,--gc-sections,--print-gc-sections to find out what parts are cruft. Then go back and remove those functions/variables (this also works for making smaller shared libraries - see http://libraryopt.sf.net)
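
A minimal sketch of that workflow (file names are illustrative):

/* slim.c -- with -ffunction-sections every function lands in its own
 * section, so the final link can discard the ones nothing calls.
 *
 *   gcc -Os -ffunction-sections -fdata-sections -c slim.c
 *   ar rcs libslim.a slim.o
 *   gcc main.c libslim.a -Wl,--gc-sections,--print-gc-sections -o app
 */
int used(void)   { return 1; }   /* kept if main.c calls it        */
int unused(void) { return 2; }   /* reported and dropped by the GC */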
Compare the size of the library with and without -g in the compiler flags. I can easily imagine a static library doubling in size when it includes debug information. If that is what you saw, chances are you have already stripped the debug symbols from the library and hence cannot shrink it significantly further. This could explain your memory of cutting the size in half at some point.
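
To check how much of an archive is debug information, build the same member with and without -g and compare (a sketch; file names are illustrative):

/* Build the same object with and without debug info:
 *   gcc -g  -c foo.c -o foo_dbg.o
 *   gcc -g0 -c foo.c -o foo.o
 *   size foo_dbg.o foo.o      -- text/data/bss sizes are identical
 *   ls -l foo_dbg.o foo.o     -- file sizes differ considerably
 *   strip -g foo_dbg.o        -- removes the .debug_* sections
 */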

GNU Triplet, GCC and Linux kernel compiling

My native gcc says that its triplet is the following.
> gcc -dumpmachine
x86_64-suse-linux
Here cpu-vendor-os are, correspondingly, x86_64, suse, linux. The latter means that glibc is in use(?). When I am cross-compiling a BusyBox-based system, the compiler triplet is something like avr32-linux-uclibc, where the OS part is 'linux-uclibc', meaning that uClibc is used.
The difference between 'linux-glibc' and 'linux-uclibc' is (AFAIU) in collect2 behavior and libgcc.a content. Either glibc or uClibc is silently linked into the target binary.
The question is: how is the Linux kernel compiled by the same compilers? Since the kernel runs on bare metal, it must not be linked with any kind of user-space libc, and it should use an appropriate libgcc.a.
gcc has all kinds of options to control how it works.
Here's a few relevant ones:
-nostdlib to omit linking to the standard libraries and startup code
-nostdinc to omit searching for header files in the standard locations.
-ffreestanding to compile for a freestanding environment (such as a kernel)
You also do not need to use gcc for linking. You can invoke the linker directly, supply it with your own linker map, startup object code, and anything else you need.
The Linux kernel build, for whatever reason, seems not to use -ffreestanding; it does control the linking stage, though, and ensures the kernel is linked without pulling in any userspace code.
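
As a sketch of what a freestanding, libc-free build looks like (the file name and syscall numbers assume x86-64 Linux; this is illustrative, not how the kernel itself is built):

/* stub.c -- no libc, no startup files; _start is our own entry point.
 *
 *   gcc -ffreestanding -nostdlib -static -o stub stub.c
 */
void _start(void)
{
    /* exit(42) via a raw Linux syscall, since there is no libc */
    __asm__ volatile ("mov $60, %%rax\n\t"   /* SYS_exit on x86-64 */
                      "mov $42, %%rdi\n\t"
                      "syscall"
                      : : : "rax", "rdi");
    __builtin_unreachable();
}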

C: Compiling pre-compiled code as inline

For some rather complicated reasons, I have a set of files which I would like to compile separately and then link, but in such a way that the functions of one are placed inline in the second. This is because I would like them to be compiled with different flags in GCC. I know I could avoid the problem by looking into how to get around those flag differences, but I would like to know if this is possible.
EDIT 1:
If not, is it possible to compile the 'external' functions into a form of assembly that I could include in the other file? Yes, crazy, but also cool...
Having a quick look, this could well be an option. I guess it would be impossible to compile it in automatically, so could someone please give me a bit of information about assembly? I've only used basic ARM assembly. I've compiled two toy functions with the -S flag in GCC. How do I link registers with variables? Will they always be in the same order? The function will be highly optimised. Where should I start and end the extract? Should I include .cfi_startproc at the start and .cfi_def_cfa 7, 8 at the end?
EDIT 2:
This post details how gcc can do link-time optimisations like this with -flto. Sadly this is only available with version 4.5, which I do not have nor have the ability to install since I do not have root access of the machine I need to compile this on. Another possible solution would be to explain how I could install a different version of GCC into a folder on a unix machine.
As far as I know, gcc doesn't do link-time optimizations (inlining in particular), at least with the standard ld linker (it could be that the new gold linker does it, but I really don't think so). Clang in principle should be capable of doing it, since it is built on LLVM, which supports link-time optimization (it seems that your question is gcc-specific, though).
From your question, though, it seems you are looking for a way to merge object files after compilation, not necessarily by inlining their contained functions. This can be done in multiple ways:
Archiving them into a static library with ar, e.g. ar rcs libfoo.a obj1.o obj2.o.
Combining them into a third relocatable object (ld's --relocatable option), e.g. ld --relocatable -o obj3.o obj1.o obj2.o.
Putting them into a shared library (beware that this requires compiling the objects with -fPIC), e.g. gcc -shared -o libfoo.so obj1.o obj2.o.
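
Regarding the -flto approach mentioned in EDIT 2, here is a minimal sketch (file names are illustrative; this needs GCC 4.5 or newer):

/* hot.c -- compiled separately, possibly with different flags */
int hot_add(int a, int b) { return a + b; }

/* main.c */
int hot_add(int a, int b);
int main(void) { return hot_add(1, 2); }

/* With link-time optimization the definition of hot_add can be
 * inlined into main() when the objects are linked:
 *   gcc -O2 -flto -c hot.c
 *   gcc -O2 -flto -c main.c
 *   gcc -O2 -flto -o prog hot.o main.o
 */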
You could compile with the -c option to create a set of .o files, or even make a .so file. Then use the sequence you like in the linking phase of gcc.

Can gcc generate different size object code?

Which option should be enabled in gcc to generate 16-bit, 32-bit, or 64-bit object code? Are there separate options for generating each type of object code?
The bitness of the generated object code is determined by the target architecture selected when gcc was built. If you want to build for a different platform, you should build a cross compiler for your desired target platform.
Note, however, that GCC does not support generating 16-bit x86 code. As an exception, when your compiler supports both the 32-bit and the 64-bit x86 targets, you can use -m32 or -m64 to select the desired output format.
To force gcc to generate 32-bit code you would give it the -m32 flag. To force it to generate 64-bit code you would give it the -m64 flag. I don't know of any option for 16-bit.
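
A quick way to see the effect of -m32 and -m64 (a sketch, assuming an x86 machine with 32-bit multilib support installed):

/* width.c */
#include <stdio.h>

int main(void)
{
    /* prints 4 when built with -m32 and 8 when built with -m64 */
    printf("%zu\n", sizeof(void *));
    return 0;
}

/* gcc -m32 width.c -o width32
 * gcc -m64 width.c -o width64
 */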
