Does Xamarin.iOS do this when building for AdHoc/AppStore? I am binding to a lot of static/fat libraries that have architectures that support the simulator. Are the unused architectures stripped for AdHoc/AppStore?
Short Answer: Yes
Long Answer:
While Xamarin's linker is a managed linker (and only works on managed code), the extra architectures are removed from the final executable binary. That's true for i386, but it's also true for removing ARMv6 (from libraries) when building an ARMv7-only executable.
Also, since you're including a lot of bindings, you might want to:
Enable the managed linker on the binding .dll. That will remove unused code from the .dll, and it will also optimize the bindings. You can do so easily by adding the [LinkerSafe] attribute to your binding projects; and
Enable the new static registrar and include SmartLink=true in your [LinkWith] attribute. That enables the native linker to do a better job of removing native code (which is made even easier if unused code was already removed from the binding .dll).
Both options can reduce your final application size. You can watch my Evolve 2013 talk on Advanced iOS Build mechanics for more details on them.
I've been working on using the Meson build system for an embedded project. Since I'm working on an embedded platform, I've written a custom linker script and a custom invocation for the linker. I didn't have any problems until I tried to link newlib into my project, at which point the link errors started. Just before I got it working, the last error was undefined reference to main, a symbol I knew was clearly in the project.
By happenstance, I tried adding -mcpu=cortex-m4 to my linker invocation (I am using gcc to link; I'm told this is quite typical instead of directly calling ld). It worked! Now, my only question is: why?
Perhaps I am missing something about how the linking process actually works, but considering I am just producing an ELF file, I didn't think it would be important to specify the CPU architecture to the linker. Is this a newlib thing, or has gcc just been doing magic behind the scenes for me that I haven't seen before?
For reference, here's my project (it's not complete)
In general, you should always link via the compiler driver (link forms of the gcc command), not via direct invocation of ld. If you're developing for bare metal on a particular exact target, it's possible to determine the set of linker arguments you need and use ld directly, but there's a lot that the compiler driver takes care of for you, and it's usually better to let it. (If you don't have a single fixed target, there are unlimited combinations of possibilities and no way you can reproduce all present and future ones someone may care about.)
You can still pass whatever options you like to the linker, e.g. custom linker scripts, via -Wl,... option forms.
As for why the specific target architecture ISA level could matter to linking, linking is not a dumb process of just sticking together binary chunks. Linking can involve patching up (relocations) or even generating (thunks for distant jump targets, etc.) code, in which case the linker may need to care what particular ISA level/variant it's targeting.
Such linker options ensure that the appropriate standard library and start-up code are linked when these are defaulted rather than explicitly specified or overridden.
A single ARM toolchain supports a variety of ARM architecture variants and options: they may be big- or little-endian, have various instruction sets (ARM, Thumb, Thumb-2, ARM64, etc.), and various extensions such as SIMD or DSP units. The linker requires the architecture information to select the correct library to link for both performance and binary compatibility.
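As a hedged sketch of what "link through the compiler driver" looks like in practice, a bare-metal Cortex-M4 link step might resemble the following Makefile fragment (the toolchain name, flags, and linker-script name are assumptions for illustration, not taken from the question):

```make
# Hypothetical Makefile fragment for a Cortex-M4 bare-metal link.
# Linking through gcc (not ld) lets the driver select the multilib
# variants of libc/libgcc/crt0 that match -mcpu/-mfloat-abi.
CC      = arm-none-eabi-gcc
CFLAGS  = -mcpu=cortex-m4 -mthumb -mfloat-abi=hard -mfpu=fpv4-sp-d16
LDFLAGS = $(CFLAGS) -T linker.ld -Wl,-Map=firmware.map --specs=nano.specs

firmware.elf: main.o startup.o
	$(CC) $(LDFLAGS) -o $@ $^
```

Note that the architecture flags appear in the link step too: that is exactly why adding -mcpu=cortex-m4 to the link invocation fixed the undefined references into newlib.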
It seems like CMake is fairly entrenched in its view that there should be one, and only one, CMAKE_CXX_COMPILER for all C++ source files. I can't find a way to override this on a per-target basis. This makes a mix of host-and-cross compiling in a single CMakeLists.txt very difficult with the built-in CMake facilities.
So, my question is: what's the best way to use multiple compilers for the same language (i.e. C++)?
It's impossible to do this with CMake.
CMake only keeps one set of compiler properties, which is shared by all targets in a CMakeLists.txt file. If you want to use two compilers, you need to run CMake twice. This is even true when e.g. building 32-bit and 64-bit binaries with the same compiler toolchain.
The quick-and-dirty way around this is using custom commands. But then you end up with what are basically glorified shell-scripts, which is probably not what you want.
The clean solution is: Don't put them in the same CMakeLists.txt! You can't link between different architectures anyway, so there is no need for them to be in the same file. You may reduce redundancies by refactoring common parts of the CMake scripts into separate files and include() them.
The main disadvantage here is that you lose the ability to build with a single command, but you can solve that by writing a wrapper in your favorite scripting language that takes care of calling the different CMake-makefiles.
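To sketch the include() idea from above (all file and variable names here are invented for the example), the host and target build trees can each pull in a shared settings file:

```cmake
# common/settings.cmake: shared between the host and target build trees.
set(COMMON_SOURCES src/core.cpp src/util.cpp)
add_compile_options(-Wall -Wextra)

# host/CMakeLists.txt and target/CMakeLists.txt each start with:
cmake_minimum_required(VERSION 3.13)
project(myproj CXX)
include(${CMAKE_CURRENT_LIST_DIR}/../common/settings.cmake)
add_executable(app ${COMMON_SOURCES} src/main.cpp)
```

Each tree is then configured separately, e.g. once with the host compiler and once with a cross toolchain file, so CMake's one-compiler-per-configure rule is never violated.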
You might want to look at ExternalProject:
http://www.kitware.com/media/html/BuildingExternalProjectsWithCMake2.8.html
Not impossible, contrary to what the top answer suggests. I have the same problem as the OP: some sources cross-compiled for a Raspberry Pi Pico, and some unit tests that I run on my host system.
To make this work, I'm using the very shameful "set" to override the compiler in the CMakeLists.txt for my test folder. Works great.
if(DEFINED ENV{HOST_CXX_COMPILER})
    set(CMAKE_CXX_COMPILER $ENV{HOST_CXX_COMPILER})
else()
    set(CMAKE_CXX_COMPILER "g++")
endif()
set(CMAKE_CXX_FLAGS "")
The CMake devs/community seem very much against using set to change the compiler, for some reason. They assume that you need to use one compiler for the entire project, which is an incorrect assumption for embedded systems projects.
My solution above works, and I think it fits the philosophy. Users can still change their chosen compiler via an environment variable; if it's not set, then I assume g++. set only changes variables for the current scope, so this doesn't affect the rest of the project.
To extend @Bill Hoffman's answer:
Build your project as a super-build, using some kind of template like the one here: https://github.com/Sarcasm/cmake-superbuild
This will configure both the dependencies and your project as ExternalProjects (standalone CMake configure/build/install environments).
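A minimal super-build sketch along these lines (directory names and the toolchain-file path are hypothetical) could look like:

```cmake
# Top-level CMakeLists.txt: each sub-build gets its own compiler.
cmake_minimum_required(VERSION 3.13)
project(superbuild NONE)
include(ExternalProject)

# Target firmware, cross-compiled via a toolchain file.
ExternalProject_Add(firmware
  SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/firmware
  CMAKE_ARGS -DCMAKE_TOOLCHAIN_FILE=${CMAKE_CURRENT_SOURCE_DIR}/arm-toolchain.cmake
  INSTALL_COMMAND "")

# Host-side unit tests, built with the default host compiler.
ExternalProject_Add(tests
  SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/tests
  INSTALL_COMMAND "")
```

Because each ExternalProject runs its own configure step, the two sub-builds can use entirely different compilers while a single top-level build drives both.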
Configuration
I use autotools (autoreconf -iv and ./configure) to generate correct makefiles. On my development machine (Fedora) everything works correctly. For make check I use the library libcheck, and from autotools I use Libtool. On Fedora the check library is dynamic: libcheck.so.0.0.0 or something similar. It works.
The Issue
When I push the commits to my repo on GitHub and open a pull request, the result is tested on Travis CI, which uses Ubuntu as a platform. On Ubuntu, libcheck is a static library: a libcheck.a and a libcheck_pic.a.
When Travis does a make check I get the following error message:
/usr/bin/ld: /usr/bin/../lib/gcc/x86_64-linux-gnu/4.9/../../../libcheck.a(check.o):
relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a
shared object; recompile with -fPIC
/usr/bin/../lib/gcc/x86_64-linux-gnu/4.9/../../../libcheck.a: could not read
symbols: Bad value
Which means I have to somehow let configure determine what library I need. I suspect I need libcheck_pic.a for Ubuntu and the regular libcheck.so for Fedora.
The question
Does anyone know how to integrate this into configure.ac and test/Makefile.am using libtool? I would prefer to stay in line with autotools way of life.
I couldn't find usable information using Google; there are a lot of questions out there about the difference between static and dynamic linking, which is not what I need.
I would much appreciate it if anyone could point me in the right direction, or maybe someone has even solved this already?
I suspect you're right that the library you want to use on the CI system is libcheck_pic.a: its name suggests that the routines within are compiled as position-independent code, which is exactly what the error message tells you to provide.
One way to approach the problem, then, would be to use libcheck_pic if it is available, and to fall back to plain libcheck otherwise. That's not too hard to configure your Autotools-based build system to do. You then record the appropriate library name in an output variable, and use that in your (auto)make files.
Autoconf's AC_SEARCH_LIBS macro serves exactly this kind of prioritized library search, but it has the side effect, likely unwanted in this case, of modifying the LIBS variable. You can nevertheless make it work. Something like this might do it, for example:
LIBS_save=$LIBS
AC_SEARCH_LIBS([ck_assert], [check_pic check], [
  # Optional: add a test to verify that the chosen lib really provides PIC code.
  # Set LIBCHECK to the initial substring of $LIBS up to but excluding the first space.
  LIBCHECK=${LIBS%% *}
], [
  # or maybe make it a warning, and disable your test suite when libcheck
  # is not available.
  AC_MSG_ERROR([A PIC version of libcheck is required])
])
AC_SUBST([LIBCHECK])
LIBS=$LIBS_save
I presume you know what to do with $(LIBCHECK) on the Make side.
As written, that has the limitation that if there is no PIC version of libcheck available then you won't find out until make or maybe make check. That's undesirable, and you could add Autoconf code to detect that situation if it is undesirable enough.
As an altogether different approach, you could consider building your tests statically (add -static to the appropriate *_LDFLAGS variable). Of course, this has the opposite problem: if a static version of the library is not available, then the build or tests fail. Also, it requires building a static version of your own code if you're not doing so already, and furthermore, it is the static version that will be tested.
For greatest flexibility, you could consider combining those two approaches. You might set up a fallback from one to the other, or you might set up separate targets for testing static code and PIC code, and exercise whichever of them (possibly both) is supported by the libraries available on the build system.
I am starting to learn unit testing. I use Unity, and it works well with MinGW in Eclipse on Windows. I use different configurations for debug, release, and tests. This works well with the CDT plugin.
But my goal is to unit test my embedded code for an STM. So I use arm-gcc with the ARM GCC Eclipse plugin. I planned to have one configuration for compiling the debug and release code for the target, and a configuration using MinGW to compile and execute the tests on the PC (just the hardware-independent parts).
With the Eclipse plugin, I cannot compile code with anything other than arm-gcc.
Is there a way to have one project with configurations and support for both the embedded target and the PC?
Thanks
As noted above you need a makefile pointing at two different targets with different compiler options depending on the target.
You will need to ensure portability in your code.
I have accomplished this most often using CMake, outlining different compiler paths and linker flags for unit tests versus the target. This way I can also easily link in any unit test libraries while keeping them external to my target. In the end CMake produces a Makefile, but I'm not spending time worrying about make syntax, which, while I can read it, often seems like voodoo.
Doing this entirely within a single Eclipse project is possible. You need to configure your project for multiple targets, with a different compiler used for each, and it will require some coaxing to get Eclipse to behave.
If your goal is to do it entirely within Eclipse, I suggest reading this as a primer.
If you want to go the other route, here is a CMake primer.
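As a hedged sketch of the CMake route, a minimal cross toolchain file (the compiler names are assumptions for an STM32-style arm-none-eabi setup) could look like:

```cmake
# arm-toolchain.cmake: pass with
#   cmake -DCMAKE_TOOLCHAIN_FILE=arm-toolchain.cmake ..
# Configuring WITHOUT this flag gives a host build (MinGW/gcc)
# for compiling and running the unit tests on the PC.
set(CMAKE_SYSTEM_NAME Generic)
set(CMAKE_SYSTEM_PROCESSOR arm)
set(CMAKE_C_COMPILER arm-none-eabi-gcc)
set(CMAKE_CXX_COMPILER arm-none-eabi-g++)
# Bare metal has no OS to run test binaries, so have CMake's
# compiler checks build a static library instead of an executable.
set(CMAKE_TRY_COMPILE_TARGET_TYPE STATIC_LIBRARY)
```

Two out-of-source build directories (one configured with the toolchain file, one without) then give you the target build and the host test build side by side.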
Short answer: Makefile.
But I guess NEON assembly is the bigger issue.
Using intrinsics instead at least opens up the possibility of linking against a simulator library, and there are indeed a lot of such libraries written in standard C that allow code with intrinsics to be portable.
However, the poor performance of GCC NEON intrinsics forces a lot of people to sacrifice portability for performance.
If your code unfortunately contains assembly, you won't even be able to compile it before translating the assembly back to standard C.
I would like to have control over the type of the libraries that get found/linked with my binaries in CMake. The final goal is, to generate binaries "as static as possible" that is to link statically against every library that does have a static version available. This is important as would enable portability of binaries across different systems during testing.
At the moment this seems quite difficult to achieve, as the FindXXX.cmake packages, or more precisely the find_library command, always pick up the dynamic libraries whenever both static and dynamic versions are available.
Tips on how to implement this functionality - preferably in an elegant way - would be very welcome!
I did some investigation and although I could not find a satisfying solution to the problem, I did find a half-solution.
The problem of static builds boils down to three things:
1. Building and linking the project's internal libraries.
Pretty simple, one just has to flip the BUILD_SHARED_LIBS switch OFF.
2. Finding static versions of external libraries.
The only way seems to be setting CMAKE_FIND_LIBRARY_SUFFIXES to contain the desired file suffix(es) (it's a priority list).
This solution is quite a "dirty" one and very much against CMake's cross-platform aspirations. IMHO this should be handled behind the scenes by CMake, but as far as I understood, because of the ".lib" confusion on Windows, it seems that the CMake developers prefer the current implementation.
3. Linking statically against system libraries.
CMake provides the option LINK_SEARCH_END_STATIC which, according to the documentation, will "End a link line such that static system libraries are used."
One would think this is it, the problem is solved. However, it seems that the current implementation is not up to the task. If the option is turned on, CMake generates an implicit linker call whose argument list ends with the options passed to the linker, including -Wl,-Bstatic. However, this is not enough: instructing only the linker to link statically results in an error, in my case /usr/bin/ld: cannot find -lgcc_s. What is missing is telling gcc as well that we need static linking, through the -static argument, which CMake does not add to the linker call. I think this is a bug, but I haven't managed to get confirmation from the developers yet.
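Putting the three points together, a workaround sketch could look like the following (treat the suffix override and the -static flag as assumptions that fit GCC on Linux, not a portable recipe):

```cmake
# 1. Build the project's internal libraries statically.
set(BUILD_SHARED_LIBS OFF)

# 2. Make find_library accept only static archives.
set(CMAKE_FIND_LIBRARY_SUFFIXES ".a")

# 3. Tell the gcc driver, not just ld, that the link is static,
#    so it picks static libgcc/libc instead of failing on -lgcc_s.
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -static")
```

Step 3 is the part LINK_SEARCH_END_STATIC does not cover, since that option only appends -Wl,-Bstatic to the linker arguments.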
Finally, I think all this could and should be done by CMake behind the scenes. After all, it's not so complicated, except that it's impossible on Windows, if that counts as complicated...
A well made FindXXX.cmake file will include something for this. If you look in FindBoost.cmake, you can set the Boost_USE_STATIC_LIBS variable to control whether or not it finds static or shared libraries. Unfortunately, a majority of packages do not implement this.
If a module uses the find_library command (most do), then you can change CMake's behavior through CMAKE_FIND_LIBRARY_SUFFIXES variable. Here's the relevant CMake code from FindBoost.cmake to use this:
IF(WIN32)
SET(CMAKE_FIND_LIBRARY_SUFFIXES .lib .a ${CMAKE_FIND_LIBRARY_SUFFIXES})
ELSE(WIN32)
SET(CMAKE_FIND_LIBRARY_SUFFIXES .a ${CMAKE_FIND_LIBRARY_SUFFIXES})
ENDIF(WIN32)
You can either put this before calling find_package, or, better, you can modify the .cmake files themselves and contribute back to the community.
For the .cmake files I use in my project, I keep all of them in their own folder within source control. I did this because I found that having the correct .cmake file for some libraries was inconsistent and keeping my own copy allowed me to make modifications and ensure that everyone who checked out the code would have the same build system files.