Meson: Change the C compiler based on the target

I am building an operating system that supports both 32-bit and 64-bit, using the Meson build system. I have just started adding support for 64-bit, but I came across a problem. When I use the 64-bit C compiler, both the kernel and stage2 get compiled with that compiler. This is the problem: the stage2 folder has to be compiled with the 32-bit C compiler, not the 64-bit one. Is there any way I can achieve this in Meson? Should I switch to CMake?

This is the problem: the stage2 folder has to be compiled with the 32-bit C compiler, not the 64-bit one.
Neither build system will let you do this in a single build... both CMake and Meson have a deeply held assumption that there is one compiler per language.[1] If you need to use multiple compilers, you will need to split your build into multiple independent projects. How you orchestrate building them is up to you... with CMake, I would suggest using a superbuild with ExternalProject. With Meson, I'm not sure what the standard approach is, but one possible arrangement is sketched below the footnote.
[1] Technically, Meson also lets you define a host/native compiler (for compiling build-time tools), but that is not suitable for this case, since the host might not be either target and you might well end up wanting three or more compilers in use (e.g. for ARM).
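For what it's worth, one ad-hoc arrangement that works with Meson is to give each compiler its own independent project and drive both from a wrapper script, passing each a cross file that names its compiler. A minimal sketch, where the directory layout and cross-file names are hypothetical:

    # Two independent Meson projects, one per compiler (names are illustrative).
    meson setup build/stage2 stage2/ --cross-file cross/i686-elf.txt
    meson setup build/kernel kernel/ --cross-file cross/x86_64-elf.txt
    ninja -C build/stage2
    ninja -C build/kernel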
Should I switch to CMake?
Probably not, since you already have something working with Meson. But if your build rules get complex enough, you might find CMake's ability to write functions and abstractions to be a greater benefit than a liability.

Related

Is it possible to compile C for many architectures on Linux?

I'm developing a few programs on my PC, which runs 64-bit Ubuntu.
I'd like to run these applications on another PC, which runs a 32-bit system. Is it possible to compile them on my machine, or do I need to recompile the applications on the other one?
In general you need to provide the compiler with an environment similar to the target execution environment. Depending on how similar or different one environment is to another, this may be simple or complicated.
Assuming the compiler is GCC, you should only need to add -m32 to your compilation flags to make the binaries work on a 32-bit system, all other things being equal. Ensure you have the necessary 32-bit dependencies installed on your system (this means the base C library as well as a 32-bit version of each library your application links against).
Since you are only compiling for x86 on a 64-bit host, the path to this is generally simple. I would recommend, however, setting up a dedicated environment in which to compile -- typically some kind of chroot (see pbuilder, schroot, chroot, debootstrap and others).
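To make the -m32 route concrete, here is a minimal sketch; the package name assumes a Debian/Ubuntu-style host, and the file names are illustrative:

    # Install 32-bit build support, then compile a 32-bit binary.
    sudo apt-get install gcc-multilib
    gcc -m32 -o myapp main.c
    file myapp    # should report "ELF 32-bit"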
There are compiler settings/flags that should allow you to do this on your machine; which specific ones you need would depend on the compiler you are using.

Embedded C code and unit tests without cross-compiling

I am starting to learn unit testing. I use Unity, and it works well with MinGW in Eclipse on Windows. I use different configurations for debug, release and tests. This works well with the CDT plugin.
But my goal is to unit test my embedded code for an STM microcontroller. So I use arm-gcc with the arm-gcc Eclipse plugin. I planned to have a configuration for compiling the debug and release code for the target, and a configuration using MinGW to compile and execute the tests on the PC (just the hardware-independent parts).
With the Eclipse plugin, I cannot compile code that does not use arm-gcc.
Is there a way to have one project with configurations and support for both the embedded target and the PC?
Thanks
As noted above you need a makefile pointing at two different targets with different compiler options depending on the target.
You will need to ensure portability in your code.
I have accomplished this most often using CMake, outlining different compiler paths and linker flags for unit tests versus the target. This way I can also easily link in any unit test libraries while keeping them external to my target. In the end CMake produces a Makefile, but I'm not spending time worrying about make syntax, which, while I can read it, often seems like voodoo.
Doing this entirely within a single Eclipse project is possible. You need to configure your project for multiple targets, with a different compiler used for each, and it will require some coaxing to get Eclipse to behave.
If your goal is to do it entirely within Eclipse, I suggest reading this as a primer.
If you want to go the other route, here is a CMake primer.
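To illustrate the CMake route described above, the usual pattern is one source tree configured into two build directories, one per toolchain; the toolchain-file name here is hypothetical:

    # Target build uses a cross toolchain file; the test build uses the host compiler.
    cmake -S . -B build-target -DCMAKE_TOOLCHAIN_FILE=cmake/arm-none-eabi.cmake
    cmake -S . -B build-tests
    cmake --build build-tests
    ctest --test-dir build-tests    # run the unit tests on the PC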
Short answer: Makefile.
But I guess NEON assembly is a bigger issue.
Using intrinsics instead at least leaves open the possibility of linking against a simulator library, and there are indeed a lot of such libraries written in standard C that allow code with intrinsics to remain portable.
However, the poor performance of GCC's NEON intrinsics forces a lot of people to sacrifice portability for performance.
If your code unfortunately contains assembly, you won't even be able to compile it before translating the assembly back to standard C.

Bootstrapping a cross-platform compiler

Suppose you are designing, and writing a compiler for, a new language called Foo, among whose virtues is intended to be that it's particularly good for implementing compilers. A classic approach is to write the first version of the compiler in C, and use that to write the second version in Foo, after which it becomes self-compiling.
This does mean you have to be careful to keep backup copies of the binary (as opposed to most programs where you only have to keep backup copies of the source); once the language has evolved away from the first version, if you lost all copies of the binary, you would have nothing capable of compiling the current version. So be it.
But suppose it is intended to support both Linux and Windows. As long as it is in fact running on both platforms, it can compile itself on each platform, no problem. Supposing however you lost the binary on one platform (or had reason to suspect it had been compromised by an attacker); now there is a problem. And having to safeguard the binary for every supported platform is at least one more failure point than I'm comfortable with.
One solution would be to make it a cross-compiler, such that the binary on either platform can target both platforms.
This is not quite as easy as it sounds - while there is no problem selecting the binary output format, each platform provides the system API in the form of C header files, which normally only exist on their native platform, e.g. there is no guarantee code compiled against the Windows stdio.h will work on Linux even if compiled into Linux binary format.
Perhaps that problem could be solved by downloading the Linux header files onto a Windows box and using the Windows binary to cross-compile a Linux binary.
Are there any caveats with that solution I'm missing?
Another solution might be to maintain a separate minimum bootstrap compiler in Python, that compiles Foo into portable C, accepting only that subset of the language needed by the main Foo compiler and performing minimum error checking and no optimization, the intent being that the bootstrap compiler will thus remain simple enough that maintaining it across subsequent language versions wouldn't cost very much.
Again, are there any caveats with that solution I'm missing?
What methods have people used to solve this problem in the past?
This is a problem for C compilers themselves. It's typically solved by the use of a cross-compiler, exactly as you suggest.
The process of cross-compiling a compiler is no more difficult than cross-compiling any other project: that is to say, it's trickier than you'd like, but by no means impossible.
Of course, you first need the cross-compiler itself. This probably means some major surgery to your build-configuration system, and you'll need some kind of "sysroot" taken from the target (headers, libraries, anything else you'll need to reference in a build).
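With GCC, for instance, the sysroot is typically wired in with the --sysroot flag; a sketch, where the toolchain prefix and path are illustrative:

    # Cross-compile against headers and libraries copied from the target system.
    x86_64-linux-gnu-gcc --sysroot=/opt/sysroots/linux-x86_64 -o foo foo.c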
So, in the end it depends on how your compiler is structured. Either it's easier to re-bootstrap using historical sources, repeating each phase of language compatibility you went through in the first place (you did use source revision control, right?), or it's easier to implement a cross-compiler configuration. I can't tell you which from here.
For many years, the GCC compiler was written only in standard-compliant C code, for exactly this reason: the developers wanted to be able to bring it up on any OS, given only the native C compiler for that system. Only in 2012 was it decided that C++ is now sufficiently widespread that the compiler itself can be written in it. Even then, they are only permitting themselves a subset of the language. In the future, if anybody wants to port GCC to a platform that does not already have C++, they will need to either use a cross-compiler, or first port GCC 4.7 (the last major C-only version) and then move on to the latest.
Additionally, the GCC build process does not "trust" the compiler it was built with. When you type "make", it first builds a reduced version of itself, then uses that to build a full version. Finally, it uses the full version to rebuild another full version and compares the two binaries. If the two do not match, it knows that the original compiler was buggy and introduced some bad code, and the build has failed.
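In build-system terms the process looks roughly like this; the exact targets vary between GCC releases, so treat this as a sketch:

    # GCC's three-stage bootstrap, approximately:
    ./configure --prefix=/usr/local
    make bootstrap    # stage 1 builds stage 2, stage 2 builds stage 3
    make compare      # fail if stage 2 and stage 3 object files differ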

GCC Error while compiling for ARM

I am getting the following error while trying to compile some code for an ARM Cortex-M4
using

    gcc -mcpu=cortex-m4 arm.c

which gives:

    `-mcpu=' is deprecated. Use `-mtune=' or '-march=' instead.
    arm.c:1: error: bad value (cortex-m4) for -mtune= switch
I was following GCC 4.7.1 ARM options. Not sure whether I am missing some critical option. Any kickstart for using GCC for ARM will also be really helpful.
As starblue implied in a comment, that error is because you're using a native compiler built for compiling for x86 CPUs, rather than a cross-compiler for compiling to ARM.
GCC only supports a single general architecture type in any given compiler binary -- so, although the same copy of GCC can compile for both 32-bit and 64-bit x86 machines, you can't compile to both x86 and ARM with the same copy of GCC -- you need an ARM-specific GCC.
(As auselen suggests, getting a pre-built one will save you quite a lot of work, even if you're only using it as a starting point to get things set up. You need to have GCC, binutils, and a C library as a minimum, and those are all separate open-source projects that the pre-built versions have already done the work of combining. I'll recommend Sourcery CodeBench Lite since that's the one my company makes and I do think it's a fairly good one.)
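Once a cross toolchain is installed, the invocation from the question would look something like this, assuming the common arm-none-eabi- prefix:

    # Compile for Cortex-M4 with a bare-metal ARM cross-compiler.
    # -mthumb is needed because Cortex-M cores execute only Thumb code.
    arm-none-eabi-gcc -mcpu=cortex-m4 -mthumb -c arm.c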
As the error message says, -mcpu is deprecated, and you should use the other options stated. However, "deprecated" simply means that its use may not continue to be supported; it will still work.
ARM Cortex-M4 is ARM architecture ARMv7E-M, so you should use -march=armv7-m (the documentation does not specifically list armv7e-m, but that may have been added since the documentation was last updated). The E is essentially the difference between the M3 and the M4: the DSP instructions. The compiler will not generate code that takes advantage of these instructions, so using ARM's Cortex-M DSP library is probably the best way to use them to benefit your application. If your part has an FPU, then other options will be needed to enable code generation for it.
As others have already pointed out, you are using a compiler for your host machine, and you need a compiler that generates code for your target processor instead (a cross-compiler). As @Brooks suggested, you can use a pre-built toolchain, but if you want to roll your own cross-compiler, libc and binutils, there is a nice tool called Crosstool-NG. It greatly simplifies the process of building a cross-compiler optimized to generate code for a specific processor, so you're not stuck with a generic pre-built toolchain, which usually builds code for a family of compatible processors. For example, you could tune the toolchain to generate assembly for your specific target, or to emit floating-point code for a hardware FPU specific to your processor, instead of using only the software floating-point routines that are the default in most pre-built toolchains.
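The Crosstool-NG workflow looks roughly like this; the sample name below is illustrative, and the available samples vary by version:

    ct-ng list-samples      # list the example configurations shipped with ct-ng
    ct-ng arm-unknown-eabi  # start from a bare-metal ARM sample (name illustrative)
    ct-ng menuconfig        # tune CPU, FPU and libc choices
    ct-ng build             # build the toolchain (this takes a while)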

Optimising cross platform build system

I'm looking for a cross-platform build system for C which helps to find good compiler flags on a specific machine. It would need some notion of testing for correctness, benchmarking for performance and multiple versioning of the target, and perhaps even recognising the machine it is running on. For example, in a typical build I'd want to compare 64-bit versus 32-bit executables, with and without OpenMP, with and without fast-math, with different optimisation levels, and builds by entirely different compilers. The ATLAS BLAS libraries are an impressive example here, but are a bit of a pain on Windows due to the shell scripting. Is this something that can be hacked onto systems like SCons or Waf? Any other suggestions?
Other than the one I'm thinking about writing when I'm done procrastinating, Boost Jam (bjam) would probably match your description most closely.
There is also CMake, but I think it would require a scripting layer to automate multi-target building and testing.
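As a sketch of the kind of scripting layer meant here, one could loop over flag combinations, rebuild, and let the benchmark report its own timing; the flag list and program name are illustrative:

    # Build and run one benchmark binary per flag combination.
    for flags in "-O2" "-O3" "-O3 -ffast-math" "-O2 -fopenmp" "-m32 -O2"; do
        gcc $flags -o bench bench.c -lm || continue   # skip variants that fail to build
        echo "== $flags =="
        ./bench
    done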
