I am starting to learn unit testing. I use Unity, and it works well with MinGW in Eclipse on Windows. I use different configurations for debug, release, and tests; this works well with the CDT plugin.
But my goal is to unit test my embedded code for an STM32. So I use arm-gcc with the ARM GCC Eclipse plugin. I planned to have one configuration for compiling the debug and release code for the target and another configuration using MinGW to compile and execute the tests on the PC (just the hardware-independent parts).
With the Eclipse plugin, I cannot compile code with anything other than arm-gcc.
Is there a way to have one project with configurations and support for both the embedded target and the PC?
Thanks
As noted above you need a makefile pointing at two different targets with different compiler options depending on the target.
You will need to ensure portability in your code.
I have accomplished this most often using CMake, outlining different compiler paths and linker flags for the unit tests versus the target. This way I can also easily link in any unit test libraries while keeping them external to my target. In the end CMake produces a Makefile, but I'm not spending time worrying about make syntax, which, while I can read it, often seems like voodoo.
Doing this entirely within a single Eclipse project is possible. You need to configure your project for multiple targets with a different compiler for each, and it will take some coaxing to get Eclipse to behave.
If your goal is to do it entirely within Eclipse, I suggest reading this as a primer.
If you want to go the other route, here is a CMake primer.
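To make the CMake route concrete, here is a minimal sketch of a toolchain file for the ARM side; the file name, compiler names, and settings shown are assumptions and need to match your actual toolchain:
# arm-none-eabi.cmake -- hypothetical toolchain file for the target build
set(CMAKE_SYSTEM_NAME Generic)              # bare-metal, no OS
set(CMAKE_SYSTEM_PROCESSOR arm)
set(CMAKE_C_COMPILER arm-none-eabi-gcc)
set(CMAKE_CXX_COMPILER arm-none-eabi-g++)
# test-compile a static library, since a bare-metal executable won't link without a linker script
set(CMAKE_TRY_COMPILE_TARGET_TYPE STATIC_LIBRARY)
You would then configure the firmware with -DCMAKE_TOOLCHAIN_FILE=arm-none-eabi.cmake in one build directory and the MinGW test build, without the toolchain file, in a second build directory; only the hardware-independent sources plus Unity go into the latter.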
Short answer: Makefile.
But I guess NEON assembly is the bigger issue.
Using intrinsics instead at least leaves open the possibility of linking against a simulator library, and there are indeed a lot of such libraries written in standard C that allow code with intrinsics to stay portable.
However, the poor performance of GCC's NEON intrinsics forces a lot of people to sacrifice portability for performance.
If your code unfortunately contains assembly, you won't even be able to compile it before translating the assembly back into standard C.
Related
It seems like CMake is fairly entrenched in its view that there should be one, and only one, CMAKE_CXX_COMPILER for all C++ source files. I can't find a way to override this on a per-target basis. This makes a mix of host-and-cross compiling in a single CMakeLists.txt very difficult with the built-in CMake facilities.
So, my question is: what's the best way to use multiple compilers for the same language (i.e. C++)?
It's impossible to do this with CMake.
CMake only keeps one set of compiler properties which is shared by all targets in a CMakeLists.txt file. If you want to use two compilers, you need to run CMake twice. This is even true for e.g. building 32bit and 64bit binaries from the same compiler toolchain.
The quick-and-dirty way around this is using custom commands. But then you end up with what are basically glorified shell-scripts, which is probably not what you want.
The clean solution is: Don't put them in the same CMakeLists.txt! You can't link between different architectures anyway, so there is no need for them to be in the same file. You may reduce redundancies by refactoring common parts of the CMake scripts into separate files and include() them.
The main disadvantage here is that you lose the ability to build with a single command, but you can solve that by writing a wrapper in your favorite scripting language that takes care of calling the different CMake-makefiles.
You might want to look at ExternalProject:
http://www.kitware.com/media/html/BuildingExternalProjectsWithCMake2.8.html
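As an illustration of that approach, the outer project can drive a second, independently configured build through ExternalProject; the tests/ directory and the g++ compiler below are assumptions:
include(ExternalProject)
# hypothetical host-compiled unit test build, configured with its own compiler
ExternalProject_Add(host_tests
    SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/tests
    CMAKE_ARGS -DCMAKE_CXX_COMPILER=g++
    INSTALL_COMMAND "")
The inner build in tests/ gets its own cache and compiler settings, so this is effectively the "run CMake twice" solution wrapped into a single top-level build.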
It's not impossible, despite what the top answer suggests. I have the same problem as the OP: some sources that are cross-compiled for a Raspberry Pi Pico, and some unit tests that I run on my host system.
To make this work, I'm using the very shameful set() to override the compiler in the CMakeLists.txt for my test folder. It works great.
if(DEFINED ENV{HOST_CXX_COMPILER})
    set(CMAKE_CXX_COMPILER $ENV{HOST_CXX_COMPILER})
else()
    set(CMAKE_CXX_COMPILER "g++")
endif()
set(CMAKE_CXX_FLAGS "")
The CMake devs/community seem very much against using set() to change the compiler, for some reason. They assume you need to use one compiler for the entire project, which is an incorrect assumption for embedded systems projects.
My solution above works and, I think, still fits the philosophy. Users can change their chosen compiler via an environment variable; if it's not set, I assume g++. set() only changes variables for the current scope, so this doesn't affect the rest of the project.
To extend @Bill Hoffman's answer:
Build your project as a super-build, using some kind of template like the one here: https://github.com/Sarcasm/cmake-superbuild
It will configure both the dependencies and your project as ExternalProjects (standalone CMake configure/build/install environments).
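A stripped-down sketch of such a super-build, assuming the cross-compiled firmware and the host-built tests live in their own subdirectories; the directory names and the arm-cross.cmake toolchain file are hypothetical:
# top-level CMakeLists.txt: each sub-project is configured and built independently
include(ExternalProject)
ExternalProject_Add(firmware
    SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/firmware
    CMAKE_ARGS -DCMAKE_TOOLCHAIN_FILE=${CMAKE_CURRENT_SOURCE_DIR}/cmake/arm-cross.cmake
    INSTALL_COMMAND "")
ExternalProject_Add(unit_tests
    SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/tests
    INSTALL_COMMAND "")
Each ExternalProject gets its own cache, so the firmware and the tests can use entirely different compilers while a single build of the top-level project builds both.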
I am working on a complex project written in C/Asm for an embedded target running on an Analog Devices DSP. The toolchain is close to GCC, but there are plenty of differences. Moreover, I use a lot of autogeneration scripts based on Jinja2 to generate header files from data extracted from a database. I also have plenty of compiler flags.
I currently have a Makefile written from scratch. It is about 400 lines long and works pretty well. It automatically discovers the sources across the directories and tracks all the dependencies, i.e.
a.tmpl ---> jinja ---> a.c ---> a.o
             ^
a.yaml ------'
I would like to know whether tools such as CMake or Automake could be useful in my case. In other words, can I use these tools to simplify the Makefile and improve its readability?
CMake works perfectly with generated sources. Just add an appropriate custom command:
add_custom_command(OUTPUT a.c
    COMMAND jinja <args>
    DEPENDS a.tmpl a.yaml)
add_executable(a a.c)
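If headers are generated from the database as well, the same mechanism applies. A sketch with hypothetical file names (regs.tmpl, regs.yaml, regs.h) that also makes the generated header visible to the target:
add_custom_command(OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/regs.h
    COMMAND jinja <args>
    DEPENDS ${CMAKE_CURRENT_SOURCE_DIR}/regs.tmpl ${CMAKE_CURRENT_SOURCE_DIR}/regs.yaml)
# a named target so the executable can depend on the generated header
add_custom_target(generate_headers DEPENDS ${CMAKE_CURRENT_BINARY_DIR}/regs.h)
add_dependencies(a generate_headers)
target_include_directories(a PRIVATE ${CMAKE_CURRENT_BINARY_DIR})
CMake then regenerates the header whenever the template or the YAML data changes, which replaces the hand-written dependency tracking in the Makefile.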
Where is the code in the GCC source code that actually constructs the assembly for the different architectures?
Wondering how many different assembly languages it compiles to, and how it actually does this (by taking a look at the source code).
Is it in the gcc repo somewhere, or in another repo? I have started to dig around but haven't found anything.
https://github.com/gcc-mirror/gcc
For example, here is some of the assembly generating code in V8:
https://github.com/v8/v8-git-mirror/tree/master/src/x64
Is there anything equivalent for GCC?
I am wondering because it's a mystery how GCC does this, and it would be a great way to learn how compilers are actually implemented down to the assembly level.
The .md (machine description) files in the GCC source contain the patterns used to generate assembly. GCC contains several specialized C/C++ code generators (some of which translate the .md files into code that emits assembly).
GCC is a very complex program. The documentation of GCC MELT (an obsolete project) contains several interesting links and slides, notably referring to the Indian GCC Resource Center.
Most of the optimization in GCC happens in the middle-end (which is mostly independent of the source language and the target system), notably in the many passes working on the GIMPLE representations.
The GCC repo is an SVN repository.
See also this answer, notably the pictures inside it.
The actual source code for GCC is most accessible from here:
https://gcc.gnu.org/svn.html
The software is accessible via SVN (Subversion), a source code control system. It is installed on many versions of Linux/UNIX; if it is not on your platform, you can install the svn kit and then fetch the source using the following command:
svn checkout svn://gcc.gnu.org/svn/gcc/trunk SomeLocalDir
GCC is complex and would take significant experience to understand the nature of how the application actually compiles to different architectures.
In a nutshell, GCC has three major components: front-end, middle and back-end processing. The front-end handles the language parsing to understand the syntax of languages (like C, C++, Objective-C, etc.). The front-end deconstructs the code into a portable intermediate representation, which is then passed along for optimisation and, ultimately, compilation to the target environment.
The middle part performs code analysis and optimisation, attempting to prioritise the code to generate the best possible output at the end of the full process. Technically, optimisation can occur at any part of the process as patterns are discovered during analysis.
The back-end processor lowers the code into a register-transfer-style intermediate format (not yet final executable code). Based on what the expected output is designed to be, this "pseudo-code" is optimised for register use, bit sizes, endianness, and so on. The final code is then generated during the assembly phase, which converts the back-end output into machine-executable instructions.
It's important to note that the compiler has many options to deal with output formats, so you can create output for many classes of architecture, usually out of the box. For cross-compiling and target compiler options, try checking out this link:
https://gcc.gnu.org/install/configure.html
I'm either failing hard at Google today, or this is something which is non-trivial.
I have an application that I am working on for a Windows system, cross-compiling from Linux because (a) I need C99 and Microsoft's free tools for the target system do not support it and (b) I've been using UNIX for nearly 30 years anyway, and that's my "home". Changing to an MSVC stack with "native" building is not an option for me, nor is running the GNU build system on Windows (it takes forever).
The problem is that I need a single tool built for the system being compiled on, not the target; I then need to run that executable, which will generate several .c source files and .h headers that allow the project to compile. I am using the so-called "GNU Build System" (that is, the autotools, including autoconf/automake/libtool).
Any recipe I write will, regardless of whether I configure for i686-w64-mingw32 or x86_64-w64-mingw32, compile all DLLs and EXEs for the Win32/Win64 platform.
There is a way I can force the issue by hand-crafting standard Makefile recipes, but I was trying to find an "autotools-native" way of compiling and running build-time executables that are not, e.g., unit tests, but source code generators.
Any ideas, short of hand-crafting Makefile recipes?
ETA: Additionally, the project is cross-platform: it does make sense to compile this one natively for Linux as well, so any solution needs to work just as well when not cross-compiling.
I have an application written in C for a Xilinx Microblaze core. However, the performance isn't quite what I want so I was considering rewriting some of the core functions in assembly. I'm having trouble figuring out how to get Xilinx Platform Studio to compile both into a single ELF file though.
How can I do it?
As suggested by Yann, you can use inline assembly. Here is how:
AR# 18561. 11.1 EDK - How do I include inline assembly within my C source files?
That said, try to profile your code to determine where your performance bottleneck is. Xilinx's SDK allows for intrusive profiling. You could also use GPIOs and an oscilloscope (or a logic analyser with a fast triggering clock) to profile your functions/code sections yourself.
Check if the compiler implements inline assembly. Try the asm() "function". Check that it supports variable referencing. If your compiler is GCC based, this is easy.
You can always write raw assembler, assemble it, and link it into your application. You need to understand the ABI of your compiler to make compatible functions.
Did you profile where exactly the poor performance comes from? From my experience, core functions are quite fast, so your code is probably the source of the problem. Try compiling with optimization (-O3) or changing the cache size (if you use a cache).
I don't know which Microblaze function you want to rewrite, but you can always go to the Xilinx install directory (for example, C:\Xilinx\13.4\ISE_DS\EDK\sw\lib\bsp\standalone_v3_00_a\src\microblaze) to modify functions or even include your own assembly language file in the specific software library.