Problem with cross-platform compiling on Windows - ARM

I'm learning how to use a cross compiler on my Windows machine with C++ and I'm running into a problem. I've tried everything I could think of without success. Any help would be greatly appreciated.
I have installed Cygwin64 and gcc-arm-none-eabi-10-2020-q4-major on my Windows machine.
My Code:
#include <iostream>
int main()
{
    std::cout << "Hello World!\n";
}
Error Message:
arm-none-eabi-g++ helloworld.cpp -o helloworld
c:/cross gcc/gnu arm embedded toolchain/10 2020-q4-major/bin/../lib/gcc/arm-none-eabi/10.2.1/../../../../arm-none-eabi/bin/ld.exe: c:/cross gcc/gnu arm embedded toolchain/10 2020-q4-major/bin/../lib/gcc/arm-none-eabi/10.2.1/../../../../arm-none-eabi/lib\libc.a(lib_a-exit.o): in function `exit': exit.c:(.text.exit+0x2c): undefined reference to `_exit'
c:/cross gcc/gnu arm embedded toolchain/10 2020-q4-major/bin/../lib/gcc/arm-none-eabi/10.2.1/../../../../arm-none-eabi/bin/ld.exe: C:\cygwin64\tmp\cckThWXU.o: in function `main': helloworld.cpp:(.text+0x34): undefined reference to `std::basic_ostream<char, std::char_traits<char> >& std::operator<<<std::char_traits<char> >(std::basic_ostream<char, std::char_traits<char> >&, char const*)'
c:/cross gcc/gnu arm embedded toolchain/10 2020-q4-major/bin/../lib/gcc/arm-none-eabi/10.2.1/../../../../arm-none-eabi/bin/ld.exe: helloworld.cpp:(.text+0x50): undefined reference to `std::cout'
c:/cross gcc/gnu arm embedded toolchain/10 2020-q4-major/bin/../lib/gcc/arm-none-eabi/10.2.1/../../../../arm-none-eabi/bin/ld.exe: C:\cygwin64\tmp\cckThWXU.o: in function `__static_initialization_and_destruction_0(int, int)': helloworld.cpp:(.text+0x88): undefined reference to `std::ios_base::Init::Init()'
c:/cross gcc/gnu arm embedded toolchain/10 2020-q4-major/bin/../lib/gcc/arm-none-eabi/10.2.1/../../../../arm-none-eabi/bin/ld.exe: helloworld.cpp:(.text+0xb8): undefined reference to `std::ios_base::Init::~Init()'
c:/cross gcc/gnu arm embedded toolchain/10 2020-q4-major/bin/../lib/gcc/arm-none-eabi/10.2.1/../../../../arm-none-eabi/bin/ld.exe: c:/cross gcc/gnu arm embedded toolchain/10 2020-q4-major/bin/../lib/gcc/arm-none-eabi/10.2.1/../../../../arm-none-eabi/lib\libc.a(lib_a-abort.o): in function `abort': abort.c:(.text.abort+0x10): undefined reference to `_exit'
c:/cross gcc/gnu arm embedded toolchain/10 2020-q4-major/bin/../lib/gcc/arm-none-eabi/10.2.1/../../../../arm-none-eabi/bin/ld.exe: c:/cross gcc/gnu arm embedded toolchain/10 2020-q4-major/bin/../lib/gcc/arm-none-eabi/10.2.1/../../../../arm-none-eabi/lib\libc.a(lib_a-signalr.o): in function `_kill_r': signalr.c:(.text._kill_r+0x1c): undefined reference to `_kill'
c:/cross gcc/gnu arm embedded toolchain/10 2020-q4-major/bin/../lib/gcc/arm-none-eabi/10.2.1/../../../../arm-none-eabi/bin/ld.exe: c:/cross gcc/gnu arm embedded toolchain/10 2020-q4-major/bin/../lib/gcc/arm-none-eabi/10.2.1/../../../../arm-none-eabi/lib\libc.a(lib_a-signalr.o): in function `_getpid_r': signalr.c:(.text._getpid_r+0x4): undefined reference to `_getpid'
c:/cross gcc/gnu arm embedded toolchain/10 2020-q4-major/bin/../lib/gcc/arm-none-eabi/10.2.1/../../../../arm-none-eabi/bin/ld.exe: c:/cross gcc/gnu arm embedded toolchain/10 2020-q4-major/bin/../lib/gcc/arm-none-eabi/10.2.1/../../../../arm-none-eabi/lib\libc.a(lib_a-sbrkr.o): in function `_sbrk_r': sbrkr.c:(.text._sbrk_r+0x18): undefined reference to `_sbrk'
collect2.exe: error: ld returned 1 exit status

I don't think your problem is related to Cygwin or Windows; you would get the same error on Linux too. By the way, if you have Windows 10 and don't mind Hyper-V, you could use the Windows Subsystem for Linux and run Linux toolchains on your project (even from within a window).
It's more about the embedded nature of the compilation. On x86 we assume there is an operating system which provides syscalls, has booted up, has initialized memory and peripherals, and has a console/stdout available for applications. Things are very different on bare metal, where you have to do a lot before your main function can execute any code. You are using the Arm compiler; it doesn't know what board you are using, what memory map it will have, or what peripherals it will include. What is cout expected to do? Output on a UART (at what baud rate, and on which UART if there are several of them)? Output on an LCD (which LCD, with what driver, connected over SPI or some other bus, with which settings)?
That's why a lot of embedded projects use 'blinky' as their hello world: blinking an LED or toggling a GPIO pin is generally easier to test, easier to implement, and specific (we expect some port or LED to go high and low), whereas cout could mean anything, and some targets might not even have any means to make cout work at all (no UART, no SPI to connect an LCD, no semihosting).
With iostream on embedded devices, "output" can mean so many things, and using it like that out of the box will never work. Vendors usually provide a HAL for their devices, usually bundled with their toolchain (or IDE), which has stubs to initialise the device and implement syscalls such as _open, _write, _close, etc. Calls like those behind iostream or printf can then use them: opening the 'file' might open a UART, and each print ends up in putc, which ends up in _write, which their HAL implements.
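In fact, the undefined references to _exit, _sbrk, _kill and _getpid in your error output are exactly those missing syscall stubs from newlib. As a rough illustration (not a drop-in fix for any particular board), a retargeting stub for output might look like the sketch below, where uart_putc() is a hypothetical routine that a vendor HAL would provide; for a link-only fix, the GNU Arm Embedded toolchain also ships dummy stubs you can pull in with --specs=nosys.specs, but then cout output still goes nowhere.

/* Sketch of a newlib retargeting stub: route low-level writes to a UART
 * so printf()/std::cout output has somewhere to go. uart_putc() is a
 * hypothetical board-specific function; vendors normally ship their own
 * syscalls.c/retarget.c doing exactly this. */
extern void uart_putc(char c);      /* assumed HAL routine */

int _write(int file, char *ptr, int len)
{
    (void)file;                     /* treat every descriptor as the console */
    for (int i = 0; i < len; i++)
        uart_putc(ptr[i]);
    return len;                     /* tell newlib all bytes were written */
}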
I would recommend using a vendor IDE instead (depending on which target boards you intend to use). I can think of at least a few that bundle the Arm toolchain and support both Windows and Linux, so your cross-platform requirement is covered. I'm not going to name them as I want to stay unbiased (and I work for one vendor), but they are easy to google, and it doesn't matter much which one I would name when you probably want to target something specific.
If you want to make code that is transferable between targets, then I would recommend starting without that constraint: get going with one vendor and get used to what the HAL/startup code does. Only then write wrappers and do some abstraction, and experiment with some other vendors to get a feel for what their HAL does.
Even with very generic code, it will not work just like that out of the box. You would need some weakly linked implementations of functions like putchar, so that for each vendor that function can be implemented simply by calling their HAL. In essence:
<some application> -> <your library> -> <generic HAL call> -> <vendor specific HAL call>
For more info here is a bit about weak:
How to make weak linking work with GCC?
Look for example at this cross-target printf implementation; each user has to implement _putchar, because the library can't know what, how and where the printf will be used.
https://github.com/mpaland/printf
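As a rough sketch of that pattern (the names here are illustrative, not any specific vendor's API): the generic library declares a weak default, and each port overrides it with a strong definition that forwards to the vendor HAL.

/* In the generic, target-agnostic library: a weak default that a port
 * can override. With no override, output silently goes nowhere. */
__attribute__((weak)) void board_putchar(char c)
{
    (void)c;
}

void lib_print(const char *s)       /* the library only ever calls the hook */
{
    while (*s)
        board_putchar(*s++);
}

/* In one vendor-specific port (a separate .c file) the strong definition
 * wins at link time:
 *
 *     extern void VendorHAL_UartTransmit(char c);   // hypothetical HAL call
 *
 *     void board_putchar(char c)
 *     {
 *         VendorHAL_UartTransmit(c);
 *     }
 */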
Probably you shouldn't be making a multi-target application, but a generic pluggable library, so users can add it to their applications instead of using yours, because yours will not work in one shape no matter how generic you try to be. Each vendor has its own startup code, HAL, linker scripts, and way of organizing the project to work in their ecosystem. There is no one setup that is compatible with everybody, so if you want to be portable, you have to let users use whatever their vendor recommends and then invoke your library from within their application.
I would strongly recommend dropping the portability goal at first: make something work on one host, one toolchain, one target; get some experience and knowledge of how things work; and only then try to make stuff portable and generic.

Related

Is ARM toolchain's gcc regular c or embedded c?

I'm unsure which one I'm using. I looked it up and some answers say that it's the host machine that determines whether C is embedded C or not, so something like: PC -> C, MCU -> embedded C.
edit: I use arm-none-eabi-xxx
arm-none-eabi-gcc is a cross-compiler for bare-metal ARM. ARM is the target, the host is the machine you run the compiler on - the development host.
In most cases you would use such a compiler for developing for bare-metal (i.e. no fully featured OS) embedded systems, but equally you could be using it to develop bootstrap code for a desktop system, or developing an operating system (though less likely).
In any event, it is not the compiler that determines if the system is embedded. An embedded system is simply a system running software that is not a general-purpose computer. For example many network routers run on embedded Linux such as OpenWRT and in that case you might use arm-linux-eabi-gcc. What distinguishes it is that it is still a cross-compiler; the host on which you build the code, is not the same machine architecture or OS as that which will run it.
Neither does being a cross-compiler make it "embedded" - it is entirely possible to cross compile Linux executables on Windows or vice versa with neither target being embedded.
The point is, there is no "Embedded C" language, it is all "Regular C". What makes it embedded is the intended target. So don't get hung-up on classification of the toolchain. If you are using it to generate code for embedded systems it is embedded, simple as that. Many years ago I worked on a project that used Microsoft C V6 (normally for MS-DOS) in an embedded system with code on ROM and running VRTX RTOS. Microsoft C was by no definition an embedded systems compiler; the "magic" was done by replacing Microsoft's linker to "retarget" it to produce ROM images rather than a .exe.
There are of course some targets that are invariably "embedded", such as 8051, and a cross-compiler for such a target could be said to be exclusively "embedded" I guess, but it is a distinction that serves little purpose.
There are many versions of Arm GCC toolchains. The established naming convention for cross-compilers is:
arm-linux-xxx - this is a 'Regular C', where the library is supported by the Linux OS. The 'xxx' is an ABI format.
arm-none-xxx - this is the 'embedded C'. The 'xxx' again indicates an ABI. It is generally 'newlib' based. Newlib can be hosted by an RTOS, and then it is almost 'Regular C'. If it is bare metal and the newlib features default to returning errors, this is most likely what is termed 'embedded C'.
Embedded 'C++' was a term that was common some time ago, but has generally been abandoned as a technology. As stated, the compiler itself is independent. However, the library/OS is typically what defines the 'spirit' of what you asked.
The ABI can be something like 'gnueabihf' for hard floating point, etc. It is a calling convention between routines and often depends on whether there is an FPU on the system or not. It may also depend on the ISA, PIC, etc.
The C language allows two different flavours of targets: hosted and freestanding. Hosted means programs running on top of an OS like Windows or Linux; freestanding means everything else. Examples of freestanding systems are "bare metal" microcontrollers, an RTOS on a microcontroller, and the hosted OS itself.
There are just a few minor differences between hosted and freestanding:
The standard library
Hosted compilers must support all mandatory parts of the C standard library.
Freestanding compilers only need to implement <float.h>, <iso646.h>, <limits.h>, <stdalign.h>, <stdarg.h>, <stdbool.h>, <stddef.h>, <stdint.h> and <stdnoreturn.h>. All other standard headers are optional.
Valid forms of main()
Hosted compilers must support int main(void) and int main(int argc, char *argv[]). Other implementation-defined forms of main() need not be supported.
Freestanding compilers always use an implementation-defined form of main(), since there is nobody to return to. The most common form is void main (void).
"Embedded C" just means C for embedded systems. It's not a formal term. (For example not to be confused with for example Embedded C++ which was actually a subset dialect of the C++ language.)
Embedded systems are usually freestanding systems. The gcc-arm-none-eabi compiler is the compiler port for freestanding ARM systems (using ARM's Embedded ABI). It will not come with various OS-specific libs.
When using gcc to compile for freestanding systems you should use the -ffreestanding option. It enables the implementation-defined form of main(), and together with -fno-builtin it prevents various standard library calls from getting "inlined" into your code, which we don't want since those libraries might not even be present.
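To make the contrast concrete, here is a minimal sketch of what a freestanding "hello world" tends to look like instead of cout: an endless loop toggling a GPIO pin. The register address and bit are made up for illustration; a real program takes them from the MCU's reference manual, also needs startup code and a linker script, and would be built with something like arm-none-eabi-gcc -ffreestanding -nostdlib.

#include <stdint.h>

#define GPIO_OUT (*(volatile uint32_t *)0x40020014u)   /* hypothetical register */

void main(void)                     /* implementation-defined freestanding form */
{
    for (;;) {
        GPIO_OUT ^= (1u << 5);      /* toggle one pin */
        for (volatile int i = 0; i < 100000; i++)
            ;                       /* crude busy-wait delay */
    }
}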

Subtleties dealing with cross compilation, freestanding libgcc, etc

I have a few questions about https://wiki.osdev.org/Meaty_Skeleton, which says:
The GCC documentation explicitly states that libgcc requires the freestanding environment to supply the memcmp, memcpy, memmove, and memset functions, as well as abort on some platforms. We will satisfy this requirement by creating a special kernel C library (libk) that contains the parts of the user-space libc that are freestanding (doesn't require any kernel features) as opposed to hosted libc features that need to do system calls.
Alright, I understand that libgcc is a 'private library' that is used by gcc, i.e. it has significance only during the compilation process, when gcc is in use, kind of like a helper for gcc. Is this right? I understand that the machine I run gcc on is called the build machine, and the host machine is my own OS. What is the freestanding environment specified here? Since gcc runs on the build machine, libgcc must use the build machine's libgcc, I guess? Where does freestanding come into the picture?
Also, what is the libk? I think that I really don't understand freestanding and hosted environments yet :(
libgcc
This library must be linked in some cases when you compile with GCC, because GCC produces calls to functions in this library when the target architecture doesn't natively support a specific feature.
For example, if you use 64-bit arithmetic in your kernel and you compile for i386, then GCC emits a call to a helper function which resides in libgcc:
#include <stdint.h>

uint64_t div64(uint64_t dividend, uint64_t divisor)
{
    return (dividend / divisor);   /* on i386 this compiles to a call to __udivdi3 */
}
Here you get an "undefined reference to `__udivdi3'" error from the linker when you try to link your kernel without libgcc. On the other hand, libgcc itself also makes calls to standard C library functions, so those specific functions must be implemented.
Here is another IMHO good explanation. It's called bare-metal programming here for the ARM architecture, but also valid for x86.
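Since the quote above says libgcc expects the freestanding environment to supply memcmp, memcpy, memmove and memset, a minimal libk could start with naive, byte-at-a-time versions like the sketch below (real kernels optimize these later):

#include <stddef.h>

void *memset(void *dst, int value, size_t n)
{
    unsigned char *d = dst;
    while (n--)
        *d++ = (unsigned char)value;
    return dst;
}

void *memcpy(void *dst, const void *src, size_t n)
{
    unsigned char *d = dst;
    const unsigned char *s = src;
    while (n--)
        *d++ = *s++;
    return dst;
}

void *memmove(void *dst, const void *src, size_t n)
{
    unsigned char *d = dst;
    const unsigned char *s = src;
    if (d < s)
        while (n--)
            *d++ = *s++;
    else
        while (n--)
            d[n] = s[n];            /* copy backwards when regions overlap */
    return dst;
}

int memcmp(const void *a, const void *b, size_t n)
{
    const unsigned char *pa = a;
    const unsigned char *pb = b;
    for (; n--; pa++, pb++)
        if (*pa != *pb)
            return *pa - *pb;
    return 0;
}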

Cross Toolchain for ARM U-Boot Build Questions

I'm trying to build my own toolchain for a Raspberry Pi.
I know there are plenty of prebuilt toolchains; this work is for educational reasons.
I'm following the embedded ARM Linux from scratch book and have succeeded in building gcc and uClibc so far.
I'm building for the target arm-unknown-linux-eabi.
Now that it comes to preparing a bootable filesystem, I'm wondering about the bootloader build.
The part about the bootloader for this system seems to be incomplete.
Now I'm asking myself how to build a U-Boot for this system with my arm-unknown-linux-eabi toolchain.
Do I need to build a toolchain which doesn't depend on Linux kernel calls?
My first research led me to the point that there are separate kinds of toolchains: the OS-dependent ones (Linux kernel syscalls etc.) and the ones which don't need a kernel underneath, sometimes referred to as "bare-metal" or "standalone" toolchains.
Some sources mention that it would be possible to build a U-Boot with the Linux toolchain.
If this is true, why and how does this work?
And if I have to build a second, "bare-metal" toolchain, where can I find information about the difference between these two? Do I need another libstdc?
You can build U-Boot with the same cross-toolchain used to build the kernel - and most probably the rest of the user space of the system.
A bootloader is - by definition - self-contained and doesn't care about your choice of C runtime library, because it doesn't use it. Therefore the issue of syscalls doesn't come into it.
A toolchain always needs to be hosted by a fully functioning development system - invariably not your target system. Whatever references you see to a 'bare-metal toolchain' are not referring to the compiler's own use of syscalls (the compiler relies heavily on the host operating system for I/O). What is important when building bootloaders and kernels is that the compiler and linker are configured to produce statically linked code that can run at a specific memory address.
In almost all possible ways, there is no difference between the embedded and the Linux toolchain. But there is one exception.
That exception is __clear_cache - a function that can be generated by the compiler and in a "Linux"-toolchain includes a system call to synchronize instruction and data caches. (See http://blogs.arm.com/software-enablement/141-caches-and-self-modifying-code/ for more information about that bit.)
Now, unless you explicitly add a call to that function, the only way I know for it to be invoked is by writing nested functions in C (a GCC extension that should be avoided).
But it is a difference.
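For reference, this is roughly what such a nested function looks like (a GNU C extension, shown only to illustrate the point above): taking the nested function's address is what forces GCC to build a trampoline on the stack, which is where a "Linux" toolchain may end up emitting the __clear_cache call.

static int apply(int (*fn)(int), int arg)
{
    return fn(arg);
}

int add_offset(int base, int value)
{
    int add_base(int x)             /* nested function, captures 'base' */
    {
        return x + base;
    }
    return apply(add_base, value);  /* taking its address forces a trampoline */
}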

General questions about GCC and cross compiling

Recently I've been playing around with cross-compiling using GCC and discovered what seems to be a complicated area: toolchains.
I don't quite understand this, as I was under the impression GCC can create binary machine code for most of the common architectures, and all that really matters beyond that is what libraries you link with and what type of executable is created.
Can GCC not do all these things itself? With a single build of GCC, all the appropriate libraries and the correct flags sent to GCC, could I produce a PE executable for a Windows x86 machine, then create an ELF executable for an embedded Linux MIPS device and finally an executable for an OSX PowerPC machine?
If not can someone explain how you would achieve this?
With a single build of GCC, all the appropriate libraries and the correct flags sent to GCC, could I produce a PE executable for a Windows x86 machine, then create an ELF executable for an embedded Linux MIPS device and finally an executable for an OSX PowerPC machine? If not can someone explain how you would achieve this?
No. A single build of GCC produces object code for one target architecture. You would need a build targeting Intel x86, a build targeting MIPS, and a build targeting PowerPC. However, the compiler is not the only tool you need, despite the fact that you can build source code into an executable with a single invocation of GCC. Under the hood, it makes use of the assembler (as) and linker (ld) as well, and those need to be built for the target architecture and platform. Usually GCC uses the versions of these tools from the GNU binutils package, so you'd need to build that for the target platform too.
You can read more about building a cross-compiling toolchain here.
I don't quite understand this as I was under the impression GCC can create binary machine code for most of the common architectures
This is true in the sense that the source code of GCC itself can be built into compilers that target various architectures, but you still require separate builds.
Regarding -march, this does not allow the same build of GCC to switch between platforms. Rather it's used to select the allowable instructions to use for the same family of processors. For example, some of the instructions supported by modern x86 processors weren't supported by the earliest x86 processors because they were introduced later on (such as extension instruction sets like MMX and SSE). When you pass -march, GCC enables all opcodes supported on that processor and its predecessors. To quote the GCC manual:
While picking a specific cpu-type will schedule things appropriately for that particular chip, the compiler will not generate any code that does not run on the i386 without the -march=cpu-type option being used.
If you want to try cross-compiling, and don't want to build the toolchain yourself, I'd recommend looking at CodeSourcery. They have a GNU-based toolchain, and their free "Lite" version supports quite a few architectures. I've used it for Linux/ARM and Android/ARM.

Toolchain for any ARM processor

Can a toolchain for any ARM processor be used to compile any operating system? What is the dependency of toolchain on OS?
My problem may sound trivial...I have no idea about toolchains for ARM.
Can a toolchain for any Arm processor be used to compile any operating system?
It depends on the target OS. If it has support for the ARM architecture (such as Linux) then only configuration and patches are missing, but generally yes.
What is the dependency of toolchain on OS?
I'm only experienced in GCC, so I'd say binutils, glibc+kernel headers and then GCC. If you want threads, you'd need pthreads too.
See this article on how to bootstrap Linux on ARM. While it's rather old, the same process applies, with appropriate patches.
You might want to look at BuildRoot for building a toolchain to target Arm and other processors.
In general, no. The toolchain has compiler libraries that depend on the system libc, and these come from the operating system (unless you're compiling for small "bare metal" systems without an operating system, in which case they come from somewhere else).
Thus, programs compiled with a given toolchain will only work on systems with a compatible libc. For instance, if you have a toolchain for ARM glibc-based systems, it will work to compile programs for standard ARM Linux systems that use glibc, but won't work on ARM uClinux systems that use uClibc, or on ARM bare-metal systems using newlib.
There are some other minor dependencies as well (which I'm less familiar with), but that's the biggest one.
There are many cross-compilers available, and many versions of gcc provide them as well. To compile the kernel for ARM you need to get a cross-compiler and change the top-level Makefile of the kernel tree, e.g. ARCH = arm and CROSS_COMPILE = arm-linux-; the CROSS_COMPILE argument depends on where you keep the cross gcc tool (its prefix and path).
Here ARCH stands for architecture.
