Embox compilation and flashing

I am interested in attempting to compile, package and flash Embox to an MCU, from either a Windows or Mac machine (cross-compilation), via JTAG, and I have a number of concerns.
Observe what I believe to be the normal way of writing Embox apps and deploying/flashing them to an MCU:
As you can see, in my understanding above:
Embox source code is compiled (via make) into some file (Object file?), <file>.<ext>
At the same time, my app's source code is cross-compiled (somehow) into an Object file (myapp.o) that is compatible with Embox
Some tool combines: (a) the Embox <file>.<ext> produced by make, (b) myapp.o and (c) any other libs I specify. It takes these as inputs and produces a single application image that is ready to be flashed via JTAG
My concerns, also identified in the illustration:
What is the exact name and file extension of the Embox "artifact" produced by running make on Embox source code? Is this artifact different depending on whether you are on Windows or Mac? What tools besides make are necessary to produce this artifact?
What is this magic tool that takes in Object/interim files and produces a single app image? What is the exact name and file extension of this app image?
What tools do I need to cross-compile myapp.c and myapp.h into a myapp.o that is Embox compatible?

I'm one of the Embox developers, responsible for the build tools.
Basile has given a correct overview of the process as a whole:
the source is compiled into a relocatable embox.o with ld -r,
then linked into embox ELF binary,
... which in turn is additionally transformed into .bin and .srec images, which are often used for flashing.
Binary artifacts go into build/base/bin.
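For illustration, the ELF-to-.bin/.srec conversion is the kind of job objcopy from binutils does; a minimal sketch, assuming the arm-none-eabi- toolchain prefix (the exact commands Mybuild runs may differ):
arm-none-eabi-objcopy -O binary build/base/bin/embox build/base/bin/embox.bin
arm-none-eabi-objcopy -O srec build/base/bin/embox build/base/bin/embox.srec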
Let me add a few details.
First of all, you probably won't need to dig into the details of linking your application against Embox. Instead, the proper way is to integrate your app into Mybuild, the Embox build system, and let it handle the low-level linkage details for you. So it's worth building vanilla Embox first and seeing it run on an emulator.
Building and running Embox for ARM
I would suggest starting with the arm/qemu template. This won't fit on your MCU, but unless your application does something unusual, it is easier to test it on QEMU before porting it to the target MCU. In any case, this is a good starting point to check that your dev environment is sane and everything builds OK.
Developing on Linux will really make your life easier: it comes down to sudo apt-get install build-essential plus a few packages, and installing a cross-compiler. However, if you're going to develop on Windows, you may find this guide useful. In addition, you'll need to patch make to get it working on Windows: see Issue 504. And here is how to set up QEMU.
The recommended cross-compiler for ARM is the official GNU Tools for ARM.
So basically, here are the steps to prepare a Linux dev machine for building and running Embox for ARM:
sudo add-apt-repository -y ppa:terry.guo/gcc-arm-embedded
sudo apt-get update
sudo apt-get install build-essential gcc-arm-none-eabi u-boot-tools qemu-system
Clone the repository:
git clone https://github.com/embox/embox embox
cd embox
Build arm/qemu:
make confload-arm/qemu
make
The confload-<template> target initializes the conf/ directory with a predefined template, arm/qemu in this case. conf/ is where the configuration of the target image (Embox + your app) resides.
Run QEMU. There's a handy wrapper for that which infers the necessary options from the Embox configuration and runs QEMU properly:
sudo ./scripts/qemu/auto_qemu
If everything goes fine, you'll see something like this. Type help to list the available commands.
Adding your application as an Embox command
Personally, when trying something new to me, I usually "mimic" someone else's approach to get the first feedback from the application/system. Here, I'd suggest deriving from an existing command, e.g. cat, and throwing everything away from it, effectively turning it into a Hello World application.
For now, create a directory hello in src/cmds and add two files there:
hello.c file
/**
 * Plain C Hello World application.
 */
#include <stdio.h>

int main(int argc, char **argv) {
    printf("Hello world!\n");
    return 0;
}
As you can see, this is a regular C program that doesn't use any Embox-specific APIs. Let's integrate it into Embox now.
Hello.my file
(refer to Cat.my):
package embox.cmd.hello

@AutoCmd
@Cmd(name = "hello",
     help = "<This is what `help hello` will output>",
     man = '''
         <What is shown when running `man hello`>
     ''')
module hello {
    source "hello.c"

    depends embox.compat.libc.all // for stdio
}
Now add the newly defined module into the configuration.
conf/mods.config file
package genconfig

configuration conf {
    ...
    include embox.cmd.hello.hello
}
Build and run
Run make. This will compile hello.c and link it with Embox appropriately. After that, run Embox with sudo ./scripts/qemu/auto_qemu and type hello at the embox> prompt.
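The session should look roughly like this (the output comes straight from the hello.c above):
embox> hello
Hello world!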
That's it.
Regarding your questions
To summarize:
Embox source code is compiled (via make) into some file (Object file?), <file>.<ext>
At the same time, my app's source code is cross-compiled (somehow) into an Object file (myapp.o) that is compatible with Embox
Both your application and Embox itself are compiled together with a regular (cross-)compiler, defined in conf/build.conf through the CROSS_COMPILE variable.
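For instance, for an ARM target conf/build.conf would contain a line along these lines (an illustrative sketch, assuming the GNU ARM toolchain recommended above; the exact contents depend on the template):
CROSS_COMPILE = arm-none-eabi-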
Some tool combines: (a) the Embox <file>.<ext> produced by make, (b) myapp.o and (c) any other libs I specify. It takes these as inputs and produces a single application image that is ready to be flashed via JTAG
These are linked with ld as part of the build process.
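Conceptually, and glossing over the flags and the linker script that Mybuild manages for you, the two link stages described earlier look like this (a hedged sketch, not the literal commands):
arm-none-eabi-ld -r -o embox.o <all compiled objects, including myapp.o>
arm-none-eabi-ld -T <linker script> -o embox embox.o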
What is the exact name and file extension of the Embox "artifact" produced by running make on Embox source code? Is this artifact different depending on whether you are on Windows or Mac?
The main build artifacts are build/base/bin/embox (ELF) and build/base/bin/embox.bin (binary). If I'm not mistaken, these have the same names on all build platforms (well, maybe there would be embox.exe instead of embox, but that's unlikely).
What tools besides make are necessary to produce this artifact?
The cross-compiler, essentially. GNU Tools for ARM embedded processors is a good choice.
Plus some quirks in case of Windows (see above).
What is this magic tool that takes in Object/interim files and produces a single app image? What is the exact name and file extension of this app image?
There's no such magic tool. :)
What tools do I need to cross-compile myapp.c and myapp.h into a myapp.o that is Embox compatible?
This is, again, hidden beneath Mybuild. In a nutshell, it:
compiles myapp.c using the cross-compiler into myapp.o
if the app is defined as an @AutoCmd module, it:
registers the app in a command registry by storing a pointer to main along with some metadata like the name
strips the main symbol from the object file to prevent conflicts in case of multiple apps
links myapp.o as if it were part of Embox into embox.o and then into the embox ELF
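To picture the symbol handling: hiding a symbol so that several apps' main definitions don't clash is the sort of thing objcopy can do. A sketch only, not necessarily Mybuild's actual mechanism:
arm-none-eabi-objcopy --localize-symbol main myapp.o
This turns main into a local symbol of myapp.o, so it no longer conflicts with main defined in other object files.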

It's the first time I've heard about Embox, but the tool that combines Embox with your code is obviously a linker (so a cross ld from binutils; see the documentation of ld). To understand more about linkers, read Levine's book Linkers and Loaders.
The artifact produced by compiling the Embox source code is probably a library (libembox.a) or a relocatable object file embox.o (possibly produced by ld -r).
The produced application image is probably a raw binary file (.bin), but it could be an ELF file if it is loaded by the GRUB loader.
I guess that the building process is quite similar to the Linux kernel build process.
BTW, I would imagine that developing on a Linux system could be simpler, since Linux development uses the same kind of tools daily. So you might install Linux on your development laptop.
You need a cross-compiler (for your target platform) to compile your code to be combined with Embox.
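For instance, with the GNU ARM toolchain mentioned above, compiling the application object from the question could look like this (a sketch; in practice the flags and include paths are dictated by the Embox build):
arm-none-eabi-gcc -c myapp.c -o myapp.o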

Related

How can I make a binary that uses OpenMP and is compiled with Intel's C compiler portable?

Normally I compile code (all in a single file, main.c) with the Intel oneAPI command prompt like so:
icl.exe main.c -o binary_name
I can then run binary_name.exe without issue from a regular command prompt. However, I recently exploited OpenMP multithreading and compiled like so:
icl.exe main.c -o binary_name /Qopenmp /MD /link libiomp5md.lib
Then, when I try to run it through an ordinary command prompt, I get this message:
I'd ultimately like to move this simple code around (say, to another computer with the same OS). Is there some procedure, through a command prompt or batch file, for packaging and linking a dynamic library? It also looks like statically linking OpenMP is not supported on Windows.
Either make a statically linked version, or distribute the dependency DLL file(s) along with the EXE file.
You can check the dependencies of your EXE with Dependency Walker.
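If you'd rather not install anything extra, the dumpbin tool that ships with MSVC/oneAPI can list the same information from a developer command prompt:
dumpbin /DEPENDENTS binary_name.exe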
As you correctly stated, statically linking OpenMP is not supported on Windows. Depending on your use case, you have a couple of options. The simplest one for quick testing is to just ship the dynamic-link library with your executable and place it in the same directory on the target system. Having built a lot of systems using DLLs, this is typically what most developers do to ensure compatibility with their code, even in a production environment.
If you are looking to do something more complex on the target system you can place the Dynamic-Link library in a shared location and follow the search order suggestions from the Microsoft Build site:
https://learn.microsoft.com/en-us/windows/win32/dlls/dynamic-link-library-search-order
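As part of a packaging step, this can be as simple as copying the DLL next to the EXE (the paths below are hypothetical; the redistributable's location depends on your oneAPI installation):
copy C:\path\to\oneapi\redist\libiomp5md.dll C:\path\to\app\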

Make clangd aware of macros given to the compiler

I have two executables that are built from the same source (a client and a server); they're built with the compile options -D CLIENT=0 -D SERVER=1 for the server and -D CLIENT=1 -D SERVER=0 for the client. If I do something like
if (CLIENT) {
    // Client specific code
}
clangd complains that CLIENT is not defined. Is there a way to make clangd aware of those macros? (The code compiles just fine, the errors are from clangd, not the compiler)
Is there a way to make clangd aware of those macros?
From getting started with clangd:
Project setup
To understand source code in your project, clangd needs to know the
build flags. (This is just a fact of life in C++, source files are not
self-contained.)
By default, clangd will assume that source code is built as clang
some_file.cc, and you’ll probably get spurious errors about missing
#included files, etc. There are a couple of ways to fix this.
compile_commands.json
compile_commands.json file provides compile commands for all source
files in the project. This file is usually generated by the build
system, or tools integrated with the build system. Clangd will look
for this file in the parent directories of the files you edit. Other
tools can also generate this file. See the compile_commands.json
specification.
compile_commands.json is typically generated by the CMake build system, though more and more build systems are able to generate it as well.
I would suggest moving your project to CMake; in the process you will learn a tool that will definitely help you in further C-ish development.
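With CMake, producing the file takes a single cache variable, for example:
cmake -S . -B build -DCMAKE_EXPORT_COMPILE_COMMANDS=ON
Since clangd looks for the file in the parent directories of the files you edit, a common trick is to symlink build/compile_commands.json into the source root.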
compile_flags.txt
If all files in a project use the same build flags, you can put those
flags, one flag per line, in compile_flags.txt in your source root.
Clangd will assume the compile command is clang $FLAGS some_file.cc.
Creating this file by hand is a reasonable place to start if your
project is quite simple.
If not moving to CMake, create a compile_flags.txt file with content like the following (this matches the client flags from your question), and clangd should pick this file up:
-DCLIENT=1
-DSERVER=0

How to use cmake on a machine on which cmake is not installed

I am using cmake to build my project. However, I need to build this project on a machine on which I do not have permission to install any software. I thought I could use the generated makefile, but it has dependencies on CMake and fails with cmake: command not found. Is there any solution to force the generated makefile to not contain any cmake-related commands, such as checking the system version? Thanks
Is there any solution to force the generated makefile to not contain any cmake-related commands, such as checking the system version?
No. There is no incentive for cmake to provide such an option, because the whole point of the cmake system is that the cmake program examines the build machine and uses what it finds to generate a Makefile (if you're using that generator) appropriate to the machine. The generated Makefiles are tailored to the machine, and it is not assumed that they would be suitable for any other machine, so there is no reason to suppose that one would need to use one on a machine that does not have cmake. In fact, if you look at the generated Makefiles you'll find all sorts of dependencies on cmake.
Depending on the breadth of your target machine types, you might consider the Autotools instead. Some people dislike them, and they're not a good choice if you want to support Microsoft's toolchain on Windows, but they do have the advantage that an Autotools-based build system can be used to build software on machines that do not themselves have the Autotools installed.
One easy solution is to use static libraries and the -static flag on the command line.
Then you should be able to drop the executable on the target machine and run it.
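For instance, assuming a gcc toolchain (the program and file names here are placeholders):
gcc -static -o myprog main.c
The resulting myprog has no runtime dependency on shared libraries, at the cost of a larger executable.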

Cross build third-party library locations on Linux

I've been cross-compiling my unit tests to ensure they pass on all the platforms of interest, e.g. x86-linux, win32, win64, arm-linux.
The unit tests require the CUnit library,
so I've had to cross-compile that for each platform as well.
CUnit comes with its own autoconf setup, so you can easily cross-build it by specifying --host for configure.
The question I have is where is the 'correct' place to have the CUnit libs installed for the various platforms? i.e. what should I set --prefix to for configure?
My initial guess was:
/usr/local/<platform>/lib/Cunit
i.e. setting --prefix /usr/local/<platform>
e.g. --prefix /usr/local/arm-linux-gnueabihf
which on sudo make install gives you:
/usr/local/arm-linux-gnueabihf/doc/CUnit
/usr/local/arm-linux-gnueabihf/include/CUnit
/usr/local/arm-linux-gnueabihf/lib
/usr/local/arm-linux-gnueabihf/share/CUnit
Obviously, if I don't specify a prefix for configure, each platform build overwrites the previous one, which is no good.
To then successfully link against these platform-specific libs, I need to specify the relevant lib dir for each target in its own LDFLAGS in the Makefile.
Is this the right approach? Have I got the dir structure/location right for this sort of cross-build stuff? I assume there must be a de facto approach, but I'm not sure what it is.
Possibly configure is supposed to handle all this stuff for me? Maybe I just have to set --target correctly, and perhaps --enable-multilib, all with --prefix=/usr/local?
Some of the error msgs I get suggest /usr/lib/gcc-cross might be involved?
From reading more about cross-compilation and the GNU configure and build system, it seems that I should just be setting the --target option for the configure step.
But how do you know what the target names are? Are they some fragment of the cross-compiler names?
The 3 cross compilers I am using are:
arm-linux-gnueabihf-gcc-4.8
i686-w64-mingw32-gcc
x86_64-w64-mingw32-gcc
allowing me to cross-compile for ARM, win32 and win64
My host is 32-bit Ubuntu, which I think might be --host i386-linux, but it seems that configure should get this right by default.
This is the procedure I finally figured out and got to work:
For each of my 3 cross-build tools (arm, win32, win64), my calls to configure looked like:
./configure --host=arm-linux-gnueabihf --build=i686-pc-linux-gnu --prefix=/usr/local/arm-linux-gnueabihf
./configure --host=i686-w64-mingw32 --build=i686-pc-linux-gnu --prefix=/usr/local/i686-w64-mingw32
./configure --host=x86_64-w64-mingw32 --build=i686-pc-linux-gnu --prefix=/usr/local/x86_64-w64-mingw32
Each of these was followed by make and sudo make install.
Prior to calling configure for the ARM cross-build, I had to do:
ln -s /usr/bin/arm-linux-gnueabihf-gcc-4.8 /usr/bin/arm-linux-gnueabihf-gcc
This was because the compiler had -4.8 tagged on the end, so configure could not correctly 'guess' the name of the compiler.
This issue did not apply to either the win32 or win64 mingw compilers.
Note an additional gotcha: when subsequently trying to build against these cross-compiled CUnit libs, none of the cross-compilers seemed to look in /usr/local/include by default, so I had to manually add:
-I/usr/local/include
for each object file built; e.g. I added /usr/local/include to INCLUDE_DIRS in my Makefile.
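In Makefile terms, that was roughly the following (variable names as in my Makefile; yours may differ):
INCLUDE_DIRS += -I/usr/local/include
CFLAGS += $(INCLUDE_DIRS)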
All this finally seems to have given me correctly cross-built CUnit libs, and I have successfully linked against them to produce cross-built unit test binaries for each of the target platforms.
It was not at all easy, and I would venture to call the configure option settings 'counter-intuitive'. As ever, it is worth taking the time to read the relevant docs; this snippet was pertinent:
There are three system names that the build knows about: the machine
you are building on (build), the machine that you are building for
(host), and the machine that GCC will produce code for (target). When
you configure GCC, you specify these with --build=, --host=, and
--target=.
Specifying the host without specifying the build should be avoided, as
configure may (and once did) assume that the host you specify is also
the build, which may not be true.
If build, host, and target are all the same, this is called a native.
If build and host are the same but target is different, this is called
a cross. If build, host, and target are all different this is called a
canadian (for obscure reasons dealing with Canada's political party
and the background of the person working on the build at that time).
If host and target are the same, but build is different, you are using
a cross-compiler to build a native for a different system. Some people
call this a host-x-host, crossed native, or cross-built native.
and also:
When people configure a project with './configure', one often meets these three confusing options, which are related to cross-compilation:
--host: the system on which the generated program will run.
--build: the system on which the program will be built.
--target: this option is only used when building a cross-compiling toolchain; when the toolchain generates executable programs, the target is the system they will run on.
An example with tslib (a touchscreen access library):
'./configure --host=arm-linux --build=i686-pc-linux-gnu': the
dynamic library is built on an x86 Linux computer but will be used
on an embedded ARM Linux system.

Dealing with static libraries when porting C code from one operating system to another

I have been working on some C code on a Windows machine, and now I am in the process of transferring it to a Linux computer where I do not have full privileges. In my code I link to several static libraries.
Is it correct that these libraries need to be re-made for a Linux computer?
The library in question is GSL-1.13 scientific library
Side question, does anyone have a pre-compiled version of the above for Linux?
I have tried using automake to compile the source on the Linux machine, but no makefile seems to be created and no error is output.
Thanks
Yes, you do need to compile any library again when you switch from Windows to GNU/Linux.
As for how to do that, you don't need automake to build GSL. You should read the INSTALL file that comes inside the tarball (the file gsl-1.16.tar.gz) very carefully. In a nutshell, you run the commands
$ ./configure
$ make
inside the directory that you unpacked from the tarball.
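Since you don't have full privileges on that machine, you can also install the library into your home directory rather than the system-wide default, e.g.:
$ ./configure --prefix=$HOME/gsl
$ make
$ make install
Then point your own build at $HOME/gsl/include and $HOME/gsl/lib (with -I and -L) when compiling and linking against GSL.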
