RV32E version of the soft-float routines such as __divdi3 and __mulsi3

I have managed to build an RV32E cross-compiler on my Intel Ubuntu machine using the official RISC-V GitHub toolchain (github.com/riscv/riscv-gnu-toolchain) with the following configuration:
./configure --prefix=/home/riscv --with-arch=rv32i --with-abi=ilp32e
The ilp32e ABI specifies soft float for RV32E. This produces a compiler that works fine on my simple C source code, and if I disassemble the resulting application it does indeed appear to stick to the RV32E specification: the assembly generated for my own code uses only the first 16 registers.
I use static linking, and it pulls in the expected set of soft-float routines such as __divdi3 and __mulsi3. Unfortunately the pulled-in routines use all 32 registers, not the restricted lower 16 of RV32E. Hence, not very useful!
I cannot find where this statically linked code comes from. Is it compiled from C source, and therefore being compiled without the RV32E restriction? Or was it written as hand-coded assembly targeting the full RV32I rather than RV32E? I have tried grepping around the sources but have had no luck finding the actual code that gets statically linked.
Any ideas?
EDIT: Just checked in more detail, and the compiler is not restricting itself to the first 16 registers either. It turns out a simple test routine happens to use only the first 16, but more complex code uses the others as well. Maybe RV32E is not implemented yet?
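For reference, a contrived routine along these lines (invented purely for illustration) creates enough register pressure to push the compiler past the 16 registers RV32E allows; disassembling it and looking for x16-x31 (a6/a7, s2-s11, t3-t6) shows whether the restriction is honoured:

long stress(long a, long b, long c, long d,
            long e, long f, long g, long h)
{
    /* With the standard ilp32 calling convention the 7th and 8th
       arguments already arrive in a6/a7 (x16/x17), which RV32E does
       not have; the temporaries below add further register pressure. */
    long t0 = a * b, t1 = c * d, t2 = e * f, t3 = g * h;
    long t4 = a + c, t5 = b + d, t6 = e + g, t7 = f + h;
    return (t0 ^ t1) + (t2 ^ t3) + (t4 ^ t5) + (t6 ^ t7)
         + (a - h) + (b - g) + (c - f) + (d - e);
}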

The configure.ac file contains this code:
AS_IF([test "x$with_abi" == xdefault],
      [AS_CASE([$with_arch],
               [*rv64g* | *rv64*d*], [with_abi=lp64d],
               [*rv64*f*], [with_abi=lp64f],
               [*rv64*], [with_abi=lp64],
               [*rv32g* | *rv32*d*], [with_abi=ilp32d],
               [*rv32*f*], [with_abi=ilp32f],
               [*rv32*], [with_abi=ilp32],
               [AC_MSG_ERROR([Unknown arch])]
      )])
This seems to map your input of rv32i to the ABI ilp32, ignoring the e; there is no pattern that ever selects an ...e ABI. So yes, it seems support for the ...e ABIs is not fully implemented yet.


Linking additional code for the microcontroller (AVR) against already existing code

The problem definition:
There is a need to have two parts of the code in an AVR microcontroller: a fixed part that is always there and does not change (often), and a transient part that is replaced or appended (not so) often. The challenge is to give the transient code the ability to call functions and access global variables of the fixed part -- and vice versa.
It is quite obvious that there should be special methods for the fixed code to access the transient code -- for example, keeping calculated function pointers in RAM and calling transient procedures only through them.
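To make that concrete, the fixed-to-transient direction I have in mind looks roughly like this sketch (both the struct layout and the address are invented for illustration):

#include <stdint.h>

/* A table of entry points that the transient image fills in at a known
   RAM address and that the fixed image only ever calls through.
   The layout and the address 0x0100 are invented for this sketch. */
typedef struct {
    void    (*init)(void);
    uint8_t (*handle_event)(uint8_t event);
} transient_api_t;

#define TRANSIENT_API ((volatile transient_api_t *)0x0100)

/* The fixed code never calls transient functions directly, only via the table. */
static uint8_t dispatch(uint8_t event)
{
    if (TRANSIENT_API->handle_event != 0)
        return TRANSIENT_API->handle_event(event);
    return 0;
}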
For calls in the backwards direction, I was thinking about linking the transient code against the existing .elf file of the fixed code.
I'm using the avr-gcc toolchain (as in ubuntu 20.20), gcc version 5.4.0.
What I've already tried:
adding '-shared' as a link argument when building the fixed code -- appears to be unsupported for AVR (the linker reports an error).
adding '-Wl,--export-dynamic' instead as a link argument -- it seems to be ignored; no .dynsym section appears in the elf.
There is still a .symtab section in the fixed code's elf -- could that somehow be used to link against it?
Note: my division into 'fixed' and 'transient' code has nothing to do with the boot area of some AVR microcontrollers; boot is just something I do not care about here.
Note2: The question is much like this one, but gives a clear explanation of the need.
You have to forget all big-computer knowledge. 8-bit AVRs are tiny microcontrollers. Code has to be linked statically. There is no other way.

How do I build newlib for size optimization?

I'm building an arm-eabi-gcc toolchain with Newlib 2.5.0 as the target C library.
The target embedded system would prefer smaller code size over execution speed. How do I configure newlib to favour smaller code size?
The default build does things like produce a version of strstr that is over 1KB in code size.
There is fat in Newlib that can be addressed with Newlib-nano, which is already part of GCC ARM Embedded, as discussed here. (Note the article is from 2014, so the information may be outdated, but there appears to be Newlib-nano support in the current v6-2017 too.)
It removes some features added after C89 that are rarely used in MCU-based embedded systems, simplifies complex functions such as formatted I/O, and removes wide-character support from non-wide-character-specific functions. Critically, with respect to this question, the default Newlib-nano build is already size-optimised (-Os); in GCC ARM Embedded it is selected at link time with --specs=nano.specs.
Configure newlib like this:
CFLAGS_FOR_TARGET="-DPREFER_SIZE_OVER_SPEED=1 -Os" \
../newlib-2.5.0/configure
(where I've omitted the rest of the arguments I used for configure; they don't change based on this issue).
There isn't a configure flag, but the configure script reads certain variables from the environment. CFLAGS_FOR_TARGET means flags used when building for the target system.
Not to be confused with CFLAGS_FOR_BUILD, which are the flags that would be used if the build system needed to make any auxiliary executables to run on the build machine as part of the build process.
I couldn't find any official documentation on this, but searching the source code, it contains many instances of testing for PREFER_SIZE_OVER_SPEED or __OPTIMIZE_SIZE__. Based on a quick grep, these two flags are almost identical. The only difference I found was a case in the printf family: if a null pointer is passed for %s, the former prints (null), but the latter barrels on ahead, probably causing a crash.
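The pattern in the sources looks roughly like this (a paraphrase for illustration, not a verbatim excerpt from Newlib; note that GCC defines __OPTIMIZE_SIZE__ automatically when compiling with -Os, which is why the two macros are nearly interchangeable):

#include <string.h>

/* Paraphrased illustration of the conditional pattern used throughout
   the Newlib sources, not a verbatim excerpt. */
char *my_strcpy(char *dst, const char *src)
{
#if defined(PREFER_SIZE_OVER_SPEED) || defined(__OPTIMIZE_SIZE__)
    /* small, simple byte-at-a-time loop */
    char *d = dst;
    while ((*d++ = *src++) != '\0')
        ;
    return dst;
#else
    /* a larger, faster word-at-a-time implementation would live here */
    return strcpy(dst, src);
#endif
}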

Visual Studio - Compiling 32-bit code inside 64-bit project

So straight to my question: how can I compile my ASM files with a 32-bit assembler, include them in my 64-bit project, and access the compiled code via the ASM files' function names?
If it is a bit unclear, I can elaborate:
I'm converting a project of mine from 32-bit to 64-bit and I ran into a technical problem. My project compiles an ASM file and uses the compiled binary as input for its own purposes.
When my project was 32-bit, it was quite easy. I included the ASM files in the project and added a build rule to compile them with the Microsoft Macro Assembler. I could then access the compiled code from my 32-bit project by exporting the prototype of each function I wanted to call to a .h header file and referring to it by name (this worked because the ASM was compiled to an obj and the linker knew the symbols from the exported prototypes).
Now I need to convert this code to 64-bit, but I still need the ASM to be compiled as 32-bit code, and I still need to be able to do the same thing (access the compiled 32-bit code from my 64-bit program).
However, when I try to compile it, the assembler obviously doesn't recognize the instructions, because the whole project is now being compiled as 64-bit code.
Thanks in advance.
If I were trying to embed 32-bit code inside a 64-bit program (which is a dubious thing to do, but let's say for the sake of argument that you have a good reason and actually know what you're doing with the result), I'd take the 32-bit code -- whether written in C, assembly, or something else -- and compile it as a separate project that produces a DLL as output. This involves no extra weirdness in the compile chain: it's just an ordinary 32-bit DLL.
That 32-bit DLL can then be embedded in your 64-bit application as a binary resource — just a blob of memory that you can load and access.
So then how can you actually do anything with the compiled code in that DLL? I'd use a somewhat-hacked version of Joachim Bauch's MemoryModule library to access it. MemoryModule is designed to load DLLs from a chunk of memory and provide access to their exports -- just like the Windows API's LoadLibrary(), only from memory instead of from a file. It's designed for the same bit size as the calling process, but with a bit of hackery you could probably compile it as a 64-bit library that is able to read a 32-bit library. The resulting usage would be pretty simple:
// Handle types come from <windows.h>; HMEMORYMODULE comes from MemoryModule.h.
HMODULE hmodule = GetModuleHandle(NULL);  // the current module
HRSRC hresource;
HGLOBAL hglobal;
void *data;
DWORD size;
HMEMORYMODULE libraryHandle;
FARPROC myFunction;
// Load the embedded DLL first from the current module.
hresource = FindResource(hmodule, "MyLibrary.DLL", "binary");
hglobal = LoadResource(hmodule, hresource);
data = LockResource(hglobal);
size = SizeofResource(hmodule, hresource);
// Turn the raw buffer into a "library".
libraryHandle = MemoryLoadLibrary(data, size);
// Get a pointer to some export within it.
myFunction = MemoryGetProcAddress(libraryHandle, "myFunction");
That said, as I alluded to before (and others have also pointed out), even if you can get pointers to the exports, you won't be able to invoke them, because the code is 32-bit and might not even be loaded at an address below the 4GB mark. But if you really want to embed 32-bit code in a 64-bit application, that's how I'd go about it.

How to write your own code generator backend for gcc?

I have created my very own (very simple) byte code language, and a virtual machine to execute it. It works fine, but now I'd like to use gcc (or any other freely available compiler) to generate byte code for this machine from a normal C program. So the question is: how do I modify or extend gcc so that it can output my own byte code? Note that I do NOT want to compile my byte code to machine code; I want to "compile" C code to (my own) byte code.
I realize that this is a potentially large question, and it is possible that the best answer is "go look at the gcc source code". I just need some help getting started. I figure there must be articles or books that describe the process of adding a custom code generator to gcc, but I haven't found anything by googling.
I am busy porting gcc to an 8-bit processor we designed earlier. It is kind of a difficult task for our machine because it is 8-bit and we have only one accumulator, but if you have more resources it can become easier. This is how we are trying to manage it with gcc 4.9, using cygwin:
Download gcc 4.9 source
Add your architecture name to config.sub: around line 250, look for # Decode aliases for certain CPU-COMPANY combinations and add | my_processor \ to that list.
In that same file, look for # Recognize the basic CPU types with company name. and add yourself to that list too: | my_processor-* \
In gcc/config.gcc, look for case ${target} (around line 880) and add your target in the following way:
;;
my_processor*-*-*)
    c_target_objs="my_processor-c.o"
    cxx_target_objs="my_processor-c.o"
    target_has_targetm_common=no
    tmake_file="${tmake_file} my_processor/t-my_processor"
    ;;
Create a folder gcc-4.9.0\gcc\config\my_processor
Copy files from an existing port and just edit them, or create your own from scratch. In our project we copied all the files from the msp430 port and edited them all.
You should have the following files (not all files are mandatory):
my_processor.c
my_processor.h
my_processor.md
my_processor.opt
my_processor-c.c
my_processor.def
my_processor-protos.h
constraints.md
predicates.md
README.txt
t-my_processor
create the directory gcc-4.9.0/build/object
run ../../configure --target=my_processor --prefix=<path for my compiler> --enable-languages="c"
make
make install
Do a lot of research and debugging.
Have fun.
It is hard work.
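To give a flavour of what goes into those files, the my_processor.h header ends up defining target macros along these lines (the macro names are real GCC target macros, but the values here are invented for a hypothetical single-accumulator machine; the real list is much longer):

/* Illustrative fragment of gcc/config/my_processor/my_processor.h.
   The macro names are real GCC target macros; the values are invented
   for a hypothetical single-accumulator 8-bit machine. */
#define FIRST_PSEUDO_REGISTER 4
#define REGISTER_NAMES        { "acc", "x", "y", "sp" }
#define FIXED_REGISTERS       { 0, 0, 0, 1 }   /* sp is not allocatable */
#define CALL_USED_REGISTERS   { 1, 1, 1, 1 }   /* all clobbered by calls */
#define UNITS_PER_WORD        1                /* 8-bit machine word */
#define Pmode                 HImode           /* 16-bit addresses */
#define FUNCTION_MODE         HImode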
I also designed my own "architecture" with its own byte code and wanted to compile C/C++ code for it with GCC. This is how I did it:
First you should read everything about porting in the GCC manual.
Also don't forget to read the GCC Internals documentation.
Read a lot about compilers.
Also look at this question and the answers here.
Google for more information.
Ask yourself if you are really ready.
Be sure to have a very good coffee machine... you will need it.
Start adding the machine-dependent files to gcc.
Compile gcc in a cross host-target way.
Check the resulting code in a hex editor.
Do more tests.
Now have fun with your own architecture :D
When you are finished you can use C or C++ only without OS-dependent libraries (you currently have no running OS on your architecture), and you should then (if you need it) compile many other libraries with your cross-compiler to have a good framework.
PS: LLVM (Clang) is easier to port... maybe you want to start there?
It's not as hard as all that. If your target machine is reasonably like another, take its RTL (?) definitions as a starting point and amend them, then make it compile and test through the bootstrap stages; rinse and repeat until it works. You probably don't have to write any actual code, just machine definition templates.

Delphi dcu to obj

Is there a way to convert a Delphi .dcu file to an .obj file so that it can be linked using a compiler like GCC? I've not used Delphi for a couple of years but would like to use it for a project again if this is possible.
Delphi can output .obj files, but they are in a 32-bit variant of Intel OMF. GCC, on the other hand, works with ELF (Linux, most Unixes), COFF (on Windows) or Mach-O (Mac).
But that alone is not enough. It's hard to write much code without using the runtime library, and the implementation of the runtime library will be dependent on low-level details of the compiler and linker architecture, for things like correct order of initialization.
Moreover, there's more to compatibility than just the object file format; code on Linux, in particular, needs to be position-independent, which means it can't use absolute values to reference global symbols, but rather must index all its global data from a register or relative to the instruction pointer, so that the code can be relocated in memory without rewriting references.
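For instance, even a trivial global access illustrates the difference (a made-up C fragment; compile it with and without -fPIC and compare the generated assembly):

/* Illustrative only: compiled with -fPIC, the access to 'counter' goes
   through the GOT (or is instruction-pointer-relative) instead of using
   an absolute address, so the code can be relocated without patching. */
extern int counter;

int bump(void)
{
    return ++counter;
}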
DCU files are a serialization of the Delphi symbol tables and code generated for each proc, and are thus highly dependent on the implementation details of the compiler, which changes from one version to the next.
All this is to say that it's unlikely that you'd be able to get much Delphi (dcc32) code linking into a GNU environment, unless you restricted yourself to the absolute minimum of non-managed data types (no strings, no interfaces) and procedural code (no classes, no initialization section, no data that needs initialization, etc.)
(answer to various FPC remarks, but I need more room)
For a good understanding, you have to know that a Delphi .dcu translates to two different FPC files: a .ppu file with the mentioned symtable stuff, which includes non-linkable code like inline functions and generic definitions, and a .o which is mingw-compatible (COFF) on Windows. Cygwin is mingw-compatible too at the linking level (but the runtime is different and scary). Anyway, mingw32/64 is our reference gcc on Windows.
The PPU has a similar version problem to Delphi's DCU, probably for the same reasons. The ppu format is different in nearly every major release (so 2.0, 2.2, 2.4), and typically changes 2-3 times a year in trunk.
So while FPC on Windows uses its own assemblers and linkers, the .o's it generates are still compatible with mingw32. In general FPC's output is very gcc-compatible, and it is often possible to link in gcc static libs directly, allowing e.g. mysql and postgres link libraries to be linked into apps with a suitable license (like e.g. GPL). On 64-bit they should be compatible too, but this is probably less tested than win32.
The textmode IDE even links in the entire GDB debugger in library form. GDB is one of the main reasons for gcc compatibility on Windows.
While Barry's points about the runtime in general hold for FPC too, it might be slightly easier to work around this. It might only require calling certain functions from your startup code to initialize the FPC RTL, and similarly for finalization. Compile a minimal FPC program with -al and look at the resulting assembler in the .s file (most notably initializeunits and finalizeunits). Moreover, the RTL is more flexible and probably more easily cut down to a minimum.
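From the C side that would end up looking something like this sketch (the symbol names below are placeholders; take the real ones from the .s listing as just described):

/* Sketch only: the real entry-point names must be taken from the -al
   assembler listing of a minimal FPC program, as described above. */
extern void FPC_INITIALIZEUNITS(void);   /* placeholder name */
extern void FPC_FINALIZEUNITS(void);     /* placeholder name */
extern int  some_fpc_function(int x);    /* an FPC routine exported as cdecl */

int main(void)
{
    FPC_INITIALIZEUNITS();        /* run the units' initialization sections */
    int r = some_fpc_function(42);
    FPC_FINALIZEUNITS();          /* run the finalization sections */
    return r;
}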
Of course, as soon as you also require exceptions to work across gcc<->fpc boundaries you are out of luck. FPC does not use SEH, or any scheme compatible with anything else ATM. (Contrary to Delphi, which uses SEH, which at least in theory should give you an advantage there, Barry?) OTOH, gcc might use its own libunwind instead of SEH.
Note that the default calling convention of FPC on x86 is the Delphi-compatible register convention, so you might need to insert proper cdecl modifiers (which should be gcc-compatible), or you can even set it for entire units at a time using {$calling cdecl}.
On *nix this is bog standard (e.g. apache modules); I don't know many people who do this on win32 though.
About compatibility: FPC can compile packages like Indy, Teechart, Zeos, ICS, Synapse, VST and reams more with little or no modifications. The dialect level of released versions is a mix of D7 and up, with the focus on D7. The dialect level is slowly creeping towards D2006 in trunk versions (with for..in, class abstract, etc.).
Yes. Have a look at the Project Options dialog box.
As far as I am aware, Delphi only supports the OMF object file format. You may want to try an object format converter such as Agner Fog's objconv.
Since the DCU format is proprietary and has a tendency to change from one version of Delphi to the next, there's probably no reliable way to convert a DCU to an OBJ. Your best bet is to build them as OBJ files in the first place, as per Andreas's answer.
