HardFault on STM32 caused by GSL - arm

I successfully cross-compiled the GNU Scientific Library for my STM32F303 with an Arm Cortex-M4, as I've described here:
How to crosscompile GSL for Arm Cortex M4?
The cross-compilation itself went fine, but now every memory allocation from the GSL triggers a HardFault. For example, this line:
gsl_vector_float *x = gsl_vector_float_alloc(2);
or this one:
T = gsl_multimin_fdfminimizer_conjugate_fr;
immediately causes a HardFault. Does anyone have an idea what the reason could be? I am pretty sure I have enough RAM (the IDE shows 59 kB of free RAM at startup). The problem occurs only for GSL allocations; a stand-alone malloc works perfectly.
Furthermore, I found some posts on the internet describing thread safety, in the sense of locks around malloc, as a possible issue. Since the GSL is thread-safe, could this be the reason? I did not find any sign of locks in the source code, though.

As described in the comments, I had indeed used the wrong linker script during cross-compiling (the default one). It worked after I specified the linker script for my specific MCU.
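For context (a sketch, not the actual script from the question): the device-specific script's MEMORY block is what tells the linker where flash and RAM actually live, and it is exactly what a default script gets wrong. The origins and lengths below are typical STM32F303 values used for illustration only; take the real ones from the script generated for your exact part, and pass the script to the linker with -T:

```ld
/* Sketch of the MEMORY block from an STM32F303-style linker script.
   Sizes vary by variant - check your part's datasheet. */
MEMORY
{
  FLASH  (rx)  : ORIGIN = 0x08000000, LENGTH = 256K
  RAM    (xrw) : ORIGIN = 0x20000000, LENGTH = 40K
  CCMRAM (rw)  : ORIGIN = 0x10000000, LENGTH = 8K
}
```

Without a correct MEMORY layout, the heap symbols that malloc's _sbrk relies on can end up at addresses that don't exist on the chip - one plausible mechanism for allocations faulting the way described above.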

Related

How to resolve the stack corrupt error in Embedded System Programming on ARM Cortex

I am trying to program an ARM Cortex-M0+ MCU. Every other run, I get a stack-corrupt error message.
Is there any way to find out what the source of the error might be?
I don't know how to resolve stack-related errors.
One best practice is to use a static analysis tool to make sure that you are not trampling any stack or heap variables.
Try the Clang static analyzer as an easily available open-source solution.
Alternatively, if you can run your code on a host machine, you can use gdb or valgrind to try and find memory errors.

How can linux boot code be written in C?

I'm a newbie learning OS development. The book I read said that the boot loader copies the MBR to 0x7c00 and starts executing there in real mode.
And the example starts with 16-bit assembly code.
But when I looked at today's Linux kernel, arch/x86/boot has 'header.S' and 'boot.h', but the actual code is implemented in main.c.
This seems useful as a way of "not writing assembly."
But how is this done specifically in Linux?
I can roughly imagine that there might be special gcc options and a link strategy, but I can't see the details.
I'm reading this question more as an X-Y problem. It seems to me the question is really about whether you can write a bootloader (boot code) in C for your own OS development. The simple answer is YES, but it is not recommended. Modern Linux kernels are probably not the best source of information for creating bootloaders written in C unless you already understand what their code is doing.
If using GCC there are restrictions on what you can do with the generated code. In newer versions of GCC there is an -m16 option that is documented this way:
The -m16 option is the same as -m32, except for that it outputs the ".code16gcc" assembly directive at the beginning of the assembly output so that the binary can run in 16-bit mode.
This is a bit deceptive. Although the code can run in 16-bit real mode, the code generated by the back end uses 386 address and operand prefixes to make normally 32-bit code execute in 16-bit real mode. This means the code generated by GCC can't be used on processors earlier than the 386 (like the 8086/80186/80286 etc). This can be a problem if you want a bootloader that can run on the widest array of hardware. If you don't care about pre-386 systems then GCC will work.
Bootloader code that uses GCC has another downside. The address and operand prefixes that get added to many instructions add up and can make a bootloader bloated. The first stage of a bootloader is usually very constrained in space, so this could become a real problem.
You will need inline assembly, or assembly-language objects with functions, to interact with the hardware. You don't have access to the Linux C library (printf etc.) in bootloader code. For example, if you want to write to the video display, you have to code that functionality yourself, either by writing directly to video memory or through BIOS interrupts.
To tie it all together and place things in a binary file usable as an MBR, you will likely need a specially crafted linker script. In most projects these linker scripts have an .ld extension. This drives the process of taking all the object files and putting them together in a fashion that is compatible with the legacy BIOS boot process (code that runs in real mode at 0x7c00).
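For illustration, a bare-bones MBR linker script might look like the following sketch. The assumptions here are the usual legacy-BIOS ones: load address 0x7c00 and a single 512-byte sector ending in the 0xAA55 boot signature; the file name and section layout are hypothetical.

```ld
/* Hypothetical mbr.ld - places a single-sector bootloader at 0x7C00
   and emits a raw binary with the BIOS boot signature at offset 510. */
OUTPUT_FORMAT(binary)
SECTIONS
{
  . = 0x7C00;
  .text : { *(.text) *(.rodata) *(.data) }

  /* pad to byte 510 of the sector, then the 0xAA55 signature */
  . = 0x7C00 + 510;
  .sig : { SHORT(0xAA55) }
}
```

A real script usually also discards sections like .eh_frame and .comment that GCC emits but an MBR has no room for.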
There are so many pitfalls in doing this that I recommend against it. If you are intending to write a 32-bit or 64-bit kernel, then I'd suggest not writing your own bootloader and using an existing one like GRUB. The Linux versions of the 1990s had their own bootloader that could be booted from a floppy. Modern Linux relies on third-party bootloaders to do most of that work now. In particular, it supports bootloaders that conform to the Multiboot specification.
There are many tutorials on the internet that use GRUB as a bootloader. OS Dev Wiki is an invaluable resource. They have a Bare Bones tutorial that uses the original Multiboot specification (supported by GRUB) to bootstrap a basic kernel. The Multiboot specification can easily be developed against using a minimal amount of assembly language code. Multiboot-compatible bootloaders will automatically place the CPU in protected mode, enable the A20 line, can be used to get a memory map, and can be told to place you in a specific video mode at boot time.
Last year someone on the #osdev chat asked about writing a 2-stage bootloader located in the first 2 sectors of a floppy disk (or disk image), developed entirely in GCC and inline assembly. I don't recommend this, as it is rather complex and inline assembly is very hard to get right. It is very easy to write bad inline assembly that seems to work but isn't correct.
I have made available some sample code that uses a linker script and C with inline assembly to work with BIOS interrupts to read from the disk and write to the video display. If anything, this code should be an example of why it's non-trivial to do what you are asking.

Cross Compiling for Big-Endian - No valid architectures?

tl;dr
I can't compile glibc for powerpc/mips/armeb/sparc. How can I test big-endian behaviour without emulating a whole system?
Problem Description
I am currently trying to test some code to ensure that it works on a big-endian system (it doesn't) and fix any big-endian errors. Currently I am using qemu-system-ppc with a PowerPC debian image and compiling within that image, as well as running gdb there, etc. However, this is extremely inefficient and I know there is a better way.
As such, I followed this tutorial to create a cross-compiler so I could compile my source for a big-endian system, then run the user-space qemu-??? binary to test it without the overhead of running an entire OS in emulation (in particular, with the -S switch I could run gdb on the host, which is much faster).
Attempt #1:
My first try was powerpc. At first I ran configure with --target=powerpc, but realized that I should use a full triplet and went with powerpc-unknown-linux-gnu. Unfortunately, when I get to the glibc portion, I run into the problem that my gcc does not support IBM 128-bit long doubles (it only supports IEEE). I googled around and there doesn't seem to be a fix (aside from updating gcc past v4.1 - I'm at 6.2 right now).
Attempt #2:
Let's try mips then; I have more experience with it anyway (from university). This makes it to the glibc portion again, but this time it complains that it can't determine the ABI. I tried it with all of -linux-gnu, -gneabi, -eabi, and -none, but it couldn't determine the ABI for any of them.
Attempt #3:
Alright, time to try armeb. This time it gets past determining the ABI, then dumps a message saying that the target is not yet supported by glibc. Same story with sparc.
Question
Given that all of the above failed: how can I build a cross-compiler, from my little-endian system to a big-endian target, that will generate executables I can run in user space with qemu?

Cross Toolchain for ARM U-Boot Build Questions

I'm trying to build my own toolchain for a Raspberry Pi.
I know there are plenty of prebuilt toolchains; this work is for educational reasons.
I'm following the embedded ARM Linux from scratch book.
And I have succeeded in building gcc and uClibc so far.
I'm building for the target arm-unknown-linux-eabi.
Now that it comes to preparing a bootable filesystem, I'm wondering about the bootloader build.
The part about the bootloader for this system seems to be incomplete.
So I'm asking myself: how do I build a U-Boot for this system with my arm-unknown-linux-eabi toolchain?
Do I need to build a toolchain which doesn't depend on Linux kernel calls?
My first research led me to the point that there are separate kinds of toolchains:
the OS-dependent ones (Linux kernel syscalls etc.) and the ones which don't need a kernel underneath, sometimes referred to as "bare-metal" or "standalone" toolchains.
Some sources mention that it would be possible to build a U-Boot with the Linux toolchain.
If this is true, why and how does this work?
And if I have to build a second, "bare-metal" toolchain, where can I find information about the difference between these two? Do I need another libstdc?
You can build U-Boot with the same cross-toolchain used to build the kernel - and most probably the rest of the user space of the system.
A bootloader is - by definition - self-contained and doesn't care about your choice of C-runtime library because it doesn't use it. Therefore the issue of sys-calls doesn't come into it.
A toolchain always needs to be hosted by a fully functioning development system - invariably not your target system. Whatever references you see to a 'bare-metal toolchain' are not about the compiler's own use of syscalls (the compiler itself relies heavily on the host operating system for I/O). What is important when building bootloaders and kernels is that the compiler and linker are configured to produce statically linked code that can run at a specific memory address.
In almost all possible ways, there is no difference between the embedded and the Linux toolchain. But there is one exception.
That exception is __clear_cache - a function that can be generated by the compiler and in a "Linux"-toolchain includes a system call to synchronize instruction and data caches. (See http://blogs.arm.com/software-enablement/141-caches-and-self-modifying-code/ for more information about that bit.)
Now, unless you explicitly add a call to that function, the only way I know for it to be invoked is by writing nested functions in C (a GCC extension that should be avoided).
But it is a difference.

LPC11xx Cortex-M0 FreeRTOS Hardfault

I have been working on a project on an NXP LPC11xx device with FreeRTOS. The issue is that the demo project for it uses some Eclipse-based IDE, and I won't have any of that. I got it converted to compile on Linux, and I can program the device without any issue. The problem I am seeing is that when the demo project gets to a memset(), a hard fault is generated in the CPU. This is not my code, but I have a feeling it is related to something that I did. I am using the CodeSourcery "gcc version 4.4.1 (Sourcery G++ Lite 2010q1-188)" compiler (which I have used without issue on Cortex-M3 devices). I see the hard fault generated on a damn "lsls" instruction which touches nothing in memory; see this massive pastebin of GDB output: http://pastebin.com/3pg0puSe (I don't know what common practice is here for large blocks of text like that.)
Any thoughts, anyone? Thanks for the help!
Hard to see, but the last instruction was blx r3.
It looks like R3 did not have its lowest bit set (for Thumb mode); its value is 0x481c.
This will, IIRC, cause an illegal-instruction exception. Your debugger is fooling you in this case, as the value loaded into the PC by blx was probably invalid.
You might simply have forgotten the linker option which sets the instruction mode/CPU model (and the set of libs to use), so it linked against an ARM-mode library.
