Eclipse CDT static library at constant location?

ARM cortex embedded application.
I have a bootloader that is 40KB, mostly crypto for signature of flash.
I have a 500KB flash.
The chip's sectors are 128KB, and the protection scheme requires the first sector to contain the bootloader, meaning my 40KB bootloader will occupy all 128KB of that sector.
I would like to reclaim some of this wasted space by placing my application's crypto libraries there.
SECTOR0
[64KB bootloader reserved]
[64KB statics]
SECTORn
[application references static]
…
Can I put a static crypto library at a specific location like this with CDT/GCC and with the managed builder?
I am considering this part of the “bootloader code” so I am assuming that I will at least need to modify the .ld file to create a new region?
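Something like the following .ld fragment sketches the layout I mean. The base address 0x08000000, the region sizes, and the archive name libcrypto.a are placeholders for illustration, not values from my actual project:

```ld
/* Hypothetical split of the first 128K sector into a bootloader region
   and a pinned crypto region, with the application after it. */
MEMORY
{
  BOOT   (rx) : ORIGIN = 0x08000000, LENGTH = 64K   /* bootloader     */
  CRYPTO (rx) : ORIGIN = 0x08010000, LENGTH = 64K   /* pinned statics */
  APP    (rx) : ORIGIN = 0x08020000, LENGTH = 384K  /* application    */
}

SECTIONS
{
  /* Pull code and read-only data from the crypto archive into the
     pinned region; KEEP prevents it being garbage-collected. */
  .crypto :
  {
    KEEP(*libcrypto.a:*(.text .text.* .rodata .rodata.*))
  } > CRYPTO
}
```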
The application will need to know the addresses that result from this linking, but it must not include the code or place it in memory itself - is this still a static link? There are only three or four functions in the library that I want to call from the application.
(There are no MPU issues to deal with, I am confident I can jump to the static code area and back even though it’s in the first sector. This static area is intended to be execute only. I am confident only my application software can be installed and executed in the flash region)

Related

How to reserve a Flash memory sector on STM32/platformio/arduino

I am using an STM32F401CE with PlatformIO and the Arduino framework. I would like to reserve a block of 16K Flash memory as an EEPROM replacement for occasional settings written by my app.
How can I cleanly reserve a 16K flash block such that the linker will not place any code there and in a way that doesn't require manual editing of platformio managed files?
I saw examples where the address of a block is just used without the linker being aware of it, and examples where the PlatformIO-managed linker script is edited by hand, which can easily be lost when updating or replicating the project on another machine. Another example I found was AN2594, which emulates an EEPROM using two flash blocks and is overkill for what I need.
I am thinking of maybe adding an assembly file to my project that reserves a flash section at a fixed address, but I am not sure if this is a good idea or how to implement it.
This question is not on how to read/write to flash but just how to safely reserve a block of it.
STM32F401CE flash blocks

Which uses more RAM at run time, dynamic linking or static linking?

I know that dynamically linked executables are smaller on disk, but do they use more RAM at run time? If so, why?
The answer is "it depends how you measure it", and also "it depends which platform you're running on".
Static linking uses less runtime RAM, because for dynamic linking the entire shared object needs to be loaded into memory (I'll be qualifying this statement in a second), whilst with static linking only those functions you actually need are loaded.
The above statement isn't 100% accurate. Only the shared object pages that actually contain code you use are loaded. This is still much less efficient than static linking, which packs just the functions you need tightly together.
On the other hand, dynamic linking uses much less runtime RAM, as all programs using the same shared object use the same in RAM copy of the code (I'll be qualifying this statement in a second).
The above is a true statement on Unix like systems. On Windows, it is not 100% accurate. On Windows (at least on 32bit Intel, I'm not sure about other platforms), DLLs are not compiled with position independent code. As such, each DLL carries the (virtual memory) load address it needs to be loaded at. If one executable links two DLLs that overlap, the loader will relocate one of the DLLs. This requires patching the actual code of the DLL, which means that this DLL now carries code that is specific to this program's use of it, and cannot be shared. Such collisions, however, should be rare, and are usually avoidable.
To illustrate with an example, statically linking glibc will probably cause you to consume more RAM at run time, as this library is, in all likelihood, already loaded in RAM before your program even starts. Statically linking some unique library only your program uses will save run time RAM. The in-between cases are in-between.
Different processes calling the same DLL/so file can share the read-only memory pages; this includes code (text) pages.
However, each DLL loaded in a given program has to have its own page(s) for writable global or static data. These pages may be 4/16/64K or bigger, depending on the OS. If one links statically, the static data of all libraries can be packed together into fewer pages.
Programs, when running on common operating systems like Linux, Windows, MacOSX, Android, etc., run as processes having some virtual address space. This uses virtual memory (implemented by the kernel driving the MMU).
Read a good book like Operating Systems: Three Easy Pieces to understand more.
So programs don't directly consume RAM. RAM is a resource managed by the kernel. When RAM becomes scarce, your system experiences thrashing. Read also about the page cache and about memory overcommitment (a feature that I dislike and that I often disable).
The advantage of using a shared library, when the same library is used by several processes, is that its code segment is appearing (technically is paged) only once in RAM.
However, dynamic linking has a small overhead (even in memory), e.g. to resolve relocations. So if a library is used by only one process, that might consume slightly more RAM than if it was statically linked. In practice you should not bother most of the time, and I recommend using dynamic linking systematically.
And in practice, for huge processes (such as your browser), the data and the heap consumes much more RAM than the code.
On Linux, Drepper's paper How To Write Shared Libraries explains a lot of things in details.
On Linux, you might use proc(5) and pmap(1) to explore virtual address spaces. For example, try cat /proc/$$/maps and cat /proc/self/maps and pmap $$ in a terminal. Use ldd(1) to find out the dynamic libraries dependencies of a program, e.g. ldd /bin/cat. Use strace(1) to find out what syscalls(2) are used by a process. Those relevant to the virtual address space include mmap(2) and munmap, mprotect(2), mlock(2), the old sbrk(2) -obsolete- and execve(2).

BootLoader and Application memory size in AVR

I searched a lot about this question and I did not find any clear answer yet.
As you know, AVR microcontrollers, e.g. the ATmega128, have a Flash memory which can be divided into bootloader and application sections. I've adjusted the parameters of each one and loaded my bootloader and application code. Is there any way (from code or from the terminal) to know the exact size of each section and the bytes still available?
Some people may mention the avr-size command, but that gives me the size of the whole image. I want to distinguish between bootloader and application memory.
Thanks in Advance
You have two firmwares, the bootloader and application, each will have its own size.
For each build, add the linker flag --print-memory-usage to your link command line (via GCC: -Wl,--print-memory-usage) to make it print how much flash and RAM is used.
(This flag is not supported by every tool-chain, but AVR might support it)
More info: https://stackoverflow.com/a/41389481/2002198
Or, you can get the memory usage with avr-size:
avr-size -C --mcu=atmega168 project.elf
Reference: http://www.avrfreaks.net/forum/know-code-size-and-data-size
There's another detail to be aware of: depending on how you load your application (direct flash writing vs. loading through the bootloader), the application will be aligned to flash blocks (usually 2 KiB). Depending on the approach, you will have less flash memory available to the application.
Just read the manual:
The actual address of the start of the Boot Flash section is determined by the BOOTSZ fuses
and you will find your answer.
If you have already built your bootloader, then you should be able to tell how big it is either by looking carefully at the steps you performed to build it, or by examining the HEX file for the bootloader. The HEX file says exactly what addresses the code in it is written to.

For the ARM in C language, how can I know the data is in the internal flash or in the external flash?

I am developing an ARM project in C. Now I need to extend a struct array from 10 to 100 elements, so I need to know if the memory is enough. The external flash is connected by SPI. How can I know whether the data is in the internal flash or in the external flash? The software I use is IAR Embedded Workbench.
That is going to be determined based on your device. The internal and external memory should be mapped to two different blocks in the address space. You can probably figure out which section is mapped where by looking at the linker output files. You should be able to control which variables are mapped to which part of memory by using linker commands, but those are going to be specific to the tools you're using.

How to write a custom kernel on mac?

I've been following the "Mike OS Guide" to make my own kernel, and I got it working. But then I went onto the many guides on the internet for making a boot sector in NASM that loads a main function from a compiled C object. I have tried compiling and linking with all kinds of GCC installations:
x86_64-pc-linux-
arm-uclinux-elf-
arm-agb-elf-
arm-elf-
arm-apple-darwin10-
powerpc-apple-darwin10-
i686-apple-darwin10-
i586-pc-linux-
i386-elf-
All of them fail once I put them onto a floppy like I do with the MikeOS bootstrap. I've tried various tutorials on http://www.osdever.net/ like the one here, and I've tried http://wiki.osdev.org/Bare_Bones , but none work when compiling on a Mac; I have not tried on an actual Linux machine yet. I was wondering how I could get a bootstrap in assembly that calls the C function, put them together into a working kernel file, load that onto a floppy image, and then onto an ISO like in the MikeOS tutorial. Or should I just make the kernel.bin and load it with syslinux? Could anyone give me a tip on how to make this all work in a Mac development environment? I have tools via MacPorts and Homebrew, so that helps. Has anyone successfully done this?
EDIT
Here's my bootsector so far.
I just want to know how to jump to an extern function from C and link it.
There are a few problems with this. First of all, all the compilers you mentioned output either 32-bit or 64-bit code. That's great, but when the boot sector starts, it's running in 16-bit real mode. If you want to be able to run that 32-bit or 64-bit code, you'll need to first switch to the appropriate mode (32-bit protected mode for, well, 32-bit, and long mode for 64-bit).
Then, once you switch to the appropriate mode, you don't even have that much space for code: boot sectors are 512 bytes; two bytes are reserved for the bootable signature, and you'll need some bytes for the code that switches to the appropriate mode. If you want to be able to use partitions on that disk or maybe a FAT filesystem, take away even more usable bytes. You simply won't have enough space for all but the most trivial program.
So how do real operating systems deal with that? Real operating systems tend to use the boot sector to load a bigger bootloader from the disk. Then that bigger bootloader can load the actual kernel and switch to the appropriate mode (although that might be the responsibility of the loaded kernel — it depends).
It can be a lot of work to write a bootloader, so rather than rolling your own, you may want to use GRUB and have your kernel comply with the Multiboot standard. GRUB is a bootloader which will be able to load your kernel from the disk (probably in ELF format) and jump to the entry point in 32-bit protected mode. Helpful, right?
This does not free you from learning assembly, though: The entry point of the kernel must be assembly. Often, all it does is set up a little stack and pass the appropriate registers to a C function with the correct calling convention.
You may think that you can just copy that rather than writing it yourself, and you'd be right, but it doesn't end there. You also need assembly for (at least):
Loading a new global descriptor table.
Handling interrupts.
Using non-memory-mapped I/O ports.
…and so on, not to mention that if you have to debug, you may not have a nice debugger; instead, you'll have to look at disassemblies, register values, and memory dumps. Even if your code is compiled from C, you'll have to know what the underlying assembly does or you won't be able to debug it.
In summary, your main problem is not knowing assembly. As stated before, assembly is essential for operating system development. Once you know assembly thoroughly, then you may be able to start writing an operating system.
