I am trying to develop an application to decode AAC on an ARM Cortex-A9 processor. I will not be using an OS, so this will be a bare-metal application.
Are there any libraries already available for this?
I have used the mstorsjo/fdk-aac library on Windows and Ubuntu. Is it possible to compile it for ARM to run without an OS?
Can anyone point me in the right direction? I searched a lot on the internet but could not get anywhere.
Thank you.
The fdk-aac codec library does depend on a libc for standard C functions like malloc and free, math functions like sin, stdio for file handling, etc. If you run on bare metal, you need to provide these functions somehow. (If you don't need the sublibrary for opening files, you probably don't need to provide the stdio functions, or it is OK to replace them with stubs.) Even though the source files are C++, they don't seem to use the standard C++ library, so you probably don't need to provide that.
Luckily most of these functions seem to be separated out for easy replacement/redirection - have a look at libSYS/src/genericStds.cpp to see which facilities it depends on from the platform.
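For illustration only (this is not fdk-aac's actual porting interface), the kind of platform glue a bare-metal build might supply could look like the sketch below: a fixed-size bump allocator standing in for malloc/free and a stub for printf-style output. The heap size and names are assumptions, not values taken from the library.

    #include <stddef.h>

    #define HEAP_SIZE (512 * 1024)      /* assumed budget for decoder state */
    static unsigned char heap[HEAP_SIZE];
    static size_t heap_used;

    /* Bump allocator: hands out 8-byte-aligned blocks, never reclaims them. */
    void *malloc(size_t size)
    {
        size = (size + 7u) & ~(size_t)7u;
        if (heap_used + size > HEAP_SIZE)
            return NULL;
        void *p = &heap[heap_used];
        heap_used += size;
        return p;
    }

    void free(void *ptr)
    {
        (void)ptr;                      /* no-op: acceptable if decoder memory is allocated once */
    }

    /* Stdio stub: drop diagnostics, or route them to a UART if one exists. */
    int printf(const char *fmt, ...)
    {
        (void)fmt;
        return 0;
    }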
Related
I'm writing an OS that should run on a variety of SoCs (e.g. Xilinx Zynq, Freescale QorIQ).
My problem is that not all of the provided IDEs (from Xilinx, Freescale, etc.) provide the same libraries (standard C & POSIX libraries).
For instance, the CodeWarrior IDE has the timespec structure, while Xilinx's doesn't.
Also, sleep is implemented in some of the provided libs, but I have my own implementation.
I want my code to be independent of the compiler (some manufacturers provide more than one IDE and with a different compiler).
Any suggestions?
My suggestion: Code to POSIX standards. Where the vendor library falls short of POSIX, implement a POSIX layer yourself.
Leave the core OS generally #ifdef-free, and put the mess in a conditionally-compiled compatibility layer.
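For example, a compatibility header along these lines keeps the mess in one place; the TOOLCHAIN_XILINX macro and the os_sleep name are illustrative assumptions, not anything a vendor actually defines:

    #ifndef COMPAT_POSIX_H
    #define COMPAT_POSIX_H

    #include <time.h>

    #ifdef TOOLCHAIN_XILINX         /* assumed: this vendor libc lacks struct timespec */
    struct timespec {
        long tv_sec;
        long tv_nsec;
    };
    #endif

    /* The core OS always calls os_sleep(), implemented once per port,
       instead of whatever sleep() a vendor library may or may not provide. */
    unsigned int os_sleep(unsigned int seconds);

    #endif /* COMPAT_POSIX_H */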
The simple (though longer-to-implement) solution is to not depend on the library provided by the vendor. Write your own library. Probably this can be done with a little bit of layering. All of them provide strlen(), for example.
Is it possible to write code in C, statically build it into a binary such as an ELF or PE, strip its header and all unnecessary metadata to get a raw binary, and then wrap that raw binary in the other OS's format (ELF to PE, or PE to ELF)?
Have you done this before?
Is it possible?
What are the issues and concerns?
How would this be possible?
And if not, why not?
What am I misunderstanding about static builds?
Doesn't a static build remove any need for third-party, standard and OS libraries and headers?
Why can't we remove, say, the ELF metadata and put in the metadata and other specifics that PE needs?
Note:
I mean cross-OS, not cross-hardware.
[Read after reading below!]
As you can see, the best answer so far (!) just says to keep going and learn about cross-platform development issues!!! How crazy is that?! Thanks to philosophy!!!
I would say that it's possible, but the process is complicated by many, many details.
ABI compatibility
The first thing to think of is Application Binary Interface (ABI) compatibility. Unless you're able to call your functions the same way, the code is broken. So I guess (though I can't check at the moment) that compiling code with gcc on Linux/OS X and MinGW gcc on Windows should give the same binary code as long as no external functions are called. The problem here is that executable metadata may rely on some ABI assumptions.
Standard libraries
That seems to be the largest hurdle, partly because of the C preprocessor, which can inline some procedures on some platforms while leaving them to run time on others. Also, cross-platform dynamic interoperation with standard libraries is close to impossible, though theoretically one can imagine code that uses a limited subset of the C standard library that is exposed through the same ABI on different platforms.
A static build mostly eliminates problems of interaction with other user-space code, but there is still the huge issue of interfacing with the kernel: on x86 Linux that means int $0x80 calls and a platform-specific set of syscall numbers that does not map to Windows in any direct way.
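To make the difference concrete, here is a minimal sketch (32-bit x86 Linux only, so it must be built with gcc -m32) of invoking write(2) directly through int $0x80, bypassing libc entirely; the same machine code means nothing to a Windows kernel:

    #include <stddef.h>

    /* Raw Linux syscall: eax = syscall number, ebx/ecx/edx = arguments. */
    static long linux_write(int fd, const void *buf, size_t len)
    {
        long ret;
        __asm__ volatile ("int $0x80"
                          : "=a" (ret)
                          : "a" (4),    /* __NR_write on 32-bit x86 Linux */
                            "b" (fd), "c" (buf), "d" (len)
                          : "memory");
        return ret;
    }

    int main(void)
    {
        linux_write(1, "hello\n", 6);   /* fd 1 = stdout */
        return 0;
    }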
OS-specific register use
As far as I know, Windows uses register %fs for storing some OS-wide exception-handling stuff, so a binary compiled on Linux should avoid cluttering it. There might be other similar issues. Also, C++ exceptions on Windows are mostly done with OS exceptions.
Virtual addresses
Again, AFAIK Windows DLLs have a predefined base address they expect to be loaded at in the virtual address space of a process, whereas Linux uses position-independent code for shared libraries. So there might be issues with overlapping areas of the executable and the ported code, unless the ported position-dependent code is recompiled to be position-independent.
So, while theoretically possible, such a transformation would be very fragile in real situations, and it's impossible to transplant the whole statically built code as-is - some parts may be transferred intact, but they must be relinked against system-specific code that interfaces with the other kernel properly.
P.S. I think Wine is a good example of running binary code on a quite different system. It tricks a Windows program to think it's running in Windows environment and uses the same machine code - most of the time that works well (if a program does not use private system low-level routines or unavailable libraries).
So I have a minimal OS that doesn't do much. There's a bootloader, that loads a basic C kernel in 32-bit protected mode. How do I port in a C library so I can use things like printf? I'm looking to use the GNU C Library. Are there any tutorials anywhere?
OK, porting in a C library isn't that hard; I'm using Newlib in my kernel. Here is a tutorial to start with: http://wiki.osdev.org/Porting_Newlib.
You basically need to:
Compile the library (for example Newlib) using your cross compiler
Provide stub implementations for a list of system functions (like fork, fstat, etc.) in your kernel
Link the library and your kernel together
If you want to use functions like malloc or printf (which uses malloc internally), you need some kind of memory management and at least a minimal working implementation of sbrk, as sketched below.
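A minimal sbrk stub, for illustration, might look like this; it assumes the linker script exports an end symbol marking the end of the loaded image and that the heap can simply grow upward from there:

    #include <stddef.h>

    extern char end;                /* assumption: defined by the linker script */
    static char *heap_end;

    /* Simplest possible sbrk for newlib's malloc: hand out memory past the
       end of the image and never reclaim it. */
    void *_sbrk(ptrdiff_t incr)
    {
        if (heap_end == NULL)
            heap_end = &end;

        char *prev = heap_end;
        /* A real port should check against the top of available RAM and
           return (void *)-1 with errno set to ENOMEM on failure. */
        heap_end += incr;
        return prev;
    }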
I strongly recommend against glibc. It is a beast.
Try newlib instead. Porting it to a new kernel is easy. You just need to write a few support functions, as explained here.
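For instance, the output path ends up in a _write support function; below is a minimal sketch that assumes a hypothetical memory-mapped UART transmit register (the address and register name are illustrative, not taken from any real board):

    /* Hedged sketch: newlib's printf eventually calls _write; here every
       descriptor is routed to an assumed memory-mapped UART TX register. */
    #define UART_TX (*(volatile unsigned int *)0x101F1000u)  /* hypothetical address */

    int _write(int fd, const char *buf, int len)
    {
        (void)fd;                       /* treat every descriptor as the console */
        for (int i = 0; i < len; i++)
            UART_TX = (unsigned int)buf[i];
        return len;
    }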
Another new kid on the block is musl which specifically aims to improve the situation in embedded space.
It's probably not the best choice for a beginner, though, since it's still pretty much work in progress.
Better to look for a small libc, like uClibc. The GNU C library is huge. And as the comments say, the first step is to get a C compiler going.
What are you trying to do? Building a full operating system is a job for a group of people lasting a few years... better start with something that already works, and hack on the parts that most interest you.
I was using PIC microcontrollers for my projects. Now I would like to move to ARM-based controllers, and I would like to start with ARM on Linux (using C). But I have no idea how to get started with Linux: which compiler is best, what things I need to study, and so on - a lot of confusion. Can you guys help me with that? My projects usually involve UART, I2C, LCDs and such things. I am not using any RTOS.
Sorry for my bad English
Once you put a heavyweight OS like Linux on a device, the level of abstraction from the hardware it provides makes it largely irrelevant what the chip is. If you want to learn something about ARM specifically, using Linux is a way of avoiding exactly that!
Moreover, the jump from PIC to ARM + Linux is huge. Linux does not get out of bed for less than 4 MB of RAM and considerably more non-volatile storage - and that is a bare minimum. ARM chips cover a broad spectrum, with low-end parts not even capable of supporting Linux. To make Linux worthwhile you need an ARM part with MMU support, which excludes a large range of ARM7 and Cortex-M parts.
There are plenty of smaller operating systems for ARM that will allow you to perform efficient (and hard real-time) scheduling and IPC with a very small footprint. They range from simple scheduling kernels such as FreeRTOS to more complete operating systems with standard device support and networking such as eCos. Even if you use a simple scheduler, there are plenty of libraries available to support networking, filesystems, USB etc.
The answer to your question about the compiler is almost certainly GCC - that is the compiler Linux is built with. You will need a cross-compiler to build the kernel itself, but if you do have an ARM platform with sufficient resources, then once you have Linux running on it, your target can host a compiler natively.
If you truly want to use Linux on ARM against all my advice, then the lowest-cost, least-effort approach is perhaps to use a Raspberry Pi. It is an ARM11-based board that runs Linux out of the box, is increasingly widely supported, and can be overclocked to 900 MHz.
You can also try using the BeagleBone development board. To start with, it has features like UART, I2C and others, and you can also try developing device driver modules for its hardware.
ARM Linux compilers and build toolchains are provided by many vendors. Below are the options I know of:
1. ARM themselves, in the form of their product DS-5;
2. CodeSourcery, now acquired by Mentor Graphics. See some instructions to obtain and install the CodeSourcery toolchain for ARM Linux here;
3. To first start programming for ARM (C, assembly), I find this Windows/Cygwin version of the ARM Linux toolchain very helpful. Here. These are prebuilt executables which work under Cygwin (a POSIX shell layer) on Windows;
4. Another option would be to cross-compile a gcc/g++ toolchain on Linux for the ARM target of your choice. A web search will turn up information on how this is done, but it can be a slightly more involved and long-winded process.
enjoy ARM'ing.
First, you should ask yourself whether you really need to program in assembly language; most modern compilers are hard to beat when it comes to generating optimized code.
Then, if you decide you really need it, you can make life easier for yourself by using inline assembler and letting the compiler write the glue code for you, as shown in this Wikipedia article.
As for the compiler to use: among free compilers there are practically only two choices, either gcc or clang.
There is also a non-free toolchain from ARM which, when I last tried it 5 years ago, produced about 30% faster code than gcc did at the time. I have not used it since.
The latest version of this compiler can be found here.
You can also write standalone assembler code in .s files; both gcc and clang can compile a .s file into a .o in the same way you would compile a .c or .cpp file.
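As a small example of the inline-assembler route mentioned above, here is a hedged sketch using GCC extended asm on ARM; it assumes a core that implements the DSP QADD instruction (ARMv5TE and later), such as a Cortex-A part:

    /* Saturating 32-bit add via the ARM QADD instruction; the compiler
       allocates registers and generates the surrounding glue code. */
    int saturating_add(int a, int b)
    {
        int result;
        __asm__ ("qadd %0, %1, %2"
                 : "=r" (result)
                 : "r" (a), "r" (b));
        return result;
    }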
Compile
If you are using an STM32-based microcontroller, you need to get CMSIS and the GNU arm-none-eabi-gcc package installed. Then you need to write your own makefile to pass your C sources to the ARM GCC compiler.
Programming
For the programming step you need to install OpenOCD and configure it for your specific programmer. You can find a full description of how to do that on my blog
http://bijan.binaee.com/index.php/2016/04/14/how-to-program-cortex-m-under-gnulinux-arch/ and in my GitHub repository.
IDE
I'm using vim with CTags but you can use gEdit with the Shortcut plugin if you need a simpler text editor.
I'm compiling newlib for a bespoke PowerPC platform with no OS. Reading information on the net I realise I need to implement stub functions in a <newplatform> subdirectory of libgloss.
My confusion is about how this is going to be picked up when I compile newlib. Is it the last part of the --target argument to configure, e.g. powerpc-ibm-<newplatform>?
If this is the case, then I guess I should use the same --target when compiling binutils and gcc?
Thank you
I ported newlib and GCC myself too, and I remember I didn't have to do much to make newlib work (porting GCC, gas and libbfd was most of the work).
I just had to tweak some files related to floating-point numbers, turn off some POSIX/other-standard flags so that it would not use some of the more sophisticated functions, and write support code for longjmp/setjmp that loads and stores register state in the jump buffers. But you certainly have to tell it the target using --target so it uses the right machine sub-directory and so on. I remember I had to add a small amount of code to configure.sub to make it recognise my target and print out the complete configuration triple (cpu-manufacturer-os or similar). I also found I had to edit a file called configure.host, which sets some options for your target (for example, whether the operating system handles signals raised by raise, or whether newlib itself should simulate handling them).
I used this blog by Anthony Green as a guideline, where he describes porting GCC, newlib and binutils. I think it's a great source when you have to do it yourself - a fun read anyway. It took a total of 2 months to get to the point of compiling and running some fun C programs that need only freestanding C (with dummy read/write functions that wrote to the simulator's terminal).
So I think the amount of work is certainly manageable. The part that nearly drove me crazy was libgloss's build scripts - I was certainly lost in all that autoconf magic :) Anyway, I wish you good luck! :)
Check out Porting Newlib.
Quote:
I decided that after an incredibly difficult week of trying to get newlib ported to my own OS that I would write a tutorial that outlines the requirements for porting newlib and how to actually do it. I'm assuming you can already load binaries from somewhere and that these binaries are compiled C code. I also assume you have a syscall interface setup already. Why wait? Let's get cracking!