I want to run a simple hello-world app, written in C, on my AT91SAM9RL-EK.
Is it possible without an OS?
And (if it is) how do I have to compile it?
Right now I am trying to use CodeSourcery G++ Lite to create ARM code.
(In general, which kinds of programs can the board start without an OS:
assembler, ARM machine code?)
Sure, no problem running without an operating system; I do that kind of thing daily...
http://sam7stuff.blogspot.com/
Your programs are, at least at first, not going to resemble desktop applications. I would avoid any C libraries: no printfs or strcmps or things like that until you get a feel for it and find the right tools. No floating point either. Add some numbers, do some shifting, blink some LEDs.
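To make that concrete, here is the sort of first program that means; a minimal sketch, assuming a made-up LED register (the address is a placeholder, check the AT91SAM9RL datasheet for the real PIO registers, and you will still need a linker script and a startup stub to reach _start):

    #include <stdint.h>

    /* Placeholder address, NOT verified for the AT91SAM9RL: look up the
     * real PIO_SODR/PIO_CODR registers in the datasheet. */
    #define LED_REG (*(volatile uint32_t *)0xFFFFF400)

    void _start(void)
    {
        uint32_t x = 0;
        for (;;) {
            x = (x + 1) & 0xFFu;       /* add some numbers */
            LED_REG = 1u << (x >> 5);  /* do some shifting, poke the "LED" */
            for (volatile int i = 0; i < 100000; i++)
                ;                      /* crude delay loop */
        }
    }

Build it with something like arm-none-eabi-gcc -nostdlib -ffreestanding so no C library or startup files are pulled in.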
CodeSourcery Lite is probably the fastest way to get started; the GNU EABI one, I believe, is the one you want.
This WinARM site has a compiler and tons of non-OS projects for seemingly every ARM-based microcontroller.
http://www.siwawi.arubi.uni-kl.de/avr_projects/arm_projects/
Atmel is very good about documentation; no doubt they have example programs you can try on the eval board as well.
Emdebian is another cross compiler that is reasonably up to date and has binaries. Building GCC from scratch for cross-compiling is not bad at all. The C library is another story, though, and even the GCC support library for that matter. I find it easier to do without either library.
It is possible to get a C library working and run a great many kinds of programs; it depends on what you are looking to do. Ah, just looked at the specs: that is a pretty serious eval board, with plenty of power for an operating system should you choose to run one. You can certainly run programs that use the display as a user interface, read/write SD cards, use USB, basically everything on the board, without an OS, if you choose.
Related
I got some source code in plain C. It is built to run on ARM with a cross-compiler on Windows.
Now I want to do some white-box unit testing of the code. And I don't want to run the test on an ARM board because it may not be very efficient.
Since the C source code is instruction set independent, and I just want to verify the software logic at the C-level, I am wondering if it is possible to build the C source code to run on x86. It makes debugging and inspection much easier.
Or is there some proper way to do white-box testing of C code written for ARM?
Thanks!
BTW, I have read the thread: How does native android code written for ARM run on x86?
It seems not to be what I need.
ADD 1 - 10:42 PM 7/18/2021
The physical ARM hardware that the code targets may not be ready yet, so I want to verify the software logic at a very early phase. Based on John Bollinger's answer, I am thinking about another option: just build the binary as usual for ARM, then use QEMU to find a compatible ARM CPU to run the code. The code is guaranteed not to touch any special hardware I/O, so a compatible CPU should be enough to run all the code, I think. If this is possible, I need to find a way to let QEMU load my binary onto a piece of emulated bare metal. And to get some output, I need to at least write a serial port driver to bridge my binary to the serial port.
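If the QEMU route pans out, that serial "driver" can be tiny. A sketch for QEMU's virt machine, which maps a PL011 UART at 0x09000000 (that address is QEMU's memory map, not real hardware, and you still need a small startup stub and linker script):

    #include <stdint.h>

    /* QEMU -M virt places a PL011 UART at 0x09000000; writing the data
     * register is enough to get characters on the -nographic console. */
    #define UART0_DR (*(volatile uint32_t *)0x09000000u)

    static void uart_puts(const char *s)
    {
        while (*s)
            UART0_DR = (uint32_t)*s++;
    }

    void main(void)   /* entered from the startup stub */
    {
        uart_puts("hello from emulated bare metal\r\n");
        for (;;)
            ;
    }

Something like qemu-system-aarch64 -M virt -nographic -kernel test.elf should then print the message; the exact flags depend on your QEMU version.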
ADD 2 - 8:55 AM 7/19/2021
Some more background: the C code targets the ARMv8 ISA, and it manipulates some hardware IPs which are not ready yet. I am planning to create a software HAL for those IPs and verify the C code over the HAL. If the HAL is good enough, everything can be purely software, and I guess the only missing part is an ARMv8-compatible CPU, which I believe QEMU can provide.
ADD 3 - 11:30 PM 7/19/2021
Just found this link. It seems QEMU user-mode emulation can be leveraged to run ARM binaries directly on an x86 Linux host. Will try it and get back later.
ADD 4 - 11:42 AM 7/29/2021
And some useful links (a sketch of the weak-symbol stubbing pattern follows the list):
Override a function call in C
__attribute__((weak)) and static libraries
What are weak functions and what are their uses? I am using a stm32f429 micro controller
Why the weak symbol defined in the same .a file but different .o file is not used as fall back?
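The pattern those links describe, applied to this problem, looks roughly like this (hal_read_reg is an illustrative name, not from the real code):

    /* hal.c -- production default: weak definition that would touch the
     * real hardware IP. */
    #include <stdint.h>

    __attribute__((weak)) uint32_t hal_read_reg(uintptr_t addr)
    {
        return *(volatile uint32_t *)addr;   /* real register access */
    }

    /* test_hal.c -- linked into the test build only: a strong definition
     * overrides the weak one, so the logic under test sees canned values. */
    uint32_t hal_read_reg(uintptr_t addr)
    {
        (void)addr;
        return 0x42u;   /* fake IP register value for this test case */
    }

As the last two links warn, the weak fallback can silently fail to kick in when the weak definition sits in a static library rather than in an object file linked directly.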
Now I want to do some white-box unit testing of the code. And I don't want to run the test on an ARM board because it may not be very efficient.
What does efficiency have to do with it if you cannot be sure that your test results are representative of the real target platform?
Since the C source code is instruction set independent,
C programs vary widely in how portable they are. This tends to be less related to the CPU instruction set than to target machine and implementation details such as data type sizes, word endianness, memory size, and floating-point implementation, as well as implementation-defined and undefined program behavior.
It is not at all safe to assume that just because the program is written in C, it can be successfully built for a different target machine than the one it was developed for, or that if it is built for a different target, its behavior there will be the same.
I am wondering if it is possible to build the C source code to run on x86. It makes debugging and inspection much easier.
It is probably possible to build the program. There are several good C compilers for various x86 and x86_64 platforms, and if your C code conforms to one of the language specifications then those compilers should accept it. Whether the behavior of the result is representative of the behavior on ARM is a different question, however (see above).
It may nevertheless be a worthwhile exercise to port the program to another platform, such as x86 or x86_64 Windows. Such an exercise would be likely to unmask some bugs. But this would be a project in its own right, and I doubt that it would be worth the effort if there is no intention to run the program on the new platform other than for testing purposes.
Or is there some proper way to do white-box testing of C code written for ARM?
I don't know what proper means to you, but there is no substitute for testing on the target hardware that you ultimately want to support. You might find it useful to perform initial testing on emulated hardware, however, instead of on a physical ARM device.
If you were writing ARM code for a Windows desktop application, there would be no difference for the most part, and the code would just compile and run. My guess is you are developing for some device that does some specific task.
I do this for a lot of my embedded ARM code. Typically the core algorithms work just fine when built for x86, but the whole application does not. The problems come in with the hardware other than the CPU. For example, I might be using an LCD display, some sensors, and FreeRTOS on the ARM project, but the code that runs on Windows has none of these. What I do is extract the important pieces of C/C++ code and write a test framework around them. In the real ARM code the device reads values from a sensor and does something with them. In the test code that runs on a desktop, the code reads from a data file with fake sensor values and writes its output to a data file that can be analyzed, as in the sketch below. This way I can have white-box tests for the most complicated code.
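Stripped down, that desktop harness is little more than this (process_sample and the file names are made up for illustration):

    #include <stdio.h>

    /* Stand-in for the extracted ARM-side logic under test; in the real
     * harness this is the production code compiled for x86. */
    static int process_sample(int raw)
    {
        return raw * 2;
    }

    int main(void)
    {
        FILE *in  = fopen("sensor_input.txt", "r");
        FILE *out = fopen("results.txt", "w");
        int raw;

        if (!in || !out)
            return 1;

        /* Replay fake sensor readings through the real algorithm and
         * log the outputs for offline analysis. */
        while (fscanf(in, "%d", &raw) == 1)
            fprintf(out, "%d\n", process_sample(raw));

        fclose(in);
        fclose(out);
        return 0;
    }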
May I ask, roughly what does this code do? An ARM processor with no peripherals would be kind of useless. Typically we use the processor to interact with some other hardware like a screen, some buttons, or Bluetooth. It's those interactions that are going to be the most problematic.
I was using PIC microcontrollers for my projects. Now I would like to move to ARM-based controllers, and I would like to start with ARM on Linux (using C). But I have no idea how to start with Linux: which compiler is best, what I need to study, and so on; there is a lot of confusion. Can you help me with that? My projects usually involve UART, I2C, LCD, and such things. I am not using any RTOS.
Sorry for my bad English
Once you put a heavyweight OS like Linux on a device, the level of abstraction from the hardware it provides makes it largely irrelevant what the chip is. If you want to learn something about ARM specifically, using Linux is a way of avoiding exactly that!
Moreover, the jump from PIC to ARM + Linux is huge. Linux does not get out of bed for less than 4 MB of RAM and considerably more non-volatile storage, and that is a bare minimum. ARM chips cover a broad spectrum, with low-end parts not even capable of supporting Linux. To make Linux worthwhile you need an ARM part with MMU support, which excludes a large range of ARM7 and Cortex-M parts.
There are plenty of smaller operating systems for ARM that will allow you to perform efficient (and hard real-time) scheduling and IPC with a very small footprint. They range from simple scheduling kernels such as FreeRTOS to more complete operating systems with standard device support and networking such as eCos. Even if you use a simple scheduler, there are plenty of libraries available to support networking, filesystems, USB, etc.
The answer to your question about compilers is almost certainly GCC; that is the compiler Linux is built with. You will need a cross-compiler to build the kernel itself, but if you have an ARM platform with sufficient resources, once you have Linux running on it, your target can host a compiler natively.
If you truly want to use Linux on ARM against all my advice, then the lowest-cost, least-effort approach is perhaps to use a Raspberry Pi. It is an ARM11-based board that runs Linux out of the box, is increasingly widely supported, and can be overclocked to 900 MHz.
You can also try the BeagleBone development board. To start with, it has features like UART and I2C, and you can also try developing device driver modules for its hardware.
ARM Linux compilers and build toolchains are provided by many vendors. Below are the options I know of:
1. ARM themselves, in the form of their product DS-5.
2. CodeSourcery, now acquired by Mentor Graphics. See some instructions to obtain and install the CodeSourcery toolchain for ARM Linux here.
3. To first start programming for ARM (C, assembly), I find this Windows/Cygwin version of the ARM Linux toolchain very helpful: here. These are prebuilt executables which work under Cygwin (a POSIX shell layer) on Windows.
4. Another option would be to cross-compile the gcc/g++ toolchain on Linux for the ARM target of your choice. The web has plenty of information on how this is done, but it could be a slightly more involved and long-winded process.
enjoy ARM'ing.
First, you should ask yourself if you really need to program in assembly language; most modern compilers are hard to beat when it comes to generating optimized code.
Then, if you decide you really need it, you can make life easier for yourself by using the inline assembler and letting the compiler write the glue code for you, as shown in this Wikipedia article and in the sketch below.
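For instance, with GCC's extended inline assembler on 32-bit ARM, the compiler allocates the registers and writes the surrounding glue; a minimal sketch:

    /* Adds two ints with an explicit ADD instruction; GCC picks the
     * registers via the "r" constraints. */
    static inline int add_asm(int a, int b)
    {
        int result;
        __asm__("add %0, %1, %2"
                : "=r"(result)        /* output operand */
                : "r"(a), "r"(b));    /* input operands */
        return result;
    }

    int main(void)
    {
        return add_asm(2, 3);   /* exit status 5 */
    }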
Then there is the compiler to use: for free compilers there are practically only two choices, GCC or Clang.
There is also a non-free toolchain from ARM which, when I last tried it five years ago, produced about 30% faster code than GCC at the time. I have not used it since.
The latest version of this compiler can be found here
You can also write standalone assembler code in .s files; both GCC and Clang can compile a .s file into a .o the same way you would compile a .c or .cpp file.
Compile
If you are using an STM32-based microcontroller, you need to get CMSIS and the GNU arm-none-eabi-gcc package installed. Then you need to write your own makefile to pass your C sources to the ARM GCC compiler; a rough sketch of what that invocation looks like follows.
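The makefile ultimately boils down to something along these lines; every flag below is an assumption to adapt for your exact part, linker script, and CMSIS paths:

    /* main.c -- trivial body; the interesting part is the build, e.g.:
     *
     *   arm-none-eabi-gcc -mcpu=cortex-m4 -mthumb -O2 -nostartfiles \
     *       -T stm32.ld startup.c main.c -o firmware.elf
     *   arm-none-eabi-objcopy -O binary firmware.elf firmware.bin
     *
     * stm32.ld and startup.c come from CMSIS / vendor templates. */
    int main(void)
    {
        for (;;)
            ;   /* application code goes here */
    }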
Programming
For the programming step you need to install OpenOCD and configure it for your specific programmer. You can find a full description of how to do that on my blog
http://bijan.binaee.com/index.php/2016/04/14/how-to-program-cortex-m-under-gnulinux-arch/ and in my GitHub repository.
IDE
I'm using Vim with ctags, but you can use gedit with the Shortcut plugin if you need a simpler text editor.
A previous relevant question from me is here: Reverse Engineering old paint programs
I have set up my base of operations here: http://animatorpro.org
Wiki coming soon.
Okay, so now I have a 300,000-line legacy MS-DOS codebase. It's sort of a "be careful what you wish for" situation. I am not an experienced C programmer. I'm not entirely inexperienced either, but for all intents and purposes I'm a noob to the language, and in particular to the intricacies of its libraries. I am especially ignorant of the vagaries of the differences between C programs written specifically for MS-DOS and programs that are cross-platform. However, I have been studying this code base for over a year now, and this is what I know about Animator Pro:
Compilers and tools used:
Watcom C compiler
tcmake (make program from Turbo C)
386ASM, a specialised assembler for the Phar Lap DOS extender
and of course, the Phar Lap DOS extender itself
a selection of obscure DOS utilities
Much of the compilation seems to be driven by batch files. Though I have obtained copies of all these tools, I have not yet succeeded in compiling it (though I have compiled its older brother, Autodesk Animator Original).
It's got a plugin system, based on REX, that replicated DLLs before DLLs were available. The plugin system handles:
Video drivers (with a plethora of included VESA drivers)
Input drivers (including Wacom tablets and keyboards)
Drawing tools
Inks (like Photoshop's filters, or blending modes)
Scripting add-ons (essentially compiled scripts)
File formats
It's got its own script interpreter named POCO, based on the C language. The scripting language has enough power to do virtually all the things the plugin system can do, just more slowly.
Given this information, this is my development plan. Please criticise it. The source code is available at the link above, so if you are so inclined, you can easily assess the situation yourself.
1. Compile with its original tools.
2. Switch to using DJGPP, and make the necessary changes to get it to compile with that, plus the original assembler.
3. Include the Allegro.cc "game" library, and switch over as much functionality to that library as possible, perhaps by simply writing new video and input drivers that use the Allegro API. I'm thinking Allegro rather than SDL because there is a DOS version of Allegro, and, fascinatingly, one of its core functions is the ability to play Animator Pro's native FLIC format.
4. Hopefully after 3, I will have eliminated most or all of the assembler in the project. I say hopefully because it's in an obscure dialect that doesn't assemble in any modern free assembler without significant modification; I have tried them all. Whatever is left gets converted to assemble in NASM, or to C code if I can work out what the assembler actually does.
5. Switch the DOS extender from Phar Lap to HX DOS (http://www.japheth.de/HX.html), which promises to replicate as much of the Win32 API as possible. Then make all the necessary code changes for that to work.
6. Switch to the Win32 version of Allegro.cc, assuming that the Win32 version can run on top of HX DOS. Make any further necessary changes.
7. Modify the plugin system to use some kind of standard cross-platform plugin library. What this would be, I have no idea; maybe you can offer some suggestions? I talked to the developer who originally wrote the plugin system, and he said some of the things it does aren't possible on modern OSes because of segmentation restrictions. I'm not sure what this means, but I'm guessing all the plugins will need to be rewritten almost from scratch.
8. With all the above magically done, we can try to make it run on Windows, OS X, and Linux, whilst dealing with other cross-platform niggles like long file names, and things I haven't thought of.
Anyone got a problem with any of this? Is Allegro a good choice? If not, why? What would you do about the plugin system? What would you do differently? Is this whole thing foolish, and should I just rewrite it from scratch, using the original as inspiration? (It would apparently take the original developer "about a month" to do that.)
One thing I haven't covered above is the text/font system. I'm not sure what to do about that: Animator Pro has its own custom font format, but it is also able to use PostScript Type 1 fonts, and some other formats.
My biggest concern with your plan, in a nutshell: Your approach seems to be to attempt to keep the whole enormous thing working at all times, tweaking the environment ever-further away from DOS. During each tweak to the environment, that means you will have approximately a billion subtle assumptions that might have broken at once, none of which you necessarily understand yet. Untangling them all at once will be incredibly painful.
If I were doing the port, my approach would be to disable as much code as possible to get SOMETHING running in a modern environment, and bring the parts back online, one piece at a time. Write a simple test harness program that loads a display driver and draws some stuff, and compile it for DOS to make sure you understand the interface. Then write some C code that implements the same interface, but with Allegro (or SDL or SFML), and make that program work under Windows or Linux. When the output differs, you have a simple test case to work from.
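A sketch of that interface-swapping idea; the names are illustrative, not Animator Pro's actual driver interface:

    #include <stdint.h>
    #include <stdio.h>

    /* The legacy code draws through a small vtable; the port supplies a
     * second implementation of the same table. */
    typedef struct video_driver {
        void (*put_pixel)(int x, int y, uint8_t color);
    } video_driver;

    /* Stand-in "driver" that logs pixels so two implementations can be
     * diffed against each other. */
    static void log_put_pixel(int x, int y, uint8_t color)
    {
        printf("%d %d %u\n", x, y, color);
    }

    static const video_driver logging_driver = { log_put_pixel };

    /* The drawing code only ever sees the table... */
    static void draw_test_scene(const video_driver *drv)
    {
        for (int i = 0; i < 4; i++)
            drv->put_pixel(i, i, (uint8_t)(i * 16));
    }

    int main(void)
    {
        /* ...so swapping drivers means swapping one pointer. */
        draw_test_scene(&logging_driver);
        return 0;
    }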
Your entire job on this port is swapping out implementations of various interfaces and functions with completely new ones. This is a job that unit testing excels at. Don't write any new code without a test of some kind that runs on the old code under DOS! Make your potential problems as small and simple as you possibly can. Port assembly code instead of rewriting it only if you're reasonably confident that it will actually make your job easier (i.e., algorithmic stuff that compiles fine with few tweaks under NASM). Don't bite off a bigger piece than you can comfortably fit in your brain at once.
I, for one, look forward to seeing your progress! I think what you're attempting to do is great. Thanks for doing it.
Hmmm, I might approach it by writing an OpenGL video "driver" for it. Today's machines are fast enough, with tons of RAM, that you could do all the pixel-specific algorithms on the main CPU into a back buffer, and it would work. As the "generic" VGA driver just mapped the video buffer to a pointer, this would be a place to start. There was a zoom mode in the UI so you could look at the pixels on a high-res display.
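That is, hand the legacy code a plain memory buffer exactly as the old VGA driver handed it a pointer, and upload the buffer to the screen once per frame. A sketch with illustrative names; the actual GL upload is only indicated in a comment:

    #include <stdint.h>
    #include <stdlib.h>

    static uint8_t *backbuf;   /* 8-bit indexed pixels, like VGA */
    static int buf_w, buf_h;

    /* The legacy code keeps drawing through a pointer, as before. */
    uint8_t *video_open(int w, int h)
    {
        buf_w = w;
        buf_h = h;
        backbuf = calloc((size_t)w * h, 1);
        return backbuf;
    }

    void video_present(void)
    {
        /* A real build would convert backbuf through the palette and
         * upload it each frame, e.g. glTexSubImage2D plus a
         * full-screen quad, then swap buffers. */
    }

    int main(void)
    {
        uint8_t *screen = video_open(320, 200);
        if (screen)
            screen[0] = 15;   /* legacy-style pixel poke */
        video_present();
        free(backbuf);
        return 0;
    }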
It is often very difficult to take an existing non-trivial code base that wasn't written with portability in mind (and you mention a few such traits) and then try to make it portable. There will be a lot of problems along the way. It is probably a better idea to start from scratch and rewrite the code using the existing code as a reference only. If you start from scratch, you can leverage an existing portable UI solution such as Qt in your new project.
A CS course I'm taking online suggests students compile their source code and run tools like Valgrind on UNIX. I'm completely new to UNIX, Linux, their tools, and coding in C. I've made some attempts at installing FreeBSD 8.1 on VMware Player 3.1.3, and even managed to get VMware Tools running. But the FreeBSD documentation has led me down many dead ends in accomplishing common tasks, e.g. mounting an NFS share or a USB device. It turns out that the packages I need to make this happen aren't installed or configured, and I don't see any straight answer on how to install them.
So, if I'm using UNIX only as a tool to run gcc, g++, valgrind for this CS course, and these can be run on Linux instead, it seems like I can get the job done faster using Ubuntu Linux.
If I compile my C code on Linux, will it compile and run identically on UNIX? Or if not, what are the differences to look for?
Thanks
For the novice-level C programmer such as OP, the difference of environment is negligible. Go ahead with Linux.
I think for purposes of the course you could run your programs and tools on Linux, but I guess the reason your teacher wants you to use FreeBSD is so that you learn other things besides just coding up your problems.
The two should be effectively the same. The only major difference you might see would be due to different versions being used. I would check to see what versions of gcc, g++ and valgrind the teacher is having you use, and make sure that you have the same version running on your install of Linux.
You can also use MinGW or Cygwin. You mentioned VMware, so I'm guessing you're trying to get an environment up and running under Windows. Both allow you to use the compiler and some of the tools without a full install of a Linux-based system. For a CS course they would be more than enough.
The main differences to look for:
Compiling C / C++ is not machine independent. You need to have a small environment to compile on UNIX anyway if you need to submit compiled programs to your professor.
C / C++ is rather portable if you don't use anything that's non-portable. It's very hard to verify that you didn't use something that differs between the two machines, so you may wish to compile on UNIX to verify that you didn't let an unavailable library (or an OS-specific procedure, argument, behavior, bug, etc.) slip into your code; a few compile-time checks like the sketch after this list can catch some of these.
The vendor of make may differ between the two machines. This means that while the core of make will operate similarly, certain features might not be available in both. In reality, you probably won't use most of make's extended features, but in a worst-case scenario you might opt to maintain multiple Makefiles or limit yourself to a common subset of features.
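On the second point, a few compile-time tripwires can catch machine differences before they bite at run time; a sketch (the particular size assumptions are just examples, and static_assert via <assert.h> needs C11):

    #include <assert.h>
    #include <limits.h>

    /* Fail the build, rather than misbehave, if the new machine breaks
     * an assumption the code silently relies on. */
    static_assert(sizeof(int) == 4, "code assumes 32-bit int");
    static_assert(sizeof(void *) == 8, "code assumes 64-bit pointers");
    static_assert(CHAR_BIT == 8, "code assumes 8-bit bytes");

    int main(void) { return 0; }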
At the end of the day, it all boils down to what your professor will want. Odds are 95+% that you can do 100% of the work in Linux, but the prof's requirements or grading environment might be such that you will have to copy your code into a UNIX account to build the final "submission" executable. Considering that university UNIX accounts aren't nearly as portable as Linux on a laptop, the cost of the "final verification / porting" to the University computer is likely to be small compared to the convenience of working on your homework more hours than you can manage in a fixed lab.
I'm compiling newlib for a bespoke PowerPC platform with no OS. Reading information on the net I realise I need to implement stub functions in a <newplatform> subdirectory of libgloss.
My confusion is about how this is going to be picked up when I compile newlib. Is it the last part of the --target argument to configure, e.g. powerpc-ibm-<newplatform>?
If this is the case, then I guess I should use the same --target when compiling binutils and gcc?
Thank you
I ported newlib and GCC myself too, and I remember I didn't have to do much to make newlib work (porting GCC, gas, and libbfd was most of the work).
I just had to tweak some files relating to floating-point numbers, turn off some POSIX/other-standard flags so it did not use some of the more sophisticated functions, and write support code for longjmp / setjmp that loads and stores register state in the jump buffers. But you certainly have to tell it the target using --target so it uses the right machine subdirectory and so on. I remember I had to add a little code to configure.sub so it knew about my target and printed the complete configuration triple (cpu-manufacturer-os or similar). I also found I had to edit a file called configure.host, which sets some options for your target (for example, whether the operating system handles signals raised by raise, or whether newlib itself should simulate handling them).
I used this blog by Anthony Green as a guideline, where he describes porting GCC, newlib, and binutils. I think it's a great resource when you have to do it yourself, and a fun read anyway. It took a total of two months to compile and run some fun C programs that only need freestanding C (with dummy read/write functions that wrote to the simulator's terminal).
So I think the amount of work is certainly manageable. The thing that nearly drove me crazy was libgloss's build scripts; I was certainly lost in that autoconf magic :) Anyway, I wish you good luck! :)
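For reference, the libgloss-style stubs the question asks about tend to look like this; a minimal sketch, where uart_putc() and the _end linker symbol are assumptions about your platform:

    #include <sys/stat.h>

    extern void uart_putc(char c);   /* your platform's TX routine */
    extern char _end;                /* end of .bss, from the linker script */

    /* newlib routes printf() through _write()... */
    int _write(int file, char *ptr, int len)
    {
        (void)file;
        for (int i = 0; i < len; i++)
            uart_putc(ptr[i]);
        return len;
    }

    /* ...and malloc() through _sbrk(). */
    void *_sbrk(int incr)
    {
        static char *heap = &_end;
        char *prev = heap;
        heap += incr;
        return prev;
    }

    /* The rest can give bare-minimum answers on a console-only target. */
    int _close(int file) { (void)file; return -1; }
    int _fstat(int file, struct stat *st) { (void)file; st->st_mode = S_IFCHR; return 0; }
    int _isatty(int file) { (void)file; return 1; }
    int _lseek(int file, int off, int dir) { (void)file; (void)off; (void)dir; return 0; }
    int _read(int file, char *ptr, int len) { (void)file; (void)ptr; (void)len; return 0; }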
Check out Porting Newlib.
Quote:
I decided that after an incredibly difficult week of trying to get newlib ported to my own OS that I would write a tutorial that outlines the requirements for porting newlib and how to actually do it. I'm assuming you can already load binaries from somewhere and that these binaries are compiled C code. I also assume you have a syscall interface setup already. Why wait? Let's get cracking!