Writing an OS for the Motorola 68K processor. Can I emulate it? And can I test-drive OS development?

Next term, I'll need to write a basic operating system for the Motorola 68K processor as part of a course lab.
Is there a Linux-hosted emulator of a basic hardware setup with that processor, so my partners and I can debug more quickly on our own computers instead of physically restarting the board and so on?
Is it possible to apply test-driven development to OS development? The code will be mostly assembly and C. What will be the main difficulties in trying to test-drive this? Any advice on how to do it?

I would recommend developing an operating system for the classic Amiga computers, which used different versions of the 68000 processor. The Amiga is a complete, extremely well documented computer, so it makes a good exercise.
There is an emulator for it called UAE (and WinUAE) which is very accurate and can be configured with different kinds of processors (68000 - 68060) and other capabilities. Normally you would also need to acquire ROMs for it, but since you are developing the operating system yourself, this is not necessary.
The tools you will need are either Cygwin (for developing under Windows) or a Linux machine, plus cross compilers: both a C compiler and an assembler. Here is a template for creating a simple ROM which changes the screen color and flicks the power LED. It creates a file 'kick.rom', which UAE then looks for in the current directory.
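The original template is not reproduced here, but as a rough, hypothetical illustration of what its main loop does (the addresses are the documented Amiga registers: COLOR00 at 0xDFF180 and CIA-A PRA at 0xBFE001, whose bit 1 drives the power LED), the core of such a ROM might look like this in C:

/* Sketch only -- not the original template. The reset vectors, stack setup
   and ROM checksum that a real Kickstart image needs are omitted.          */
#include <stdint.h>

#define COLOR00  (*(volatile uint16_t *)0xDFF180)  /* background colour register     */
#define CIAA_PRA (*(volatile uint8_t  *)0xBFE001)  /* CIA-A port A; bit 1 = power LED */

void rom_main(void)
{
    uint16_t rgb = 0;
    for (;;) {
        COLOR00 = rgb++;                 /* cycle the screen colour        */
        CIAA_PRA ^= 0x02;                /* flick the power LED on and off */
        for (volatile long i = 0; i < 100000; i++)
            ;                            /* crude busy-wait delay          */
    }
}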
A reference for the 68000 instruction set can be found at the links below. Be aware that different assemblers may use slightly different syntax and directives.
If you need to demo the operating system on real hardware, there are modern Amiga clones sold on eBay and other places. Search for "Minimig".
Update:
Nowadays AROS also runs on UAE as well as physical Amigas.
Refs:
[UAE]
[WinUAE]
[Cygwin]
[Cross Compilers]
[68000 reference]

I would suggest QEMU for m68k emulation.
(The system emulator you want in QEMU is "ColdFire" - that's what Freescale calls the successor to the m68k architecture.)

You certainly can TDD this project. First, decouple all access to the hardware behind simple routine calls, e.g. getch() and printf(); then you can provide simple mocks that feed test input and check output. That lets you write well over 90% of the project on a PC using GCC, MSDev or Xcode. Once you have some confidence in the decoupling routines you will need very little access to the hardware, and only then to occasionally check that your mocks behave as you expect.
Keep to C until you find a particular bottle neck, and only then resort to assembler.
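A minimal sketch of that decoupling (hal_putc, console_write and the HOST_TEST flag are made-up names for illustration, not from the original answer):

#include <assert.h>
#include <string.h>

void hal_putc(char c);               /* on target: writes to the UART or screen */

void console_write(const char *s)    /* the logic under test                    */
{
    while (*s)
        hal_putc(*s++);
}

#ifdef HOST_TEST                      /* compiled only for the PC test build     */
static char captured[128];
static int  captured_len;

void hal_putc(char c)                 /* mock: record output instead of printing */
{
    if (captured_len < (int)sizeof captured - 1)
        captured[captured_len++] = c;
    captured[captured_len] = '\0';
}

int main(void)
{
    console_write("hello");
    assert(strcmp(captured, "hello") == 0);   /* verify output through the mock  */
    return 0;
}
#endif

Build it on the host with -DHOST_TEST; on the target, link the real hal_putc instead.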

There are a few newer projects built around hardware-simulated 68000 CPUs - new 68k-compatible Amiga systems: the C-One reconfigurable computer, the Minimig (Mini Amiga), the FPGA Arcade, and the Natami (Native Amiga); the latter two are still in development, with prototypes done.

The Easy68k http://www.easy68k.com simulator might help you.

The uClinux project started on an m68k board. They may have the tools you need...

Related

Is it possible to build C source code written for ARM to run on x86 platform?

I got some source code in plain C. It is built to run on ARM with a cross-compiler on Windows.
Now I want to do some white-box unit testing of the code. And I don't want to run the test on an ARM board because it may not be very efficient.
Since the C source code is instruction set independent, and I just want to verify the software logic at the C-level, I am wondering if it is possible to build the C source code to run on x86. It makes debugging and inspection much easier.
Or is there some proper way to do white-box testing of C code written for ARM?
Thanks!
BTW, I have read the thread: How does native android code written for ARM run on x86?
It seems not to be what I need.
ADD 1 - 10:42 PM 7/18/2021
The physical ARM hardware that the code targets may not be ready yet. So I want to verify the software logic at a very early phase. Based on John Bollinger's answer, I am thinking about another option: Just build the binary as usual for ARM. Then use QEMU to find a compatible ARM cpu to run the code. The code is assured not to touch any special hardware IO. So a compatible cpu should be enough to run all the code I think. If this is possible, I think I need to find a way to let QEMU load my binary on a piece of emulated bare-metal. And to get some output, I need to at least write a serial port driver to bridge my binary to the serial port.
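For what it's worth, on QEMU's aarch64 "virt" machine the minimal "serial port driver" can be as small as poking the PL011 data register; a sketch under that assumption (the 0x09000000 address comes from the virt board's memory map, and the startup code and linking needed to reach this function are omitted):

#include <stdint.h>

#define UART0_DR (*(volatile uint32_t *)0x09000000UL)  /* PL011 data register on QEMU virt */

void uart_puts(const char *s)
{
    while (*s)
        UART0_DR = (uint32_t)*s++;   /* each byte shows up on QEMU's -serial output */
}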
ADD 2 - 8:55 AM 7/19/2021
Some more background: the C code targets the ARMv8 ISA, and it manipulates some hardware IPs which are not ready yet. I am planning to create a software HAL for those IPs and verify the C code over the HAL. If the HAL is good enough, everything can be purely software, and I guess the only missing part is an ARMv8-compatible CPU, which I believe QEMU can provide.
ADD 3 - 11:30 PM 7/19/2021
Just found this link. It seems QEMU user-mode emulation can be leveraged to run ARM binaries directly on an x86 Linux host. Will try it and get back later.
ADD 4 - 11:42 AM 7/29/2021
And some useful links (a small weak-symbol sketch follows the list):
Override a function call in C
__attribute__((weak)) and static libraries
What are weak functions and what are their uses? I am using a stm32f429 micro controller
Why the weak symbol defined in the same .a file but different .o file is not used as fall back?
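Those links all revolve around the weak-symbol trick; a small, hypothetical sketch of how it lets a test build override a hardware-facing function (all names invented for illustration):

/* hal_ip.c -- weak default that would talk to the hardware IP on target       */
__attribute__((weak)) int hal_ip_read_status(void)
{
    return -1;                       /* hardware not available                  */
}

/* logic.c -- the code under test only knows the function name                  */
int hal_ip_read_status(void);
int ip_is_ready(void)
{
    return hal_ip_read_status() == 0x42;
}

/* test_main.c -- a strong definition here overrides the weak one at link time  */
#include <assert.h>
int ip_is_ready(void);
int hal_ip_read_status(void) { return 0x42; }   /* canned test value            */
int main(void)
{
    assert(ip_is_ready());
    return 0;
}

The static-library pitfalls discussed in the last two links are exactly about the cases where that override does not happen.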
Now I want to do some white-box unit testing of the code. And I don't want to run the test on an ARM board because it may not be very efficient.
What does efficiency have to do with it if you cannot be sure that your test results are representative of the real target platform?
Since the C source code is instruction set independent,
C programs vary widely in how portable they are. This tends to be less related to the CPU instruction set than to target-machine and implementation details such as data type sizes, endianness, memory size, and floating-point implementation, and to reliance on implementation-defined or undefined program behavior.
It is not at all safe to assume that, just because the program is written in C, it can be successfully built for a different target machine than the one it was developed for, or that if it is built for a different target its behavior there will be the same.
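A tiny illustration of the kind of implementation detail meant here (the output depends on which machine and compiler you build it with):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* 4 on 32-bit ARM and 64-bit Windows, 8 on 64-bit Linux x86_64 */
    printf("sizeof(long) = %zu\n", sizeof(long));

    /* byte order in memory differs between targets */
    uint32_t v = 0x01020304;
    unsigned char *p = (unsigned char *)&v;
    printf("first byte of 0x01020304 in memory: 0x%02x\n", p[0]);  /* 0x04 little-endian, 0x01 big-endian */
    return 0;
}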
I am wondering if it is possible to build the C source code to run on x86. It makes debugging and inspection much easier.
It is probably possible to build the program. There are several good C compilers for various x86 and x86_64 platforms, and if your C code conforms to one of the language specifications then those compilers should accept it. Whether the behavior of the result is representative of the behavior on ARM is a different question, however (see above).
It may nevertheless be a worthwhile exercise to port the program to another platform, such as x86 or x86_64 Windows. Such an exercise would be likely to unmask some bugs. But this would be a project in its own right, and I doubt that it would be worth the effort if there is no intention to run the program on the new platform other than for testing purposes.
Or is there some proper way to do white-box testing of C code written for ARM?
I don't know what proper means to you, but there is no substitute for testing on the target hardware that you ultimately want to support. You might find it useful to perform initial testing on emulated hardware, however, instead of on a physical ARM device.
If you were writing ARM code for a Windows desktop application, there would for the most part be no difference and the code would just compile and run. My guess is you are developing for some device that does a specific task.
I do this for a lot of my embedded ARM code. Typically the core algorithms work just fine when built on x86, but the whole application does not. The problems come in with the hardware other than the CPU. For example, I might be using an LCD display, some sensors, and FreeRTOS on the ARM project, but the code that runs on Windows has none of these. What I do is extract the important pieces of C/C++ code and write a test framework around them. In the real ARM code the device reads values from a sensor and does something with them. In the test code that runs on a desktop, the code reads from a data file with fake sensor values and writes its output to a data file that can be analyzed. This way I can have white-box tests for the most complicated code.
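A rough sketch of that pattern (process_sample, the HOST_TEST flag and the file names are all made up for illustration):

/* the extracted algorithm under test */
int process_sample(int raw)
{
    return (raw * 9) / 5 + 32;       /* e.g. convert a raw temperature reading */
}

#ifdef HOST_TEST                      /* desktop test harness                   */
#include <stdio.h>

int main(void)
{
    FILE *in  = fopen("sensor_input.txt", "r");   /* fake sensor values         */
    FILE *out = fopen("test_output.txt", "w");    /* results to analyze later   */
    int raw;
    while (in && out && fscanf(in, "%d", &raw) == 1)
        fprintf(out, "%d -> %d\n", raw, process_sample(raw));
    if (in)  fclose(in);
    if (out) fclose(out);
    return 0;
}
#endif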
May I ask, roughly what does this code do? An ARM processor with no peripherals would be kind of useless. Typically we use the processor to interact with some other hardware like a screen, some buttons, or Bluetooth. It's those interactions that are going to be the most problematic.

What are the conditions to make the embedded C code written for one processor to work on another processor (when architecture is same)?

I am reading a primer on embedded C programming (Barr & Massa, 2007). As a companion hardware board to run the examples, they recommend the Arcom VIPER-Lite. But I already have a BeagleBone Black (BBB) board and I don't want to buy a new one.
The two boards have the same architecture, namely ARM, but the BBB uses TI's AM3358BZCZ100 processor clocked at 1 GHz, whereas the VIPER-Lite uses Intel's PXA255 processor clocked at 200 MHz. The BBB board has more memory and basically more of everything.
My question is: can I follow and execute the embedded C code examples given in this book on my BBB board? Does embedded C code depend on the processor, the architecture, or something else? I understand that very specific examples addressing particular peripherals/drivers may not be portable from one platform to the next, but is all embedded code like this? I hope I am making sense.
The Intel XScale is not the same as the Cortex-A8 - the ARM architecture has been through a number of versions since then, and Intel implemented some proprietary features too. Moreover, ARM licensees are free to implement proprietary peripheral sets and subsets of the core architecture.
In particular for board bring-up the PLL and SDRAM controller will be entirely different between different vendor's devices and even between different generations of device from the same vendor.
If you are running code on an already implemented OS (the BeagleBone is delivered with Linux already installed), then you will not need to worry about board bring-up and peripheral support; but you will also miss out on learning a great deal about embedded systems (other than perhaps embedded systems that run pre-installed or vendor-supplied Linux distros, which are a small subset of all embedded systems).
Beyond board bring-up, the boards have entirely different peripheral sets, different on-board devices, and differing I/O at different addresses with different register sets - no code that directly accesses the I/O will work. Code accessing devices through a standard Linux device driver interface may well work, because an abstraction to a common interface is provided by the OS and the board vendor's or third-party device drivers.
If you are not running the code on Linux - or are implementing low-level device drivers - then the programming environment in terms of memory map, MMU, PLL, I/O control, peripherals, and even instruction set will be different, and any code will require adaptation; you will need to get familiar with the corresponding data sheets or reference manuals as well as the ARM technical reference.
So the answer is that it depends largely on where you are starting from; bare-metal or Linux.
There are resources related to "bare-metal" development on BeagleBone Black in particular TI's own bare-metal StarterWare library.
The concepts you learn from most good textbooks can be applied to any microprocessor or microcontroller at a very high level. But if you want to learn embedded systems programming using the BeagleBone Black, I suggest the following YouTube links from Prof. Derek Molloy, who does a fantastic job of teaching embedded systems programming using the BBB. Here are a few links to get you started.
The Beaglebone - Unboxing, Introduction Tutorial and First Example
Beaglebone: C/C++ Programming Introduction for ARM Embedded Linux Development using Eclipse CDT
Beaglebone: GPIO Programming on ARM Embedded Linux
One problem you might want to be aware of is that the videos were based on the Angstrom distribution; the current BBB ships with a Debian distribution.
Also, if you want to learn bare-metal embedded systems programming, you might want to check out
Embedded Systems - Shape The World
You might also want to take a look at the following link for more material.
Beginning with programming microcontrollers
The XScale, although derived from the ARM instruction set, is not ARM in the sense you want to use it. For some reason its native mode is big-endian, whereas normal ARM native mode is little-endian. More importantly, the core processor differences are not insignificant, but they are not the bulk of the porting effort: most if not all of the peripherals differ between those two chips, so most of the code would need a rewrite - unless it is a purely portable C program that runs on, say, any Linux, in which case ARM, XScale, and x86 are completely irrelevant to the discussion. I suspect you are not in that situation. Even compiled as a generic command-line Linux app, you could still have endianness problems in this situation.
Basically you are saying "I have two Fords and I want to take the wheels off one and put them on the other", without understanding that one is a Ford Festiva and the other is, let's say, an F-350 pickup. Just because they have the same little Ford badge on them doesn't mean that all of their components are identical.
If you are desperate to re-use these binaries, you are better off finding or making a simulator for the prior platform; then you can run that on anything.

Do you have to build a new compiler for a new operating system?

I would like to build an OS sometime in the future and am now making some light sketches of how it would be. I have pretty much been coding in C compiled for the Windows environment (and a little Java). I would have to recompile any of my C programs to run them under Linux, so the binaries - the product of compilation - must be different for each operating system. If I design a totally new OS from scratch, for both hobby and academic purposes, without using the Linux kernel or any known OS code base, then as I understand it I cannot compile my C programs with GCC, since my OS will not be among its target systems. Hence the question in the title. Thanks in advance for any hints.
It depends. You could easily choose to re-use an existing compiler, such as the venerable GCC, and thus reap the benefits of an existing compiler. But there are some big provisos that must be cleared up.
Regardless of whether or not you choose to build a new compiler, the challenge will remain in porting a C library. You can technically use C without a standard library (which is what the Linux kernel, or any self-hosted example for that matter, has to do), but this is a ridiculous proposition for programs intended to run under an operating system: most systems impose memory restrictions and the like, meaning you cannot have carte blanche with memory. Thus, C library calls such as malloc are required.
Since any programs running under your kernel (99% of your OS in all likelihood) will need a set of functions to link against, porting a C library is your biggest task. The C library is a huge monolith, and writing your own would be rather silly, especially with many implementations already available, the best known being the GNU C library (glibc). So the question you really should be asking is: do you want to write your own version of libc? (The answer is almost always no, and most alternative implementations are for niche use cases.) Plus, if you want to make your OS POSIX-compliant, then you'll have to implement more functions, adding to the hassle.
Whether you write your own compiler for your OS is a minor detail compared to which C library will be included with it. You can always use your own compiler with an already-written implementation of the C library.
My advice to your rather opinion-based question: no. Port an existing compiler such as GCC or clang, and then use that. Plus, that has several advantages:
Compatibility with existing tools and toolchains
A familiar program (no need for your users to learn how to use a new compiler)
They're open source - and in spite of that, you'd be insane to go at it alone. Heck, even Apple integrated two already existing compilers - GCC and clang - into their toolchains rather than do it themselves, and they're a billion-dollar company.
Take a look at this page. It demonstrates how to port GCC to your OS using Newlib as your C library.
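To give a feel for what that involves, here is a rough sketch of the kind of low-level stubs a Newlib port asks the OS (or board support) to provide; the UART address is a placeholder, and the exact stub names and signatures should be checked against the Newlib documentation:

#define UART_TX (*(volatile unsigned char *)0x10000000)  /* hypothetical serial output register */

int _write(int file, char *ptr, int len)     /* newlib routes printf() output here            */
{
    (void)file;
    for (int i = 0; i < len; i++)
        UART_TX = ptr[i];
    return len;
}

void *_sbrk(int incr)                        /* newlib's malloc() grows the heap through this  */
{
    extern char _end;                        /* end of static data, provided by the linker script */
    static char *heap;
    if (heap == 0)
        heap = &_end;
    char *prev = heap;
    heap += incr;
    return prev;
}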
No, you can just port an existing compiler. You can even choose an existing executable format, such as ELF, and use your standard GCC + GNU Binutils toolchain. You will need to port the standard library and C runtime, and you will need to write an ELF loader into your operating system.
I suspect the majority of the work will be in porting the C library.
A search turned up this page: Porting GCC to your OS
(1) No, you usually don't have to write your own compiler. Writing a good optimizing compiler is actually a big task, which I would rather avoid.
But in order to enable writing applications for your OS in some higher level language you will either need to provide
some (2.1) API emulation layer (so that code written and compiled for other OS can be run on your OS)
or you'll have to (2.2) port some existing compiler to your OS
or at least (2.3) make your OS an available target platform in an existing compiler
or some other option I don't know about
The choices are multiple, each with its own pros and cons.
Some examples (other than the obvious GCC already mentioned by #dietrich-epp, #sevenbits) to help you decide which way you want to follow:
(3.1) Free Pascal (see http://www.freepascal.org) compiler can be extended with another target platform
Free Pascal is a 32,64 and 16 bit professional Pascal compiler. It can target multiple processor architectures: Intel x86, AMD64/x86-64, PowerPC, PowerPC64, SPARC, and ARM. Supported operating systems include Linux, FreeBSD, Haiku, Mac OS X/iOS/Darwin, DOS, Win32, Win64, WinCE, OS/2, MorphOS, Nintendo GBA, Nintendo DS, and Nintendo Wii. Additionally, JVM, MIPS (big and little endian variants), i8086 and Motorola 68k architecture targets are available in the development versions
...
Source: http://www.freepascal.org
(3.2) The Inferno Operating System (see http://www.vitanuova.com/inferno) has its own application language (see Limbo) with OS-specific facilities, its own compiler, etc. Applications run in a virtual machine (see Dis).
Inferno® is a compact operating system designed for building distributed and networked systems on a wide variety of devices and platforms. With many advanced and unique features, Inferno puts an unrivalled set of tools into your hands...Inferno can run as a user application on top of an existing operating system or as a stand alone operating system...
Source: http://www.vitanuova.com/inferno
(3.3) Squeak (see http://en.wikipedia.org/wiki/Squeak) is a self-contained OS with graphics and everything. It uses Smalltalk-80 as the language. A compiler is included, and applications run in a virtual machine (see Cog VM). The VM can be emitted as portable C code and then ported to bare-metal hardware.
Squeak is a modern, open source, full-featured implementation of the powerful Smalltalk programming language and environment. Squeak is highly-portable, running on almost any platform you could name and you can really truly write once run anywhere. Squeak is the vehicle for a wide range of projects from multimedia applications and educational platforms to commercial web application development...
Source: http://www.squeak.org
(3.4) MenuetOS (see http://www.menuetos.net/) is a 64-bit OS written in assembly language. The Flat Assembler (see FASM), a compiler which can emit native binaries, was ported to the OS, including the OS API, and is included in the basic installation. Later on, a C library was also ported.
MenuetOS is an Operating System in development for the PC written entirely in 32/64 bit assembly language...supports 32/64 bit x86 assembly programming for smaller, faster and less resource hungry applications...Menuet isn't based on other operating system nor has it roots within UNIX or the POSIX standards. The design goal, since the first release in year 2000, has been to remove the extra layers between different parts of an OS, which normally complicate programming and create bugs...
Source: http://www.menuetos.net
(3.5) Google's Android OS (see Wikipedia: Android (operating system)) ported a Java virtual machine (see Dalvik, later replaced by the Android Runtime) and provided OS APIs for the Java programming language, reusing existing compilers and IDEs and just consuming the produced binaries.
Android Runtime (ART) is an application runtime environment used by the Android mobile operating system. ART replaces Dalvik, which is the process virtual machine originally used by Android, and performs transformation of the application's bytecode into native instructions that are later executed by the device's runtime environment...
Source: http://en.wikipedia.org/wiki/Android_Runtime
There are many more useful examples available. Whether you have to or not basically depends on the programming paradigm your new OS will introduce, why you want to build it, and how it will differ from existing ones.
Examples for no are: (3.1), (3.4), (3.5)
Examples for yes are: (3.2), (3.3)

How to start ARM programming in linux?

I was using PIC microcontrollers for my projects. Now I would like to move to ARM-based controllers, and I would like to start ARM development on Linux (using C). But I have no idea how to start: which compiler is best, what things I need to study - a lot of confusion. Can you help me with that? My projects usually include UART, I2C, LCD and such things. I am not using any RTOS.
Sorry for my bad English.
Once you put a heavyweight OS like Linux on a device, the level of abstraction from the hardware it provides makes it largely irrelevant what the chip is. If you want to learn something about ARM specifically, using Linux is a way of avoiding exactly that!
Moreover, the jump from PIC to ARM + Linux is huge. Linux does not get out of bed for less than 4 MB of RAM and considerably more non-volatile storage - and that is a bare minimum. ARM chips cover a broad spectrum, with low-end parts not even capable of supporting Linux. To make Linux worthwhile you need an ARM part with MMU support, which excludes a large range of ARM7 and Cortex-M parts.
There are plenty of smaller operating systems for ARM that will allow you to perform efficient (and hard real-time) scheduling and IPC with a very small footprint. They range from simple scheduling kernels such as FreeRTOS to more complete operating systems with standard device support and networking such as eCos. Even if you use a simple scheduler, there are plenty of libraries available to support networking, filesystems, USB, etc.
The answer to your question about compilers is almost certainly GCC - that is the compiler Linux is built with. You will need a cross-compiler to build the kernel itself, but if you do have an ARM platform with sufficient resources, once you have Linux running on it, your target can host a compiler natively.
If you truly want to use Linux on ARM against all my advice, then the lowest-cost, least-effort approach is perhaps to use a Raspberry Pi. It is an ARM11-based board that runs Linux out of the box, is increasingly widely supported, and can be overclocked to 900 MHz.
You can also try the BeagleBone development board. To start with, it has features like UART, I2C and others, and you can also have a go at developing device driver modules for its hardware.
ARM Linux compilers and build toolchains are provided by many vendors. Below are the options I know of:
1. ARM themselves, in the form of their product DS-5;
2. CodeSourcery, now acquired by Mentor Graphics. See some instructions to obtain and install the CodeSourcery toolchain for ARM Linux here;
3. To first start programming ARM (C, assembly) I find this Windows/Cygwin version of the ARM Linux toolchain very helpful (here). These are prebuilt executables which work under Cygwin (a POSIX shell layer) on Windows;
4. Another option would be to cross-compile the gcc/g++ toolchain on Linux for the ARM target of your choice. A web search will turn up information about how this is done, but it can be a more involved and long-winded process.
enjoy ARM'ing.
First, ask yourself whether you really need to program in assembly language; most modern compilers are hard to beat when it comes to generating optimized code.
If you decide you really need it, you can make life easier for yourself by using inline assembly and letting the compiler write the glue code for you, as shown in this Wikipedia article.
As for which compiler to use: for free compilers there are practically only two choices, GCC or Clang.
There is also a non-free toolchain from ARM which, when I last tried it about five years ago, produced roughly 30% faster code than GCC at the time. I have not used it since.
The latest version of this compiler can be found here
You can also write standalone assembly code in .s files; both GCC and Clang can compile a .s into a .o in the same way you would compile a .c or .cpp file.
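As a tiny illustration of GCC/Clang extended inline assembly (assuming a 32-bit ARM target; the function is made up and does nothing the compiler wouldn't do better by itself):

int add_asm(int a, int b)
{
    int res;
    __asm__ volatile ("add %0, %1, %2"
                      : "=r" (res)           /* output operand */
                      : "r" (a), "r" (b));   /* input operands */
    return res;
}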
Compile
If you are using an STM32-based microcontroller, you need to get CMSIS and the GNU arm-none-eabi-gcc package installed. Then you need to write your own makefile to pass your C code to the ARM GCC compiler.
Programming
For the programming step you need to install OpenOCD and configure it for your specific programmer. You can find a full description of how to do that on my blog
http://bijan.binaee.com/index.php/2016/04/14/how-to-program-cortex-m-under-gnulinux-arch/ and in my GitHub repository.
IDE
I'm using vim with CTags but you can use gEdit with the Shortcut plugin if you need a simpler text editor.

Writing cross-platform apps in C

What things should be kept most in mind when writing cross-platform applications in C? Targeted platforms: 32-bit Intel based PC, Mac, and Linux. I'm especially looking for the type of versatility that Jungle Disk has in their USB desktop edition ( http://www.jungledisk.com/desktop/download.aspx )
What are tips and "gotchas" for this type of development?
I maintained for a number of years an ANSI C networking library that was ported to close to 30 different OS's and compilers. The library didn't have any GUI components, which made it easier. We ended up abstracting out into dedicated source files any routine that was not consistent across platforms, and used #defines where appropriate in those source files. This kept the code that was adjusted per platform isolated away from the main business logic of the library. We also made extensive use of typedefs and our own dedicated types so that we could easily change them per platform if needed. This made the port to 64-bit platforms fairly easy.
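A stripped-down sketch of that approach (the module and function names are invented for illustration): one tiny header that the business logic uses, with one source file per platform behind it.

/* portable_time.h -- the system-independent interface the rest of the code uses */
#ifndef PORTABLE_TIME_H
#define PORTABLE_TIME_H
void portable_sleep_ms(unsigned ms);
#endif

/* portable_time_posix.c -- compiled only on Mac and Linux */
#include <time.h>
#include "portable_time.h"
void portable_sleep_ms(unsigned ms)
{
    struct timespec ts = { ms / 1000, (long)(ms % 1000) * 1000000L };
    nanosleep(&ts, NULL);
}

/* portable_time_win32.c -- compiled only on Windows */
#include <windows.h>
#include "portable_time.h"
void portable_sleep_ms(unsigned ms)
{
    Sleep(ms);
}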
If you are looking to have GUI components, I would suggest looking at GUI toolkits such as WxWindows or Qt (which are both C++ libraries).
Try to avoid platform-dependent #ifdefs, as they tend to grow exponentially when you add new platforms. Instead, try to organize your source files as a tree with platform-independent code at the root, and platform-dependent code on the "leaves". There is a nice book on the subject, Multi-Platform Code Management. Sample code in it may look obsolete, but ideas described in the book are still brilliantly vital.
Further to Kyle's answer, I would strongly recommend against trying to use the Posix subsystem in Windows. It's implemented to an absolute bare minimum level such that Microsoft can claim "Posix support" on a feature sheet tick box. Perhaps somebody out there actually uses it, but I've never encountered it in real life.
One can certainly write cross-platform C code, you just have to be aware of the differences between platforms, and test, test, test. Unit tests and a CI (continuous integration) solution will go a long way toward making sure your program works across all your target platforms.
A good approach is to isolate the system-dependent stuff in one or a few modules at most. Provide a system-independent interface from that module. Then build everything else on top of that module, so it doesn't depend on the system you're compiling for.
XVT has a cross-platform GUI C API which has been mature for 15+ years and sits on top of the native windowing toolkits. See www.xvt.com.
They support at least Linux, Windows, and Mac.
Try to write as much as you can with POSIX. Mac and Linux support POSIX natively and Windows has a system that can run it (as far as I know - I've never actually used it). If your app is graphical, both Mac and Linux support X11 libraries (Linux natively, Mac through X11.app) and there are numerous ways of getting X11 apps to run on Windows.
However, if you're looking for true multi-platform deployment, you should probably switch to a language like Java or Python that's capable of running the same program on multiple systems with little or no change.
Edit: I just downloaded the application and looked at the files. It does appear to have binaries for all 3 platforms in one directory. If your concern is in how to write apps that can be moved from machine to machine without losing settings, you should probably write all your configuration to a file in the same directory as the executable and not touch the Windows registry or create any dot directories in the home folder of the user that's running the program on Linux or Mac. And as far as creating a cross-distribution Linux binary, 32-bit POSIX/X11 would probably be the safest bet. I'm not sure what JungleDisk uses as I'm currently on a Mac.
There do exist quite a few portable libraries. Just some examples I've worked with in the past:
1) glib and gtk+
2) libcurl
3) libapr
Those cover nearly every platform, so they are extremely useful tools.
POSIX is fine on Unices, but I doubt it's that great on Windows; besides, it gives you nothing for portable GUIs.
I also second the recommendation to separate code for different platforms into different modules/trees instead of ifdefs.
Also, I recommend checking beforehand what the differences between your platforms are and how you can abstract them. Some of it is OS-related (e.g. the annoying CR, CRLF, LF line endings in text files), some of it is hardware-related. For example, the previously mentioned POSIX compatibility doesn't stop you from writing
int c;
fread(&c, sizeof(int), 1, file);   /* the byte order of c now depends on who wrote the file */
but on different hardware platforms the internal memory layout can be completely different (endianness), forcing you to use conversion functions on some of the target platforms.
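One common way around that is to pin down the on-disk byte order and assemble the value byte by byte; a small sketch, assuming the file format is defined as little-endian:

#include <stdint.h>
#include <stdio.h>

/* read a 32-bit value stored little-endian in the file, regardless of host endianness */
uint32_t read_u32_le(FILE *file)
{
    unsigned char b[4];
    if (fread(b, 1, 4, file) != 4)
        return 0;                     /* short read: the sketch just returns 0 */
    return (uint32_t)b[0] | ((uint32_t)b[1] << 8) |
           ((uint32_t)b[2] << 16) | ((uint32_t)b[3] << 24);
}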
You can use NAppGUI for both console and desktop apps. The SDK uses ANSI-C and your code will work on Windows/macOS/Linux.
https://www.nappgui.com
It's free and open source.
