Cross-platform (microcontroller-PC) algorithm development - c

I was asked to develop an algorithm for a network application in C. This project will be developed on Linux for the PC and then transferred to a more portable platform, something that will include a microcontroller. There are many microcontroller companies out there that provide very nice and large libraries for TCP/IP. This software will hold statistics on the network performance.
The whole idea of cross-platform (uC - PC) development seems rubbish to me, because eventually the code will have to be written in a more platform-specific way for the microcontroller, but I am not expert enough to judge anyway.
Is there any clever way of doing this, or has anyone done this before? My brainstorming has produced "Wrapper library" and "Matlab"... Any ideas?
Thx!

I do agree with you to some extent - the target system and the system on which you develop in the interim should be as close as possible (it is better if they match). Nevertheless, the idea of cross-platform development is to get you started with the firmware while the hardware is still being designed. Instead of doing it on Linux, what I would do is use an embedded OS simulator. Here are the steps:
- Step 1: Identify the OS for the embedded system and make sure that OS has a simulator that runs on a PC (Windows or Linux). Typical embedded OSes with simulators include VxWorks, μC/OS-II, QNX, uClinux... Agreeing on the OS means that the hardware design team knows the OS is the right match for the hardware being designed, and that there is consensus that the hardware + OS + application will meet the requirements of the system being developed.
- Step 2: Use this simulator to develop the application until the hardware being designed is brought up.
- Step 3: Once the first version of the hardware is ready and has been powered up, you can run your application with minimal changes - most likely no changes to the code, though changes to the linker/libraries being used are likely.
The idea of cross-platform development, done correctly, has immense advantages - it keeps your project's development activities from being serialized.
Given that you mention it is a TCP/IP application - check for Berkeley sockets support and use it. This API should not matter if you are using a simulator, and in the extreme case that you have to change the OS for whatever reason, your Berkeley-sockets-based application is likely to be more portable.

Just assume you can use the standard BSD socket API (the calls are socket(), bind(), accept(), connect(), recv(), send(), with various options). Any OS with a TCP/IP stack will support this standard API.
There may be some caveats that you will run into if your embedded system uses a run-to-completion type TCP/IP stack like uIP, but those will be easily solvable.
Also, stick to standard C file I/O (fopen, fread, fwrite, printf, etc.). But keep in mind your target may not have a filesystem.
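As a rough sketch of what coding against that API looks like, here is a minimal TCP client using exactly those calls (192.0.2.1:7 is a placeholder documentation address, and error handling is pared down):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <arpa/inet.h>

int main(void)
{
    /* create a TCP socket */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    /* placeholder peer address (192.0.2.1 is reserved for documentation) */
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_port = htons(7);
    inet_pton(AF_INET, "192.0.2.1", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("connect"); close(fd); return 1;
    }

    /* send a request and read whatever comes back */
    const char msg[] = "ping";
    send(fd, msg, sizeof msg - 1, 0);

    char buf[128];
    ssize_t n = recv(fd, buf, sizeof buf, 0);
    if (n > 0) printf("received %zd bytes\n", n);

    close(fd);
    return 0;
}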

If using a simulator is not an option, I would try to wrap the Linux functions in interfaces that match those of the embedded system, if possible. That way any extra bulk in the system ends up on the Linux development machine (which is not resource constrained). Various embedded OSes and TCP/IP stacks can have vastly different architectures, so the difficulty can range from nearly impossible to no work at all.
If it turns out that writing wrapper libraries to make Linux look like the embedded system is too difficult, then I suggest at least keeping the embedded OS in mind while writing the Linux version, so that at least some functions work on both systems.
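For illustration, a minimal sketch of that wrapper idea, assuming an invented embedded-style API (emb_net_send/emb_net_recv are made up for this example; the target build would link the vendor stack's implementation, while the Linux build shims them onto BSD sockets):

#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

/* Linux shims for the (hypothetical) embedded networking interface */
static int emb_net_send(int conn, const void *data, unsigned len)
{
    return (int)send(conn, data, len, 0);
}

static int emb_net_recv(int conn, void *data, unsigned len)
{
    return (int)recv(conn, data, len, 0);
}

int main(void)
{
    /* a socketpair stands in for a real network connection */
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) { perror("socketpair"); return 1; }

    /* application code calls only the embedded-style names */
    emb_net_send(sv[0], "stats", 5);
    char buf[16];
    int n = emb_net_recv(sv[1], buf, sizeof buf);
    printf("received %d bytes\n", n);

    close(sv[0]);
    close(sv[1]);
    return 0;
}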
If it doesn't take too long, writing a Linux version of at least part of the code may help you shake out a few flaws in the overall design, at the very least. At most it will allow you to test changes to the system more quickly, since loading code onto an embedded device often takes more time than you would like. It may also be easier to debug on your development machine.
Some embedded OSes will run on x86, and it would not surprise me if some of them have drivers that allow them to be run in virtual machines, so this may be an option as well.
Another thing to consider is the endianness and the word size of the development machine versus the embedded system. If these differ then you need to keep this in mind as you code. Getting this type of thing right when you originally write the code is easier than going back and trying to fix it later, in my opinion.
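One portable habit that sidesteps both issues (the helper names here are invented for the example) is to serialize anything that goes on the wire byte by byte with shifts and fixed-width types, so the result has the same big-endian layout on any host:

#include <stdint.h>
#include <stdio.h>

/* write a 32-bit value in big-endian (network) order, one byte at a time */
static void put_be32(uint8_t *out, uint32_t v)
{
    out[0] = (uint8_t)(v >> 24);
    out[1] = (uint8_t)(v >> 16);
    out[2] = (uint8_t)(v >> 8);
    out[3] = (uint8_t)v;
}

/* read it back the same way; no casting structs onto raw buffers */
static uint32_t get_be32(const uint8_t *in)
{
    return ((uint32_t)in[0] << 24) | ((uint32_t)in[1] << 16) |
           ((uint32_t)in[2] << 8)  |  (uint32_t)in[3];
}

int main(void)
{
    uint8_t wire[4];
    put_be32(wire, 0xDEADBEEF);
    printf("round trip: 0x%08X\n", (unsigned)get_be32(wire)); /* same on any host */
    return 0;
}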

Related

Why do we need an RTOS on ARM Cortex-M

If we can already execute C programs on Cortex-M-like microcontrollers, why do we even need to install an RTOS (or another operating system)?
What benefits can it provide if the microcontroller is intended to be multi-purpose?
No, you don't need an RTOS; you only need one if you need/want the features of the (particular) RTOS. You can program the microcontroller the way you/we always have, without one, if you prefer.
Typical things an RTOS might bring:
Memory management (who owns memory)
Interrupt handling support
Scheduling (pre-emptive or co-operative)
Usually several drivers in a BSP for your hardware/SOC
Debug tools
Some sort of shell
File systems
IPC (inter-process communication)
A tool suite
A build environment
Memory protection
Networking
Your application may or may not need these features, depending on your end goal. Some of them may be detrimental to your organization's workflow (like the tool suite and build environment). As a product matures, you may end up needing features you didn't account for.
However, a completely custom solution will probably have a smaller footprint. The race conditions involved in interrupt handling can be quite difficult to get right, and most RTOSes will probably give a better implementation than something custom that evolves over time. If you are very dedicated, a state machine with polling of devices can be more optimal (hard real time), but again it is difficult to get right.
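The classic shape of that polling alternative is a super loop driving a state machine. A minimal sketch, with the hardware-specific routines stubbed out so it compiles anywhere:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef enum { ST_IDLE, ST_SAMPLING, ST_REPORTING } state_t;

/* stand-ins for hardware-specific routines (stubbed for illustration) */
static bool device_ready(void)    { return true; }
static uint16_t read_sample(void) { return 42; }
static void process(uint16_t s)   { (void)s; }
static void report(void)          { puts("report"); }

int main(void)
{
    state_t state = ST_IDLE;
    for (int i = 0; i < 3; i++) {   /* real firmware loops forever: for (;;) */
        switch (state) {
        case ST_IDLE:      if (device_ready()) state = ST_SAMPLING; break;
        case ST_SAMPLING:  process(read_sample()); state = ST_REPORTING; break;
        case ST_REPORTING: report(); state = ST_IDLE; break;
        }
        /* other devices would be polled here; keeping every step short is
           the "difficult to get right" part when timing must be guaranteed */
    }
    return 0;
}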
If the RTOS is BSD (or otherwise permissively) licensed, it may be possible to reuse the driver code in your own custom infrastructure. At some point your code may become an 'RTOS' of sorts. There are many to choose from.
POSIX compliance is a common standard. If you confine your code to POSIX, you are portable across many different RTOSes/OSes. However, most RTOSes offer an API that is richer than POSIX; it is one way they differentiate themselves from each other. You may also be able to use more 3rd-party libraries if the RTOS is POSIX compliant.
An operating system provides a level of abstraction between the code written by an application programmer and the actual hardware the program runs on.
So you don't have to worry, as an application programmer, about the details of the hardware, as they are handled by drivers.
And thus you can compile the same program for many different hardware platforms, if they run the same (or a compatible) operating system.

Can anyone suggest an open-source real-time network stack?

I need to integrate a network stack into my embedded application. It should be a cross-platform real-time network stack written in C. The application is based on an ARM7 processor and the FreeRTOS kernel.
For example, I would use the TRECK (Treck Inc.) or Fusion (Unicoi Systems) real-time network stacks if they were free. I also know that there are, for instance, ports of FreeBSD's and OpenBSD's network stacks to the eCos operating system, but is it possible to obtain them as a stand-alone package so that they are relatively easy to integrate? Although I suspect they are not real-time.
Please, do not suggest me to change the OS for my application to one that has a built-in network stack. :-)
So far I've found at least the uIP and lwIP open-source network stacks, but they appear not to be real-time.

How do Video Game Emulators Work? [closed]

I am curious as to how emulators work. What are they written in? Do they have to emulate even the graphics? How do people get the games dumped as ROMs? Do they simulate the system's OS?
There are several emulation techniques. The first technique is called low-level emulation. The emulator in this case can be written in practically any language; however, because of the large amount of binary data manipulation, C and C++ lend themselves well to the task, though plenty of other languages are capable of it.
With low-level emulation the program simulates the exact hardware of the original system. For example, the original NES has well-defined hardware, known both from official documentation and from reverse engineering. We know exactly how its 6502-based CPU behaves, along with the graphics, sound chips, etc. With low-level emulation, the exact binary data of the original game is interpreted in software in exactly the same way that the original hardware interprets it. This includes the original machine code written for the 6502 instruction set, the graphics data, the I/O, everything. The graphics and sound hardware are emulated by translating operations intended for the original hardware into calls to modern graphics and sound APIs.
This technique is the most accurate and successful but is also the slowest and sometimes the most difficult to implement for complex machines.
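To make the fetch/decode/execute idea concrete, here is a toy interpreter core for an invented 3-instruction machine (not any real CPU's instruction set; a real 6502 core has the same shape, just with the full opcode table, flags and cycle counting):

#include <stdint.h>
#include <stdio.h>

enum { OP_LOAD = 0x01, OP_ADD = 0x02, OP_HALT = 0xFF };

int main(void)
{
    uint8_t mem[16] = { OP_LOAD, 7, OP_ADD, 5, OP_HALT }; /* tiny "ROM" */
    uint8_t pc = 0, acc = 0;

    for (;;) {
        uint8_t op = mem[pc++];            /* fetch */
        switch (op) {                      /* decode + execute */
        case OP_LOAD: acc = mem[pc++]; break;
        case OP_ADD:  acc += mem[pc++]; break;
        case OP_HALT: printf("acc = %u\n", (unsigned)acc); return 0;
        default:      return 1;            /* illegal opcode */
        }
    }
}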
The second method is called static recompilation. The original machine code for the original system is analyzed and then recompiled for a modern computer. This technique produces the fastest emulation but has a very low rate of success. Emulators employing this technique could, at best, only support a few demos and games. The reason is that the runtime environment the original software expects often changes in ways that are hard or impossible to know at compile time.
The final technique is called dynamic recompilation. In this technique the emulator analyzes the code and recompiles it as it is running. This allows the compiler to tailor the runtime environment to what the original software expects based on information available as the program is running.
Involved in most forms of recompilation techniques is something called high-level emulation. This is based on the observation that most code is simply code compiled to call operating system or C library routines. The code is recompiled for the host machine, and the calls to the original operating system and libraries, such as those for graphics and sound, are reimplemented natively instead of being emulated. For example, if there is a call to draw a triangle on the screen, the emulator can simply perform the operation directly without having to emulate the exact low-level communication of the draw command to the original graphics hardware. This is how almost all Nintendo 64 and PlayStation emulators work.
The original operating systems only sometimes need to be reimplemented. For example, the Nintendo 64 actually didn't have an operating system; each cartridge was its own OS, per se. The emulator, however, recognized common routines that all ROMs implemented and dynamically captured and reimplemented them. The PlayStation, however, had a proprietary BIOS used for setting up the basic hardware and reading the game from the CD. Emulators have to have a copy of this BIOS or attempt to reimplement its functionality.
We know that emulators using dynamic recompilation have been implemented inside, for example, the Xbox 360 in order to play original Xbox games. Such a task would be very difficult for outside developers, but simpler for Microsoft, which has all of the original and proprietary documentation and the manpower to create and optimize such an emulator. In this case, the entire original Xbox operating system does not need to be emulated; however, the calls that the original games make to the original operating system have to be translated into the native operating system. The technique for the Xbox One to emulate the Xbox 360 is similar, except that in order to have a greater degree of compatibility with Xbox 360 titles they chose to run the original Xbox 360 operating system inside their emulator.
Games from game cartridges are moved onto a computer through hardware specially designed for ROM dumping. ROMs on the older machines actually behave in a really simple manner: they have address input lines and data output lines. A device can be constructed using a microcontroller to dump these ROMs and then transfer them to a computer over serial, USB or some other method. Some ROMs can even be read through a computer's programmable parallel port, largely missing on modern PCs, though USB adapters for them exist.
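A hypothetical sketch of such a dumper's inner loop; the three helpers stand in for the microcontroller's GPIO and serial code, and are stubbed here so the sketch compiles on a PC:

#include <stdint.h>

/* stand-ins for hardware access (stubbed for illustration) */
static void set_address_bus(uint32_t addr) { (void)addr; /* drive address pins */ }
static uint8_t read_data_bus(void)         { return 0xFF; /* sample data pins */ }
static void uart_send_byte(uint8_t b)      { (void)b; /* ship byte to the PC */ }

int main(void)
{
    const uint32_t rom_size = 32u * 1024u;  /* e.g. a 32 KiB PRG ROM */
    for (uint32_t addr = 0; addr < rom_size; addr++) {
        set_address_bus(addr);              /* select the byte */
        uart_send_byte(read_data_bus());    /* read it and forward it */
    }
    return 0;
}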
Because of the massive amounts of dynamic code generation, emulators that use recompilation techniques almost exclusively use C or C++, however any language capable of systems programming and low level code interfacing at run-time is capable of doing this.

Threading in C, cross platform

I am dealing with an existing project (in C) that currently runs on a single thread, and we would like it to run on multiple platforms AND with multiple threads. Hopefully there is a library for this because, IMHO, the Win32 API is like poking yourself in the eye repeatedly. I know about Boost.Thread for C++, but this must be C (and compilable with MinGW and gcc). Cygwin is not an option, sorry.
Try the OpenMP API; it's multi-platform and you can compile it with GCC.
Brief description from Wikipedia:
OpenMP (Open Multi-Processing) is an application programming interface (API) that supports multi-platform shared-memory multiprocessing programming in C, C++, and Fortran, on most platforms, processor architectures and operating systems, including Solaris, AIX, HP-UX, Linux, macOS, and Windows. It consists of a set of compiler directives, library routines, and environment variables that influence run-time behavior.
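A minimal sketch of OpenMP in C (compile with gcc -fopenmp; MinGW's gcc accepts the same flag):

#include <omp.h>
#include <stdio.h>

int main(void)
{
    enum { N = 1000000 };
    static double a[N];         /* static to keep it off the stack */
    double sum = 0.0;

    for (int i = 0; i < N; i++)
        a[i] = 1.0;

    /* the directive splits the loop across threads; the reduction
       clause safely combines each thread's partial sum */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %f (threads available: %d)\n", sum, omp_get_max_threads());
    return 0;
}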
I would use the POSIX thread API - pthreads. This article has some hints on implementing it on Windows, and a header-file-only download (BSD license):
http://locklessinc.com/articles/pthreads_on_windows/
Edit: I used the SourceForge pthreads-win32 project in the past for multi-platform threading and it worked really nicely. Things have moved on since then and the above link seems more up to date, though I haven't tried it. This answer assumes, of course, that pthreads are available on your non-Windows targets (for Mac/Linux I should think they are, probably even embedded).
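For reference, a minimal pthreads sketch (compile with gcc -pthread on Linux, or on Windows through one of the pthreads-for-Windows layers above):

#include <pthread.h>
#include <stdio.h>

/* thread entry point: receives a pointer-sized argument */
static void *worker(void *arg)
{
    int id = *(int *)arg;
    printf("worker %d running\n", id);
    return NULL;
}

int main(void)
{
    pthread_t t;
    int id = 1;

    if (pthread_create(&t, NULL, worker, &id) != 0) {
        fprintf(stderr, "pthread_create failed\n");
        return 1;
    }
    pthread_join(t, NULL);   /* wait for the worker to finish */
    return 0;
}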
Windows threading has sufficiently different functionality compared to that of Linux that perhaps you should consider two different implementations, at least if application performance could be an issue. On the other hand, simply introducing multi-threading may well make your app slower than it was before. Let's assume that performance is an issue and that multi-threading is the best option.
With Windows threads I'm specifically thinking of I/O Completion Ports (IOCPs) which allow implementing I/O-event driven threads that make the most efficient use of the hardware.
Many "classic" applications are constructed along one thread/one socket (/one user or similar) concept where the number of simultaneous sessions will be limited by the scheduler's ability to handle large numbers of threads (>1000). The IOCP concept allows limiting the number of threads to the number of cores in your system which means that the scheduler will have very little to do. The threads will only execute when the IOCP releases them after an I/O event has occurred. The thread services the IOC, (typically) initiates a new I/O and returns to wait at the IOCP for the next completion. Before releasing a thread the IOCP will also provide the context of the completion such that the thread will "know" what processing context the IOC belongs to.
The IOCP concept completely does away with polling, which is a great waster of resources, although "wait on multiple objects" polling is somewhat of an improvement. The last time I looked, Linux had nothing remotely like IOCPs, so a multi-threaded Linux application would be constructed quite differently from a Windows app built on IOCPs.
In really efficient IOCP apps there is a risk that so many I/Os (or rather, outputs) are queued to the I/O resource involved that the system runs out of non-paged memory to store them. Conversely, in really inefficient IOCP apps there is a risk that so many inputs are queued (waiting to be serviced) that the non-paged memory is exhausted when trying to buffer them temporarily.
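A hedged sketch of the worker-loop skeleton this produces (association of handles with the port and the overlapped I/O itself are elided, and error handling is minimal):

#include <windows.h>

/* each worker blocks on the port and services completions as they arrive */
static DWORD WINAPI worker(LPVOID arg)
{
    HANDLE iocp = (HANDLE)arg;
    DWORD bytes;
    ULONG_PTR key;      /* per-handle context, supplied at association time */
    OVERLAPPED *ov;     /* per-operation context */

    for (;;) {
        if (GetQueuedCompletionStatus(iocp, &bytes, &key, &ov, INFINITE)) {
            /* service the completed I/O, then typically issue the next one */
        } else if (ov == NULL) {
            break;      /* port closed; exit the worker */
        }
    }
    return 0;
}

int main(void)
{
    HANDLE iocp = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0);

    /* conventional sizing: one worker thread per CPU core */
    SYSTEM_INFO si;
    GetSystemInfo(&si);
    for (DWORD i = 0; i < si.dwNumberOfProcessors; i++)
        CreateThread(NULL, 0, worker, iocp, 0, NULL);

    /* ... associate sockets/files with iocp, start overlapped I/O ... */
    return 0;
}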
If someone needs a portable and lightweight solution for threading in C, take a look at the plibsys library. It provides thread management and synchronization, as well as other useful features like a portable socket implementation. All major operating systems (Windows, Linux, OS X) are supported, and various less popular ones are too (e.g. AIX, HP-UX, Solaris, QNX, IRIX, etc.). On every platform only the native calls are used to minimize overhead. The library is fully covered with unit tests which are run on a regular basis.
glib threads can be compiled cross-platform.
The "best"/"simplest"/... answer here is definitely pthreads. It's the native threading architecture on Unix/POSIX systems and works almost as well on Windows. No need to look any further.
Given that you are constrained to C, I have two suggestions:
1) I have seen a project (similar to yours) that had to run on Windows and Linux with threads. The way it was written was that it (the same codebase) used pthreads on Linux and Win32 threads on Windows. This was achieved by a conditional #ifdef statement wherever threads needed to be created, such as:
#ifdef WIN32
//use win32 threads
#else
//use pthreads
#endif
2) The second suggestion is to use OpenMP. Have you considered OpenMP at all?
Please let me know if I missed something or if you want more details. I am happy to help.
Best,
Krishna
From my experience, multithreading in C on Windows is heavily tied to the Win32 API. Other languages like C# and Java, supported by a framework, also tie into these core libraries while offering their own thread classes.
However, I did find an OpenThreads API project on SourceForge which might help you:
http://openthreads.sourceforge.net/
The API is modeled on the Java and POSIX thread standards.
I have not tried this myself as I currently do not have a need to support multiple platforms in my C/C++ projects.

input and output without a library in C

I'm writing a small kernel for my programs in C.
This is not (at the moment) an OS kernel; it's merely a way for me to keep track of input and output in programs without relying on an external source (i.e. stdio.h). You might ask why I'd ever want to do this; it's just so I know how this works, and so that I have more and more (the end goal is total) control over program flow.
I was wondering if anyone knows of tutorials on input and output in C (with inline asm?) without relying on any other code.
There is a lot of room between the bare metal and stdio. You have said you aren't writing an OS kernel, but not whether or not you are running under an OS.
Running directly on hardware without an OS, you will still want to encapsulate all of your I/O operations in a module, even if you don't formally define a device driver interface and framework for all of your I/O modules to follow. This is hugely architecture dependent, and makes you responsible for knowing all of the details of interaction with every I/O device you might ever use. For some devices, this can quickly become a huge development effort. That isn't a problem for embedded systems, but running on commercial hardware this way is neither easy nor recommended.
Running within an OS, you probably don't get (and shouldn't want to get) access to the actual hardware registers and interrupts. If you are developing a custom I/O device, the best practice is to make it conform to existing standards so that you need as little low level custom software for it as possible. This is why you see a lot of custom user interface gadgets connecting via USB and identifying themselves as HIDs (Human Interface Devices). As a HID, the existing USB drivers take care of the physical layer, and the OS-supplied HID driver takes care of the logical interface, providing a very simple high level access API to the application.
One of the operating system's key roles is to provide a consistent I/O API across all devices. Generally, that takes the form of open(), close(), read(), write(), and ioctl() functions (the names vary, but some form of at least the first four will always exist). The OS layer is quite raw, however. Typically, an OS call is forwarded without much processing to a device driver, which then forwards the data on to the device. Usually, the OS low level calls block the caller until they complete, and often they have restrictions on the sizes of the buffers that make sense. For instance, raw access to a disk device is usually required to be for an integral number of disk blocks at a time.
And don't forget about things like file systems and network protocols... all of which are made much more reliable and compatible by encapsulation within an operating system.
Even if it is acceptable to call read() and write() for single characters, that is usually not the best performance possible. Operating system calls are relatively expensive, and if you can read multiple characters in a single call, your performance can go way up.
That is the origin of the stdio library for C, and various other buffering libraries in other environments. The stdio library provides a buffering layer that isolates the C code from the block size of the underlying hardware. Even on an entirely home-grown operating system where you have full control over all the devices, something like C stdio will still be valuable.
Writing your own stdio replacement is a highly valuable exercise, even if you don't use it in production code, and is one I would recommend to anyone wanting to learn about what really goes on between printf() and scanf() and the terminal or files.
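As a taste of the exercise, here is a hypothetical minimal line-buffered output layer over the POSIX write() call (the function names are invented; real stdio adds buffering modes, formatting, error state, and much more):

#include <unistd.h>   /* write() */

#define BUF_SIZE 512

static char buf[BUF_SIZE];
static size_t used;

/* push everything buffered so far out in (ideally) one system call */
static void my_flush(void)
{
    size_t off = 0;
    while (off < used) {
        ssize_t n = write(STDOUT_FILENO, buf + off, used - off);
        if (n <= 0) break;              /* real code would handle EINTR etc. */
        off += (size_t)n;
    }
    used = 0;
}

/* buffer one character; flush when full or at end of line (like a tty) */
static void my_putchar(char c)
{
    buf[used++] = c;
    if (used == BUF_SIZE || c == '\n')
        my_flush();
}

int main(void)
{
    const char *msg = "hello, buffered world\n";
    for (const char *p = msg; *p; p++)
        my_putchar(*p);
    my_flush();
    return 0;
}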
One valuable resource is the book The Standard C Library by P.J. Plauger. In it, the author presents an implementation of the complete C runtime library specified in the ANSI standard. His discussion of the specific implementation choices he made is valuable and apropos to the context of this question, and the discussions of why some of the standard library features were specified is interesting as well.
This sort of thing is very architecture specific. To put it simply, your I/O devices will raise hardware interrupts to the CPU. The CPU will call the code associated with the interrupt which will deal with it appropriately; for an input device it will fetch the data that is available from the device, for an output device the interrupt usually means that the device is ready to send the next piece.
The old 8088/8086 CPU architecture is a nice simple place to start to get your head around this. Typically, the BIOS would be where the hardware interrupts would have been handled, but it was always possible to write your own. ;)

Resources