RTOS with dynamic update of a thread - arm

I'm looking for an embedded RTOS that supports the dynamic upgrade/replacement of a thread. The goal is to let the user perform a network-based upgrade of the running application. It should basically work like a bootloader, but without replacing the entire application.
My target architecture is an ARM Cortex-M4 processor, so I am looking for a deeply embedded RTOS such as FreeRTOS.

I'm not sure that I understand your question correctly, but it seems that dynamic module support is what you need. Using this feature you could implement a partial-firmware OTA update. If so, you could look at NuttX, RIOT OS or Contiki; it looks like they all support the requested feature. If you are asking about live update, then you should probably consider Minix 3, but I'm not sure it fits your target device.

Related

Create a shared library for multiple applications for ARM cortex-m4

I'm trying to create a project that contains a drivers library and two separate applications (bootloader + app). I want to share the drivers library between the two apps in order to save space on the flash...
I saw this tutorial for IAR, but I must use Keil uVision 5 and I didn't find anything helpful online.
Can anyone guide me through this?
Thanks!
Splitting the code into three parts (bootloader, library, application) is most likely too much. I think it is better to combine the bootloader and the drivers in a single binary. When calling the application, the bootloader can provide the information necessary to use the drivers.
A word of caution, though: a solution like this is far trickier than simply compiling the drivers into the application. Depending on the driver functions, there may be no real benefit in flash usage; in particular, drivers that are not needed will still occupy flash instead of being optimized out.
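For what it's worth, here is one very rough sketch of how the sharing can be arranged; all names, the table address and the .drv_api section are made up for illustration. The bootloader exports a table of driver function pointers at a fixed address, and the application calls the drivers through that table instead of linking its own copy.

/* Hypothetical sketch: the bootloader exports its driver entry points
 * through a table placed at a known flash address, so the application
 * can call them without containing a second copy of the driver code. */

#include <stdint.h>
#include <stddef.h>

/* Shared header, compiled into both images. Names are illustrative. */
typedef struct {
    uint32_t version;                      /* lets the app reject an old table */
    void     (*uart_init)(uint32_t baud);
    size_t   (*uart_write)(const uint8_t *buf, size_t len);
} driver_api_t;

#define DRIVER_API_ADDR 0x08003F00u        /* assumed fixed flash location */

/* --- In the bootloader image --- */
extern void   bl_uart_init(uint32_t baud);
extern size_t bl_uart_write(const uint8_t *buf, size_t len);

/* Pinned to DRIVER_API_ADDR by the linker (here via a ".drv_api" section). */
__attribute__((section(".drv_api"), used))
const driver_api_t g_driver_api = {
    .version    = 1,
    .uart_init  = bl_uart_init,
    .uart_write = bl_uart_write,
};

/* --- In the application image --- */
static const driver_api_t *drv = (const driver_api_t *)DRIVER_API_ADDR;

void app_log(const char *msg, size_t len)
{
    drv->uart_write((const uint8_t *)msg, len);  /* call into bootloader code */
}

The exact mechanism for pinning the table to a fixed address (a scatter file in Keil, a linker script in GCC) depends on your toolchain, so treat the section attribute above as a placeholder for whichever placement method you use.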

Run executable on MINI2440 with NO OS

I have Fedora installed on my PC and I have a FriendlyARM Mini2440 board. I have successfully installed the Linux kernel and everything is working. Now I have an image processing program which I want to run on the board without an OS; the only process running on the board should be my program. Within that program, how can I access the on-board camera to take an image, and the serial port to send output to the PC?
You're talking about what is often called a bare-metal environment. Google can help you, for example here. In a bare-metal environment you have to have a good understanding of your hardware because you have to take care of a lot of things that the OS normally handles.
I've been working (off and on) on bare-metal support for my ELLCC cross development tool-chain. I have the ARM implementation pretty far along but there is still quite a bit of work to do. I have written about some of my experiences on my blog.
First off, you have to get your program started. You'll need to write some start-up code, usually in assembly, to handle the initialization of the processor as it comes out of reset (or is powered on). The start-up code then typically passes control to code written in C that ultimately directly or indirectly calls your main() function. Getting to main() is a huge step in your bare-metal adventure!
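As a very rough, generic sketch of what that C start-up step usually looks like (the symbol names are assumed to come from your linker script, and the board-specific assembly vectors and PLL/SDRAM setup are deliberately left out):

/* Simplified sketch of bare-metal C start-up: copy initialised data from
 * flash/ROM to RAM, zero the BSS, then call main(). The very first
 * instructions (exception vectors, stack pointer setup, PLL/SDRAM init)
 * are CPU- and board-specific and would normally live in a small assembly
 * file that branches here. The symbols below are assumed to be defined by
 * the linker script. */

#include <stdint.h>

extern uint32_t _sidata;   /* start of .data initialisers in ROM */
extern uint32_t _sdata;    /* start of .data in RAM              */
extern uint32_t _edata;    /* end of .data in RAM                */
extern uint32_t _sbss;     /* start of .bss                      */
extern uint32_t _ebss;     /* end of .bss                        */

extern int main(void);

void c_startup(void)
{
    uint32_t *src = &_sidata;
    uint32_t *dst = &_sdata;

    while (dst < &_edata)                 /* copy initialised globals to RAM */
        *dst++ = *src++;

    for (dst = &_sbss; dst < &_ebss; )    /* clear zero-initialised globals  */
        *dst++ = 0;

    main();                               /* hand over to the application    */

    for (;;) { }                          /* main() should never return here */
}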
Next, you need to decide how to support your hardware's I/O devices which in your case include the camera and serial port. How much of the standard C (or C++) library does your image processing require? You might need to add some support for functions like printf() or malloc() that normally need some kind of OS support. A simple "hello world" would be a good thing to try next.
ELLCC has examples of various levels of ARM bare-metal in the examples directory. They range from a simple main() up to and including MMU and TCP/IP support. The source for all of it can be browsed here.
I started writing this before I left for work this morning and didn't have time to finish. Both dwelch and Clifford had good suggestions. A bootloader might make your job a lot simpler and documentation on your hardware is crucial.
First you must realise that without an OS you are responsible for bringing the board up from reset, including configuring the PLL and SDRAM, and also for the driver code for every device on the board you wish to use. Doing that requires adequate documentation of the board and its devices.
It is possible that you can use the existing bootloader to configure the core and SDRAM, but that may not meet your requirement that the only process running on the board be your image processing program.
Additionally you will need some means of loading and bootstrapping; again the existing Linux bootstrapper may suit.
It is by no means straightforward and cannot really be described in detail here.

Running applications from freeRTOS

I am currently in the process of developing the OS for a consumer electronics product my company is developing. I have settled on freeRTOS as the backbone for our OS, and am working diligently to implement hardware functionality within the OS. However, I have run into an issue concerning running 3rd-party applications from within freeRTOS.
Originally I considered a task to be an application: basically you had "myapplication.c" and "myapplication.h" containing all of the application's functions, and the code would reside within the for(;;) loop inside the task (acting as a main while loop). Then, when the user decides to run that application, a function pointer is passed to a queue, which my app_launcher task then uses to create a new task from the 3rd-party task or application, roughly as sketched below.
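Here is a minimal sketch of that launcher; everything except the FreeRTOS API names is illustrative:

/* Sketch of the launcher described above: applications register a task
 * function, the UI posts a request to a queue, and app_launcher spawns a
 * task for it. Names other than the FreeRTOS API are illustrative. */

#include <stdint.h>
#include "FreeRTOS.h"
#include "task.h"
#include "queue.h"

typedef struct {
    TaskFunction_t entry;          /* the application's for(;;) task function */
    const char    *name;
    uint16_t       stack_words;
    UBaseType_t    priority;
} app_request_t;

static QueueHandle_t xAppQueue;

static void app_launcher_task(void *pvParameters)
{
    app_request_t req;

    for (;;) {
        if (xQueueReceive(xAppQueue, &req, portMAX_DELAY) == pdPASS) {
            /* Create a new task running the selected application. */
            xTaskCreate(req.entry, req.name, req.stack_words,
                        NULL, req.priority, NULL);
        }
    }
}

void app_launcher_init(void)
{
    xAppQueue = xQueueCreate(4, sizeof(app_request_t));
    xTaskCreate(app_launcher_task, "launcher", configMINIMAL_STACK_SIZE,
                NULL, tskIDLE_PRIORITY + 1, NULL);
}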
The problem with this approach, however, is that the OS will already be compiled and residing on the microcontroller, while applications will be installed and deleted as the user sees fit... So obviously applications need to be compiled and executable from the OS. On a standard Unix machine I would use something like fork to select the executable and give it its own process, but I cannot find similar functionality within freeRTOS. My other idea is to use a scripting language for app development, but again I'm not sure how to launch those applications...
So the question is, how do I get freeRTOS to run applications from 3rd party developers that aren't already baked into the OS?
FreeRTOS (and most RTOSes for that matter) does not work like a general-purpose operating system (GPOS); RTOSes are not generally designed to dynamically load and execute arbitrary user-supplied applications. In most cases you use an RTOS because you require hard real-time response, and the execution of third-party code could compromise that.
Most RTOSes (FreeRTOS included) are no more than static-link libraries, where your entire embedded application is statically linked with the RTOS and executes as a single multi-threaded program.
Again, many RTOSes (like FreeRTOS) are not operating systems in the same sense as a GPOS such as Linux. Typically the RTOS services available are the real-time scheduler, inter-process communication (IPC), thread synchronisation, and timers. Middleware such as a file system or a network stack, for example, is either an optional extension or must be integrated from third-party code.
One problem you will have with FreeRTOS trying to achieve your aim is that a "task" is analogous to a "thread" rather than a "process" in the sense of a GPOS process model. A task typically operates in the same memory space as other tasks with no memory protection between tasks. Tasks are not separate programs, but threads within a single application.
If your target has no MMU then memory protection may be limited in any case, but you may still want third-party applications to be conceptually independent of the OS. Without an MMU, running arbitrary third-party dynamically loaded code may be a problem for system integrity, safety and security, and even with an MMU a simple RTOS kernel such as FreeRTOS won't use it.
Operating systems with real-time scheduling that can load and run application code dynamically as separate processes include:
Windows Embedded Compact (formerly Windows CE)
QNX Neutrino
OS-9
Also, VxWorks has the ability to load partially linked object code and dynamically link it to the already loaded code. This is not the same as a process model, but is more akin to a dynamic-link library. What makes it worth mentioning in this context is that the VxWorks shell can invoke any function with external linkage by name, so you can load an object file implementing a function and then run that function. You could in principle implement the same functionality on FreeRTOS, but it is non-trivial: the shell is one thing, but dynamic loading and linking require the application symbol table to be resident on the target.
If you don't need hard real-time (or your real-time requirements are "soft") and your target has sufficient resources, you may be better served deploying Linux or uClinux which are increasingly used in embedded systems.
If the code your end-users need to run is tightly related to the purpose of your device, rather than "general purpose" in nature, then another possibility for allowing end-users to run code is to integrate a scripting-language interpreter such as Lua. In this case you would simply load the script from a file system and pass it to the script interpreter. For more general-purpose requirements, a Java VM may be a possibility.
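As a rough sketch of the Lua option (assuming Lua has been ported to and built for your target, and that the script text has already been read from the file system into RAM; only the standard Lua C API and FreeRTOS calls are used):

/* Sketch: run a user-supplied Lua script inside a FreeRTOS task.
 * Assumes Lua has been compiled for the target and that the script
 * text has already been loaded into a buffer from the file system. */

#include "FreeRTOS.h"
#include "task.h"

#include "lua.h"
#include "lualib.h"
#include "lauxlib.h"

void script_runner_task(void *pvParameters)
{
    const char *script = (const char *)pvParameters;   /* script text   */

    lua_State *L = luaL_newstate();   /* one interpreter per script     */
    if (L != NULL) {
        luaL_openlibs(L);             /* standard Lua libraries         */
        if (luaL_dostring(L, script) != 0) {
            /* report lua_tostring(L, -1) through your logging here     */
        }
        lua_close(L);
    }
    vTaskDelete(NULL);                /* task ends when the script ends */
}

Exposing device functionality to scripts is then a matter of registering your own C functions with lua_register(), which keeps the user-visible API under your control.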
By request, here is the workaround I found for my problem. The issue was launching other applications from freeRTOS. This was accomplished by utilizing the system() function in the newlib library. Thus, I can keep an application in flash until it's needed, then launch it using the newlib functions provided. This also allows me to launch programs dynamically, without hard-coding the code or name of the application; I just need to provide system() with a string pointing to the app's location in memory.

How to profile in the Linux kernel or use the perf_event*.[hc] framework?

I have noticed there are some profiling source code under arch/arm/kernel:
perf_event.c
perf_event_cpu.c
perf_event_v6.c
perf_event_v7.c
perf_event_xscale.c
I can't understand the hierarchy of those files and how to use them. Can I assume they always exist and use them in a kernel module? My kernel module runs on Cortex-A7 or Cortex-A15 cores.
There seem to be a lot of very useful things under the arch/arm/kernel/ directory, but no documentation about their capabilities. How come?
Perf_event does provide an API that can be used programmatically, but the documentation is sparse at best. Vince Weaver made the best resource for using the perf_event API here: http://web.eece.maine.edu/~vweaver/projects/perf_events/
He also provides some example code for recording counters.
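For reference, a minimal user-space sketch in the spirit of those examples, counting retired instructions around a region of code with the raw perf_event_open(2) syscall (error handling trimmed):

/* Minimal user-space sketch: count retired instructions for a code region
 * using the raw perf_event_open(2) syscall (there is no glibc wrapper). */

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
                            int cpu, int group_fd, unsigned long flags)
{
    return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.type           = PERF_TYPE_HARDWARE;
    attr.size           = sizeof(attr);
    attr.config         = PERF_COUNT_HW_INSTRUCTIONS;
    attr.disabled       = 1;
    attr.exclude_kernel = 1;

    int fd = perf_event_open(&attr, 0, -1, -1, 0);  /* this process, any CPU */
    if (fd < 0) {
        perror("perf_event_open");
        return 1;
    }

    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

    /* ... the code you want to measure ... */

    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

    uint64_t count = 0;
    read(fd, &count, sizeof(count));
    printf("instructions: %llu\n", (unsigned long long)count);

    close(fd);
    return 0;
}

From kernel code, the corresponding in-kernel interface is perf_event_create_kernel_counter().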
However your best bet is to use an API that wraps perf_event and makes it more accessible, like PAPI (http://icl.cs.utk.edu/papi/)
EDIT: Since you want to do this from a kernel module, PAPI will not be available. The perf_event API still is, however.
The functionality in the perf_* files is used by/provided for tools like oprofile and perf tools.
And no, they are not ALWAYS available, as there is a config option (CONFIG_PERF_EVENTS) to enable/disable performance measurements.
The functionality is not really meant to be used from another driver. I'm pretty sure that will "upset" any user of oprofile or perf.

Running MPI code on my laptop

I am new to the parallel computing world. Can you tell me whether it is possible to run C++ code that uses MPI routines on my dual-core laptop, or is there any simulator/emulator for doing that?
Most MPI implementations use shared memory for communication between ranks that are located on the same host. Nothing special is required in terms of setting up the laptop.
Using a dual-core laptop, you can run two ranks and the OS scheduler will tend to place them on separate cores. The WinXP scheduler tends to enforce some degree of "CPU binding", because by default jobs tend to be scheduled on the core where they last ran. However, most MPI implementations also allow an explicit "CPU binding" that forces a rank to be scheduled on one specific core. The syntax for this is non-standard and must be taken from the specific implementation's documentation.
You should try to use "the same" version and implementation of MPI on your laptop that the university computers are running. That will help to ensure that the MPI runtime flags are the same.
Most MPI implementations ship with some kind of "compiler wrapper" or at least a set of instructions for building an application that will include the MPI library. Either use those wrappers, or follow those instructions.
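For example, a minimal C program built with such a wrapper and launched with two ranks might look like this (the exact wrapper and launcher names, e.g. mpicc and mpirun, vary slightly between implementations):

/* Minimal MPI check: each rank prints its rank and the total rank count.
 * Build with the implementation's compiler wrapper, e.g.:
 *     mpicc hello_mpi.c -o hello_mpi
 * and run two ranks on the dual-core laptop with something like:
 *     mpirun -np 2 ./hello_mpi
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which rank am I?      */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many ranks total? */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}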
If you are interested in a simulator of MPI applications, you should probably check SMPI.
This open-source simulator (in which I'm involved) can run many MPI C/C++/Fortran applications unmodified, and forecast rather accurately the runtime of the application, provided that you have an accurate description of your hardware platform. Both online and offline studies are possible.
There are many other advantages to using a simulator to study MPI applications:
Reproducibility: several runs lead to exactly the same behavior unless you specify otherwise. You won't have any heisenbugs where adding some more tracing changes the application behavior;
What-if analysis: the ability to test on a platform that you don't have access to, or that is not built yet;
Clairvoyance: you can observe every part of the system, even at the network core.
For more information, see this presentation or this article.
The SMPI framework can even formally study the correctness of MPI applications through exhaustive testing, as shown in that presentation.
MPI messages are transported via TCP networking (there are other high-performance possibilities such as shared memory, but networking is the default). So it doesn't matter at all where the application runs, as long as the nodes can connect to each other. I guess that you want to test the application on your laptop, so the nodes all run locally and can easily connect to each other via the loopback network.
I am not quite sure that I understand your question, but a laptop is a computer just like any other. Provided you have set up your MPI libs correctly and set your paths, you can, of course, use MPI routines on your laptop.
As far as I am concerned, I use Debian Linux (http://www.debian.org) for all my parallel stuff. I have written a little article dealing with how to get MPI running on Debian machines. You may want to refer to it.
