I am currently developing the OS for a consumer electronics product my company is building. I have settled on FreeRTOS as the backbone for our OS, and am working diligently to implement hardware functionality within the OS. However, I have run into an issue concerning running 3rd-party applications from within FreeRTOS.
Originally I considered a task to be an application, where basically you had "myapplication.c" and "myapplication.h" containing all your application's necessary functions, and the code would reside within the for(;;) loop within the task (acting as a main while loop). Then, when the user decides to run that application, a function pointer is passed to a queue, which my app_launcher task then uses to create a new task running the 3rd-party application.
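Roughly, the pattern looked like this (a minimal sketch; the names are illustrative, and the "application" here is still compiled into the firmware):

```c
#include "FreeRTOS.h"
#include "task.h"
#include "queue.h"

static QueueHandle_t xLaunchQueue;

/* A 3rd-party "application": just an ordinary FreeRTOS task function. */
static void vMyApplicationTask(void *pvParameters)
{
    for (;;) {
        /* application main loop */
        vTaskDelay(pdMS_TO_TICKS(100));
    }
}

/* app_launcher: receives function pointers and spawns them as tasks. */
static void vAppLauncherTask(void *pvParameters)
{
    TaskFunction_t pxApp;
    for (;;) {
        if (xQueueReceive(xLaunchQueue, &pxApp, portMAX_DELAY) == pdTRUE) {
            xTaskCreate(pxApp, "app", configMINIMAL_STACK_SIZE, NULL,
                        tskIDLE_PRIORITY + 1, NULL);
        }
    }
}

void vLauncherInit(void)
{
    xLaunchQueue = xQueueCreate(4, sizeof(TaskFunction_t));
    xTaskCreate(vAppLauncherTask, "launcher", configMINIMAL_STACK_SIZE, NULL,
                tskIDLE_PRIORITY + 2, NULL);
}

/* Elsewhere, e.g. in a UI handler: request that the application be started. */
void vRequestLaunch(void)
{
    TaskFunction_t pxApp = vMyApplicationTask;
    xQueueSend(xLaunchQueue, &pxApp, 0);
}
```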
The problem with this approach, however, is that the OS will already be compiled and reside on the microcontroller, while applications will be installed and deleted as the user sees fit... So obviously applications need to be compiled and executable from the OS. On a standard Unix machine, I would use something like fork to select the executable and give it its own process. However, I cannot find similar functionality within FreeRTOS. My other idea is adopting a scripting language for app development, but again I'm not sure how to launch those applications...
So the question is, how do I get freeRTOS to run applications from 3rd party developers that aren't already baked into the OS?
FreeRTOS (and most RTOSes, for that matter) does not work like a general-purpose operating system (GPOS); it is not designed to dynamically load and execute arbitrary user-supplied applications. In most cases you use an RTOS because you require hard real-time response, and the execution of third-party code could compromise that.
Most RTOSes (FreeRTOS included) are no more than static-link libraries, where your entire embedded application is statically linked with the RTOS and executes as a single multi-threaded program.
Again, many RTOSes (like FreeRTOS) are not operating systems in the same sense as a GPOS such as Linux. Typically the RTOS services available are the real-time scheduler, inter-process communication (IPC), thread synchronisation, and timers. Middleware such as a file system or a network stack, for example, is either an optional extension or must be integrated from third-party code.
One problem you will have with FreeRTOS trying to achieve your aim is that a "task" is analogous to a "thread" rather than a "process" in the sense of a GPOS process model. A task typically operates in the same memory space as other tasks with no memory protection between tasks. Tasks are not separate programs, but threads within a single application.
You may want third-party applications to be conceptually independent from the OS, but if your processor does not have an MMU, running arbitrary, dynamically loaded third-party code may be a problem for system integrity, safety, and security. Even with an MMU, a simple RTOS kernel such as FreeRTOS won't use it.
Operating systems with real-time scheduling that can load and run application code dynamically as separate processes include:
Windows Embedded Compact (formerly Windows CE)
QNX Neutrino
OS-9
Also, VxWorks has the ability to load partially linked object code and dynamically link it to the already loaded code. This is not the same as a process model, but is more akin to a dynamic-link library. What makes it worth mentioning in this context is that the VxWorks shell can invoke any function with external linkage by name. So you can load an object file implementing a function and then run that function. You could in principle implement the same functionality on FreeRTOS, but it is non-trivial: the shell is one thing, but dynamic loading and linking requires the application symbol table to be target-resident.
If you don't need hard real-time (or your real-time requirements are "soft") and your target has sufficient resources, you may be better served deploying Linux or uClinux which are increasingly used in embedded systems.
If the code your end-users need to run is tightly related to the purpose of your device rather than "general purpose" in nature, then another possibility for allowing end-users to run code is to integrate a scripting-language interpreter such as Lua. In this case you would simply load the script from a file system and pass it to the script interpreter. For more general-purpose requirements, a Java VM may be a possibility.
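For example, embedding Lua amounts to little more than the following (a minimal sketch, assuming the Lua sources have been ported to your target and a file system is available):

```c
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>

/* Load a user "application" script from the file system and run it. */
int run_user_app(const char *path)
{
    lua_State *L = luaL_newstate();   /* fresh interpreter instance */
    if (L == NULL)
        return -1;
    luaL_openlibs(L);                 /* standard Lua libraries */
    int rc = luaL_dofile(L, path);    /* load and execute the script */
    lua_close(L);
    return rc;                        /* 0 on success */
}
```

In a FreeRTOS system, a call like this would typically be wrapped in its own task so the interpreter runs at a controlled priority.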
By request, here is the workaround I found for my problem. The issue was launching other applications from FreeRTOS. This was accomplished by utilizing the system() function provided by the newlib library. Thus, I can place an application in flash until it's needed, then launch it using the newlib functions provided. This also allows me to launch programs dynamically, without hard-coding the code or name of the application; I just need to provide system() with a string pointing to the app's location in memory.
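For illustration, the underlying idea is roughly the following (a simplified sketch; the flash address and names are hypothetical, and it assumes the installed image is linked for, or position-independent at, its flash location):

```c
#include <stdint.h>
#include "FreeRTOS.h"
#include "task.h"

typedef void (*app_entry_t)(void *);

/* Hypothetical flash slot holding an installed application. On Cortex-M
   targets the low bit of a function address must be set (Thumb mode). */
#define APP_SLOT_ADDRESS 0x08040001u

void launch_app_from_flash(void)
{
    app_entry_t entry = (app_entry_t)(uintptr_t)APP_SLOT_ADDRESS;
    /* run the installed application as its own FreeRTOS task */
    xTaskCreate(entry, "user_app", configMINIMAL_STACK_SIZE * 4, NULL,
                tskIDLE_PRIORITY + 1, NULL);
}
```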
Related
I am curious about how statically linked C executables would work in different environments. Let's say we compile our C code to target x86 macOS, and we statically include everything it uses in the executable as well (printf, strlen). What really stops this executable from running on a Windows OS if we include every library it needs? I understand the file format could be different and break things, but other than that, would this technically be able to run?
I see where you're coming from; operating systems make us think, as programmers, that libraries are the be-all and end-all of programming, that a call to a library is all you need to make complex things happen and that everything is contained in them.
But the truth is, libraries mostly provide an abstraction layer. As an example, let's create a library called "hello_world.so" which prints "Hello World!" to the console. That library relies on stdio to handle the complex I/O stuff, but stdio itself depends on at least one other thing: the kernel (some specific targets work without a kernel, but those systems are outside the scope of this answer).
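For concreteness, the whole library from this example could be as small as the following (built with something like gcc -shared -fPIC -o hello_world.so hello_world.c):

```c
/* hello_world.c: the entire "library". It leans on stdio, which in
   turn ends in kernel calls. */
#include <stdio.h>

void hello_world(void)
{
    printf("Hello World!\n"); /* stdio buffers, then asks the kernel to write */
}
```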
In the desktop world, things can get really complicated. We have several hundred processes all running at once, even on an idle system, and all these apps need access to the hardware (possibly at once, too), so it was decided a controller was needed: some piece of software that would coordinate all the other software running on the same computer. This piece of software is usually called a kernel. On Windows it's the NT kernel, on macOS it's XNU, and on Linux it's... the Linux kernel!
On these systems, the biggest job of a library is to abstract the kernel, to make us believe printing text on a Linux or a Windows console works the exact same way, when actually it can be completely different! Libraries like stdio/time/etc. have different "implementations" but the same "interface": they look the same from the dev's point of view, but the way they achieve their goals can vary wildly. They can do conversions, calls to other hidden or non-hidden functions... All this is completely portable from one OS to the other, though. Things start to go south for your idea when kernel calls start to show up.
Kernel calls are the way a program can "talk" to the kernel. They can be used to do literally thousands of different things, but, for example, there's one (or several) to ask for memory (usually reached through malloc), one to print to the console, one to ask if a network packet arrived, one to ask to talk to your GPU... And these system calls are completely different from one kernel to the other, sometimes even between two versions of the same kernel!
These "kernel calls" are the only thing preventing you from running statically-compiled linux programs on Windows.
PS: Even though all the above is completely true and kernels can be as different from one another as they wish, due to the history of kernels and of computing in general, some kernels actually share the same interface (even though their implementations, as you guessed, can be nothing alike). The best example I know of is how many of the kernels around today are based on, or imitate, UNIX.
It means that (even though I have never tested it myself) I think you should be able to statically link a Linux app and use it on Linux, most BSDs, and possibly even macOS.
The binary and libraries are specific to the operating system.
The TL;DR is that the linking process translates function calls into addresses that point to the operating system's specific libraries. There are some differences, like alignment, that happen at compile time, but what is responsible for your x86 instructions not running under a different OS is the linker.
Your compiler produces x86 instructions that are ready to execute but incomplete. The linker will go into every function call and assign an address to that function in the executable file, even for functions of the standard library.
The linker creates the executable file following a file format, which has a header with information like size, metadata, and entry point. Windows and Unix have different specifications for executable files: Windows has PE and Unix has ELF, both for executables and libraries.
Through some hacks and non-trivial tricks it is possible to create an executable that can run on Windows and Unix (see αcτµαlly pδrταblε εxεcµταblε for how).
But even if you do all that, there is an obstacle that cannot be circumvented: the kernel. The kernel is, well, a kernel. It's the most important thing in an OS, and it provides a set of API calls giving basic, low-level access to computer resources, so functions like malloc are implemented using kernel-specific API calls: VirtualAlloc on Windows, and brk or mmap on POSIX systems.
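As a hedged sketch of that split, the same "give me memory" request maps to different kernel calls depending on the platform (real malloc implementations are, of course, far more elaborate):

```c
#include <stddef.h>

#ifdef _WIN32
#include <windows.h>
/* Windows: ask the NT kernel for committed, writable pages. */
static void *os_alloc(size_t size)
{
    return VirtualAlloc(NULL, size, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
}
#else
#include <sys/mman.h>
/* POSIX: ask the kernel for anonymous, writable pages. */
static void *os_alloc(size_t size)
{
    void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    return p == MAP_FAILED ? NULL : p;
}
#endif
```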
Main Answer
If your program does anything useful (print output, return a value, communicate on the network), it contains some form of system-call instruction. Each system-call instruction is a request to a particular operating system, and macOS system calls will not work on Windows and vice-versa.
The system-call instruction sends information to the operating system, including a number identifying which service is requested. The operating system then performs that service. When you build your program for macOS, it includes library routines that contain system-call instructions. If you execute those instructions on a Microsoft Windows system, Windows will not understand the macOS requests. It will interpret the information differently, and the program will not work.
So, in theory, there is nothing preventing you from writing your own program loader that reads an executable file intended for macOS (a Mach-O file rather than ELF), loads its contents into memory, and transfers control to its entry point. But the program will not work because of the system calls.
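As an illustration (x86-64, GCC/Clang inline assembly; the numbers below are the well-known write syscall numbers for each kernel), the instruction is the same syscall opcode on both systems, but the number placed in rax means different things to different kernels:

```c
#include <stddef.h>

static long raw_write(long fd, const void *buf, size_t len)
{
#ifdef __APPLE__
    long nr = 0x2000004; /* XNU: BSD syscall class + write */
#else
    long nr = 1;         /* Linux x86-64: write */
#endif
    long ret;
    __asm__ volatile ("syscall"                 /* same opcode on both OSes */
                      : "=a"(ret)
                      : "a"(nr), "D"(fd), "S"(buf), "d"(len)
                      : "rcx", "r11", "memory");
    return ret;
}

int main(void)
{
    raw_write(1, "hi\n", 3);
    return 0;
}
```

Run the Linux build's request number against XNU (or vice versa) and the kernel performs some other service, or none at all.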
Supplement
You might consider translating all the system calls in the program. Changing the primary number that identifies the service request might be feasible; it might not require changing the executable too much. For example, if 37 is a "write to file" request on macOS, your program loader might change it to 48 for Windows. However, the system calls also require other data to be passed, such as pointers to buffers, lengths, and so on, and there are likely many discrepancies in how those are passed, so macOS requests cannot easily be translated into Windows requests. Also, it can be technically challenging to identify all the places in a program where a certain instruction is used: some of the contents of memory of a loaded program are instructions and some are data. Most normal programs may be well-behaved and easy to analyze in this regard, but not all are.
Another potential issue is that programs may expect to have certain modes set in the processor, and the host operating system may or may not have set those modes as needed.
I have an application, written in C, which runs on multiple platforms: Windows and various flavors of Unix. The two most important are Linux, then Windows.
One of the core algorithms could benefit from CUDA acceleration. However, everything must run normally if the system isn't CUDA-capable (or the user doesn't specifically ask for GPGPU acceleration).
So, I need to make sure the application tries to load the CUDA libraries only under the right circumstances.
On Windows, the delay-load mechanism makes this fairly easy.
Is there a similarly easy mechanism to do this on Linux? Or do I have to go through the contortions involved with dlopen()?
You have two choices, both perfectly valid:
As you note in your question, use dlopen to load the dynamic libraries at runtime (a minimal sketch follows this list)
Statically link the CUDA toolkit libraries into your application. NVIDIA have been shipping static versions of the toolkit libraries for Linux for several major release cycles, and static linking is fully supported. The downside of this approach is application size.
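For the first option, a minimal dlopen sketch might look like this (cuInit is a real driver-API entry point; the error handling is deliberately thin, and the fallback behavior is an assumption about your application):

```c
#include <dlfcn.h>
#include <stdio.h>

typedef int (*cuInit_t)(unsigned int);

/* Returns 1 if the CUDA driver is present and initialised, 0 otherwise. */
int try_enable_cuda(void)
{
    void *h = dlopen("libcuda.so.1", RTLD_NOW | RTLD_GLOBAL);
    if (h == NULL) {
        fprintf(stderr, "CUDA driver not found: %s\n", dlerror());
        return 0;                       /* fall back to the CPU path */
    }
    cuInit_t cu_init = (cuInit_t)dlsym(h, "cuInit");
    if (cu_init == NULL || cu_init(0) != 0) { /* CUDA_SUCCESS == 0 */
        dlclose(h);
        return 0;
    }
    return 1;                           /* GPU path available */
}
```

Link with -ldl on older glibc; the rest of the driver API would be resolved the same way, via dlsym.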
I have been asked to write a program in C which should debug another program in C and then store the value of each variable of every line, loop, or function in a log file.
I have been searching the internet and found articles on debugging using GDB.
Can I somehow use GDB in my program for this purpose and then store the values of each variable line by line?
I've got basic knowledge of C/C++ so please reply in simple terms.
Thanks
Debuggers depend on some special capability of the hardware, which must be exposed by the operating system (if any).
The basic idea is that the hardware is configured to transfer control to a debugger stub either after every instruction of the target program, after certain types of instructions such as system calls, or after those meeting a hardware breakpoint condition. Typically this looks like an interrupt, supervisor exception, or the like; the details are very platform-specific.
As mentioned in comments, on Linux you use the ptrace functionality of the kernel to interact with the debugger support provided by the hardware and the kernel, abstracting away a lot of the hardware-unique detail and managing the permission issues. Typically you must either be the same user ID as the process being debugged, or be the superuser (root). Linux's ptrace also gives you an indirect ability to do things like access the memory (literally, the address space) of the target application, something critical to debugger functionality which you cannot ordinarily do from another user-mode program on a multitasking operating system.
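Here is a minimal ptrace sketch (Linux, x86-64) that single-steps a child and prints its instruction pointer; a real tool would additionally read variables with PTRACE_PEEKDATA guided by the compiler's debug info:

```c
#include <stdio.h>
#include <unistd.h>
#include <sys/ptrace.h>
#include <sys/user.h>
#include <sys/wait.h>

int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <program> [args...]\n", argv[0]);
        return 1;
    }

    pid_t pid = fork();
    if (pid == 0) {
        ptrace(PTRACE_TRACEME, 0, NULL, NULL); /* let the parent trace us */
        execvp(argv[1], &argv[1]);
        _exit(127);                            /* exec failed */
    }

    int status;
    waitpid(pid, &status, 0);                  /* child stops at exec */
    while (WIFSTOPPED(status)) {
        struct user_regs_struct regs;
        ptrace(PTRACE_GETREGS, pid, NULL, &regs);
        printf("rip = %llx\n", (unsigned long long)regs.rip);
        ptrace(PTRACE_SINGLESTEP, pid, NULL, NULL);
        waitpid(pid, &status, 0);              /* loop ends when child exits */
    }
    return 0;
}
```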
Other operating systems will have different methods. Some embedded targets use debug pods which connect your development machine to the embedded board by a few wires. In other cases, debug capability built into the hardware is managed by a small program running on the target processor, which then talks back over a serial or network port to the full debugger program residing on the development machine.
A program such as GDB can do more than just the basics of setting debug stop conditions, dumping registers, and dumping program instructions. Much of its code deals with annotating what it displays based on debug metadata optionally left behind by compilers, walking back through stack frames, and giving the user powerful tools to configure all of this - and of course it does most of this in a target-independent way, with the target-unique code mostly confined to a few interchangeable directories.
You can indeed "drive" GDB from another program; many, many GUI-type debuggers do exactly that, existing as graphical front ends for GDB. However, if you were assigned to write a debugger, doing it that way may or may not be consistent with your assignment.
I'm looking to make a custom filesystem for a project I'm working on. Currently I am looking at writing it in Python combined with fusepy, but it got me wondering how a compiled, non-userspace filesystem is made in Linux. Are there specific libraries that you need to work with, or functions you need to implement, for the mount command to work properly? Overall, I'm not sure how the entire process works.
Yup, you'd be programming to the kernel interfaces, specifically the VFS layer at a minimum.
'Full' documentation is in the kernel tree: http://www.mjmwired.net/kernel/Documentation/filesystems/vfs.txt. Of course, the fuse kernel module is programmed to exactly the same interface.
This, however, is not what you'd call a library. It is a kernel component and intrinsically there, so the kernel doesn't have to know how a filesystem is implemented to work with one.
If you'd like to write it in Python, fuse is a good option. There are lots of tutorials for this, such as the one here: http://sourceforge.net/apps/mediawiki/fuse/index.php?title=FUSE_Python_tutorial
In short: Linux is a monolithic kernel with some module-loading capabilities. That means that every kernel feature (filesystems, scheduler, drivers, memory management, etc.) is part of the one big program called Linux. Loadable modules are just a specialized way of run-time linking, which allows the user to pick those features as needed, but they're all still developed mostly as a single program.
So, to create a new filesystem, you just add new C source files to the kernel code, defining the operations your filesystem has to perform. Then create an initialization function that allocates a new instance of the VFS structure, fills it with the appropriate function pointers, and registers it with the VFS.
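A hedged sketch of that registration step as a loadable module (the callback names and helpers match the long-standing kernel API, but the details vary between kernel versions):

```c
#include <linux/module.h>
#include <linux/fs.h>
#include <linux/err.h>

static struct dentry *myfs_mount(struct file_system_type *fs_type,
                                 int flags, const char *dev_name, void *data)
{
    /* A real filesystem would build its superblock here, typically via
       mount_nodev() or mount_bdev(). */
    return ERR_PTR(-ENOSYS); /* placeholder: not implemented in this sketch */
}

static struct file_system_type myfs_type = {
    .owner   = THIS_MODULE,
    .name    = "myfs",            /* the name used by: mount -t myfs ... */
    .mount   = myfs_mount,
    .kill_sb = kill_litter_super, /* generic teardown helper */
};

static int __init myfs_init(void)
{
    return register_filesystem(&myfs_type); /* makes "myfs" known to the VFS */
}

static void __exit myfs_exit(void)
{
    unregister_filesystem(&myfs_type);
}

module_init(myfs_init);
module_exit(myfs_exit);
MODULE_LICENSE("GPL");
```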
Note that FUSE is nothing more than a userlevel accessible API to do the same, so the FUSE hooks correspond (roughly) to the VFS operations.
I am new to the parallel computing world. Can you tell me whether it is possible to run C++ code that uses MPI routines on my dual-core laptop, or is there any simulator/emulator for doing that?
Most MPI implementations use shared memory for communication between ranks that are located on the same host. Nothing special is required in terms of setting up the laptop.
Using a dual-core laptop, you can run two ranks, and the OS scheduler will tend to place them on separate cores. The WinXP scheduler tends to enforce some degree of "CPU binding", because by default jobs tend to be scheduled on the core where they last ran. However, most MPI implementations also allow for an explicit "CPU binding" that will force a rank to be scheduled on one specific core. The syntax for this is non-standard and must be obtained from the specific implementation's documentation.
You should try to use "the same" version and implementation of MPI on your laptop that the university computers are running. That will help to ensure that the MPI runtime flags are the same.
Most MPI implementations ship with some kind of "compiler wrapper" or at least a set of instructions for building an application that will include the MPI library. Either use those wrappers, or follow those instructions.
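For instance, a minimal MPI "hello" in C (the file name hello_mpi.c is just an example) can be built and launched locally with the wrapper, e.g. mpicc hello_mpi.c -o hello_mpi and then mpirun -np 2 ./hello_mpi:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);               /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of ranks */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
```

With -np 2 on a dual-core laptop, the two ranks communicate over shared memory exactly as described above.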
If you are interested in a simulator of MPI applications, you should probably check SMPI.
This open-source simulator (in which I'm involved) can run many MPI C/C++/Fortran applications unmodified, and it forecasts the runtime of the application rather accurately, provided that you have an accurate description of your hardware platform. Both online and offline studies are possible.
There are many other advantages to using a simulator to study MPI applications:
Reproducibility: several runs lead to the exact same behavior unless you ask otherwise. You won't have any heisenbugs where adding some more tracing changes the application behavior;
What-if analysis: the ability to test on platforms that you don't have access to, or that are not built yet;
Clairvoyance: you can observe every part of the system, even at the network core.
For more information, see this presentation or this article.
The SMPI framework can even formally study the correctness of MPI applications through exhaustive testing, as shown in that presentation.
MPI messages are transported via TCP networking (there are other high-performance possibilities like shared memory, but networking is the default). So it doesn't matter at all where the application runs, as long as the nodes can connect to each other. I guess you want to test the application on your laptop, so the nodes are all running locally and can easily connect to each other via the loopback network.
I am not quite sure if I do understand your question, but a laptop is a computer just like any other. Providing you have set up your MPI libs correctly and set your paths, you can, of course, use MPI routines on your laptop.
As far as I am concerned, I use Debian Linux (http://www.debian.org) for all my parallel stuff. I have written a little article dealing with how to get MPI running on Debian machines. You may want to refer to it.