I am new to the Android kernel and mobile operating systems, and I have a few questions regarding the Android kernel.
1) Does Android OS have a kernel mode and a user mode like normal desktop OSs? Does it also support things like virtual memory? I also heard about the Dalvik VMM. Is a copy of the Dalvik VMM created for each and every process?
2) Another question: I heard Android creates a separate file system for each and every process (every application). Is this true? If so, how does the OS maintain these file systems, and where are they mounted? Does it have a hierarchy like Unix-based systems?
3) Another question is regarding IPC in Android. What are Binders in Android? How do they differ from normal IPC mechanisms like pipes, message queues, etc.?
4) A final question, not related to Android: how does the driver address flash-based disks like solid state drives? For example, a normal HDD block can be identified by cylinder, head, and sector.
1. The "Android kernel" is the Linux kernel.
1a. No, you didn't hear about the "Dalvik VMM", you heard about the "Dalvik VM", which is simply a new kind of Java VM. It runs Java apps. No magic. No, there isn't somehow a Dalvik VM associated with "every process", but yes, each application runs in an independent process.
2. No. There's a directory structure, not distinct filesystems.
3. For how Binder compares with the classic IPC mechanisms, see the existing question "Why Binder?".
4. Android uses the usual Linux MTD and mtdblock devices. And the world is LBA now, whether for flash or hard drives; CHS is only for those time-travelling thirty years into the past.
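To make the LBA point concrete, here is a small user-space sketch in C (the device path and the 512-byte block size are assumptions, and reading a raw device node normally needs root): a logical block address is just a linear index, so block N lives at byte offset N * block_size, with no cylinders, heads or sectors involved.
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    const uint64_t lba = 2048;            /* linear block address to read */
    const uint32_t block_size = 512;      /* assumed logical block size */
    uint8_t block[512];

    int fd = open("/dev/sda", O_RDONLY);  /* raw device node, path is an example */
    if (fd < 0) { perror("open"); return 1; }

    /* No CHS geometry: just read block_size bytes at offset lba * block_size. */
    if (pread(fd, block, sizeof block, (off_t)(lba * block_size)) < 0) {
        perror("pread");
        close(fd);
        return 1;
    }
    printf("first byte of LBA %llu: 0x%02x\n", (unsigned long long)lba, block[0]);
    close(fd);
    return 0;
}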
Does the Android kernel have a kernel space and user space?
The kernel used on Android-powered devices is a 2.6 Linux kernel providing core system services such as memory management, process management, the network stack, and the driver model.
So yes, it does have a kernel space and a user space. You have the regular /proc file system for kernel/user-space communication, for example.
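Concretely, this tiny user-space program reads information the kernel exports through /proc; this is plain Linux, nothing Android-specific is assumed:
#include <stdio.h>

int main(void)
{
    char line[256];
    /* /proc/version is generated by the kernel and read from user space. */
    FILE *fp = fopen("/proc/version", "r");
    if (fp == NULL) { perror("fopen"); return 1; }
    while (fgets(line, sizeof line, fp) != NULL)
        fputs(line, stdout);
    fclose(fp);
    return 0;
}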
It is true that every application runs in its own process with its own instance of the Dalvik VM.
You can read more about it on the What is Android? page.
What does the application file structure look like on Android?
Yes, every application has its own directory structure for application data such as databases, shared preferences, and other application-specific files, which looks like this:
/data/data/packagename
Other than that the actual .apk files are located in
/data/app
I'm not quite sure about the part of your question on a UNIX-based hierarchy. I guess you want to know whether applications are placed in /usr/bin and so on; then no. The exception is if you write native binaries yourself and build your own custom image, in which case you should definitely place your system binaries in the default FHS locations.
On question three, I'm not quite sure what you are referring to. If you mean UNIX IPC, then it's a 2.6 kernel with all of its core functionality, as stated above. If you are referring to remote procedure calls through the APIs, then you might take a look at Remote procedure calls.
Question four is beyond my knowledge or I didn't get your question.
Generally, I'd recommend some very interesting reads:
What is Android?, as mentioned above
Android Application Fundamentals
Android Sources page
Hope it helped somehow.
1) If you want permissions for various operations you need to enable them in the manifest.
2) Yes. Each application has its own private file storage, but the files there are accessed by file name only (no path). If you want to use external storage such as an SD card, you need to declare the permission in the manifest and use a fully qualified path and file name.
3) I am not familiar with Android Binders (though I see them while debugging), but passing messages between tasks is very straightforward.
4) Flash based memory blocks are identified by address. Flash is not RAM, but it is random access.
This question doesn't sound trivial, but I'll give it a try.
What I need to do is create a virtual filesystem and mount it as if it were a hard drive. When my application starts, a new hard drive should appear in the list of available devices. Now, I need that drive to be virtual. In particular, I need to be able to generate its content in a dynamic way.
Basically, I would want to be able to:
import some magic library
register callbacks for operations like listing a folder, reading a given byte range of a file, and so on
run my program
a new drive appears, and when I do an ls, it's my program that answers that ls via a callback
Is this even possible? In principle, I should be able to simulate a drive, but I wouldn't even know where to start.
On Linux, FUSE support is built into the kernel and the library ships with the OS. On Mac OS X there's OSXFUSE, which is similar to FUSE on Linux.
On Windows there's CBFS Connect by our company, which offers its own API and a FUSE compatibility layer.
Mobile platforms (iOS, Android) neither offer such functions nor give a way to implement them.
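To give a feel for the FUSE route on Linux (and OSXFUSE on Mac OS X), here is a minimal sketch against the libfuse 2.x high-level API. The vfs_* names and the single generated file are mine and error handling is skeletal, but it shows the shape of the callback registration: your program answers the ls and the reads.
#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <string.h>
#include <errno.h>
#include <sys/stat.h>

static const char *hello_path = "/hello.txt";
static const char *hello_str  = "content generated on the fly\n";

/* Called for stat(): report a root directory and one read-only file. */
static int vfs_getattr(const char *path, struct stat *st)
{
    memset(st, 0, sizeof(*st));
    if (strcmp(path, "/") == 0) {
        st->st_mode = S_IFDIR | 0755;
        st->st_nlink = 2;
    } else if (strcmp(path, hello_path) == 0) {
        st->st_mode = S_IFREG | 0444;
        st->st_nlink = 1;
        st->st_size = (off_t)strlen(hello_str);
    } else {
        return -ENOENT;
    }
    return 0;
}

/* Called for ls: list the (single) directory entry. */
static int vfs_readdir(const char *path, void *buf, fuse_fill_dir_t filler,
                       off_t offset, struct fuse_file_info *fi)
{
    if (strcmp(path, "/") != 0)
        return -ENOENT;
    filler(buf, ".", NULL, 0);
    filler(buf, "..", NULL, 0);
    filler(buf, hello_path + 1, NULL, 0);
    return 0;
}

static int vfs_open(const char *path, struct fuse_file_info *fi)
{
    return strcmp(path, hello_path) == 0 ? 0 : -ENOENT;
}

/* Called for read(): hand back the requested byte range. */
static int vfs_read(const char *path, char *buf, size_t size, off_t offset,
                    struct fuse_file_info *fi)
{
    size_t len = strlen(hello_str);
    if ((size_t)offset >= len)
        return 0;
    if ((size_t)offset + size > len)
        size = len - (size_t)offset;
    memcpy(buf, hello_str + offset, size);
    return (int)size;
}

static struct fuse_operations vfs_ops = {
    .getattr = vfs_getattr,
    .readdir = vfs_readdir,
    .open    = vfs_open,
    .read    = vfs_read,
};

int main(int argc, char *argv[])
{
    return fuse_main(argc, argv, &vfs_ops, NULL);
}
/* Build: gcc myfs.c -o myfs `pkg-config fuse --cflags --libs`
   Run:   ./myfs /some/empty/mountpoint, then ls the mountpoint. */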
I am currently developing the OS for a consumer electronics product my company is building. I have settled on FreeRTOS as the backbone for our OS, and am working diligently to implement hardware functionality within the OS. However, I have run into an issue concerning running third-party applications from within FreeRTOS.
Originally I considered a task to be an application, where basically you had "myapplication.c" and "myapplication.h" containing all your application's necessary functions, and the code would reside within the for(;;) loop of the task (acting as a main while loop). Then, when the user decides to run that application, a function pointer is passed to a queue, which my app_launcher task then uses to create a new task for the third-party application (sketched just below).
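In sketch form (illustrative names, no error handling), the launcher currently looks something like this:
#include "FreeRTOS.h"
#include "task.h"
#include "queue.h"

typedef void (*app_entry_t)(void *);   /* an "application" is just a task function */

static QueueHandle_t app_queue;

/* A third-party application, currently compiled into the firmware image. */
static void my_application(void *params)
{
    for (;;) {
        /* application main loop */
        vTaskDelay(pdMS_TO_TICKS(100));
    }
}

/* Waits for a function pointer and spawns it as a new task. */
static void app_launcher(void *params)
{
    app_entry_t entry;
    for (;;) {
        if (xQueueReceive(app_queue, &entry, portMAX_DELAY) == pdPASS)
            xTaskCreate(entry, "app", configMINIMAL_STACK_SIZE * 4, NULL,
                        tskIDLE_PRIORITY + 1, NULL);
    }
}

void launcher_init(void)
{
    app_queue = xQueueCreate(4, sizeof(app_entry_t));
    xTaskCreate(app_launcher, "launcher", configMINIMAL_STACK_SIZE * 2, NULL,
                tskIDLE_PRIORITY + 2, NULL);
}

/* Elsewhere, "the user runs an app" becomes: */
void request_app_launch(void)
{
    app_entry_t entry = my_application;
    xQueueSend(app_queue, &entry, 0);
}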
The problem with this approach, however, is that the OS will already be compiled and resident on the microcontroller, and applications will be installed and deleted as the user sees fit... So obviously applications need to be compiled and executable from the OS. On a standard Unix machine, I would use something like fork/exec to select the executable and give it its own process. However, I cannot find similar functionality within FreeRTOS. My other idea is adopting a scripting language for app development, but again I'm not sure how to launch those applications...
So the question is: how do I get FreeRTOS to run applications from third-party developers that aren't already baked into the OS?
FreeRTOS (and most RTOSes for that matter) does not work like a general-purpose operating system (GPOS); it is not generally designed to dynamically load and execute arbitrary user-supplied applications. In most cases you use an RTOS because you require hard real-time response, and the execution of third-party code could compromise that.
Most RTOSes (FreeRTOS included) are no more than static-link libraries, where your entire embedded application is statically linked with the RTOS and executes as a single multi-threaded program.
Again, many RTOSes (like FreeRTOS) are not operating systems in the same sense as a GPOS such as Linux. Typically the RTOS services available are the real-time scheduler, inter-process communication (IPC), thread synchronisation, and timers. Middleware such as a file system and a network stack, for example, are either optional extensions or must be integrated from third-party code.
One problem you will have with FreeRTOS trying to achieve your aim is that a "task" is analogous to a "thread" rather than a "process" in the sense of a GPOS process model. A task typically operates in the same memory space as other tasks with no memory protection between tasks. Tasks are not separate programs, but threads within a single application.
If your target has no MMU then memory protection may be limited in any case, but you may still want third-party applications to be conceptually independent from the OS. If your processor does not have an MMU, then running arbitrary third-party dynamically loaded code may be a problem for system integrity, safety and security. Even with an MMU a simple RTOS kernel such as FreeRTOS won't use it.
Operating systems with real-time scheduling that can load and run application code dynamically as separate processes include:
Windows Embedded Compact (formerly Windows CE)
QNX Neutrino
OS-9
Also, VxWorks has the ability to load partially linked object code and dynamically link it to the already loaded code. This is not the same as a process model, but is more akin to a dynamic-link library. What makes it worth mentioning in this context is that the VxWorks shell can invoke any function with external linkage by name, so you can load an object file implementing a function and then run that function. You could in principle implement the same functionality on FreeRTOS, but it is non-trivial: the shell is one thing, but dynamic loading and linking requires the application symbol table to be target-resident.
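For comparison, on a GPOS the "load an object, then invoke a function by name" model is what dlopen()/dlsym() give you out of the box; on FreeRTOS you would have to write the loader, relocation and symbol-table handling yourself. A sketch (the plugin file name and entry-point name are hypothetical):
#include <stdio.h>
#include <dlfcn.h>

int main(void)
{
    /* Load a separately compiled module into this process. */
    void *handle = dlopen("./plugin.so", RTLD_NOW);
    if (handle == NULL) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    /* Look the entry point up by name, much like the VxWorks shell does. */
    int (*entry)(void) = (int (*)(void))dlsym(handle, "plugin_main");
    if (entry == NULL) { fprintf(stderr, "%s\n", dlerror()); dlclose(handle); return 1; }

    printf("plugin returned %d\n", entry());
    dlclose(handle);
    return 0;
}
/* Build the host: gcc host.c -o host -ldl
   Build a plugin: gcc -shared -fPIC plugin.c -o plugin.so */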
If you don't need hard real-time (or your real-time requirements are "soft") and your target has sufficient resources, you may be better served deploying Linux or uClinux which are increasingly used in embedded systems.
If the code your end-users need to run is tightly related to the purpose of your device rather than "general purpose" in nature, then another possibility for allowing end-users to run code is to integrate a scripting-language interpreter such as Lua. In this case you would simply load the script from a file system and pass it to the script interpreter. For more general-purpose requirements, a Java VM may be a possibility.
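As a rough sketch of the scripting route, this is what embedding Lua and running a user-supplied script looks like with the standard Lua C API (the script path is up to you, and on a FreeRTOS target the printf would become your own logging):
#include <stdio.h>
#include "lua.h"
#include "lualib.h"
#include "lauxlib.h"

/* Run one third-party script in a fresh interpreter state. */
int run_user_script(const char *path)
{
    lua_State *L = luaL_newstate();
    if (L == NULL)
        return -1;
    luaL_openlibs(L);    /* expose the standard libraries; trim this on a constrained target */

    if (luaL_dofile(L, path) != 0) {       /* load and execute the script */
        printf("script error: %s\n", lua_tostring(L, -1));
        lua_close(L);
        return -1;
    }
    lua_close(L);
    return 0;
}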
Due to request, here is the workaround I found for my problem. The issue was launching other applications from FreeRTOS. This was accomplished by using the system() function provided by the newlib library. Thus, I can place an application in flash until it's needed, then launch it using the newlib functions provided. This also allows me to launch programs dynamically, without hard-coding the code or the name of the application; I just need to provide system() with a string pointing to the app's location in memory.
I am wondering if it's possible to write an application that will access a foreign filesystem, but without needing support for that filesystem from the operating system. For example, I'd like to write an app in C that runs on Mac OS X that can browse / copy files from an ext2/ext3 formatted disk. Of course, you'd have to do all the transfers through the application (not through the system using cp or the Finder), but that would be OK for my purpose. Is this possible?
There are user space libraries that allow you to access file systems.
The Linux-NTFS library (libntfs) allows you to access NTFS file systems, and there are user-space programs like ntfsfix that operate directly on the file system.
E2fsprogs does the same for ext2, ext3 and ext4 filesystems.
As Basile mentioned, Mtools is another one that provides access to FAT partitions.
There is even a program that does exactly what you're looking for on Windows: it's called ext2explore and allows you to access ext2 partitions from Windows.
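As a taste of what the e2fsprogs route looks like from user space, here is a rough sketch using libext2fs; the device/image path is an assumption, and a real browser would go on to iterate directories and read inodes:
#include <stdio.h>
#include <ext2fs/ext2fs.h>

int main(void)
{
    ext2_filsys fs;

    /* Open the ext2/ext3 volume read-only through the plain Unix I/O manager;
       this works on a raw device node or on an image file. */
    errcode_t err = ext2fs_open("/dev/disk2s2", 0, 0, 0, unix_io_manager, &fs);
    if (err) {
        fprintf(stderr, "ext2fs_open failed (code %ld)\n", (long)err);
        return 1;
    }

    printf("inodes: %u, blocks: %u\n",
           fs->super->s_inodes_count, fs->super->s_blocks_count);

    ext2fs_close(fs);
    return 0;
}
/* Build (with the e2fsprogs development headers installed):
   gcc browse.c -o browse -lext2fs -lcom_err */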
It is possible. For example, the GNU Mtools utilities do exactly that (assuming a way to access the raw device or partition) for MS-DOS FAT file systems.
However, file systems inside the kernel are usually very well tested and optimized.
Yes and no. For a regular user application it is usually not possible, because access to block devices is restricted to root only. Each block device would have to grant the user read/write access for this to work. At best this would need a server/client approach, where a service is started on the machine and configured to grant the permissions on a per-block-device basis.
The somewhat easier alternative would be for you to use the MacFUSE implementation.
Look here:
http://code.google.com/p/macfuse/
http://groups.google.com/group/macfuse?pli=1
The MacFUSE project seems to be no longer maintained, but it can give you a starting point for your project.
The quick-and-dirty approach is the following, as root: chmod 666 /dev/diskN
You can hijack syscalls and library calls from your application and then redirect reads/writes to anything like a KV store or a distributed DB layer (using the regular calls for the "virtual devices" that you do not support).
Then, the possibilities are boundless because you don't have to reach the physical/virtual devices when someone asks for them (resolving privilege issues).
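On Linux, the usual way to hijack library calls is an LD_PRELOAD shim. Here is a minimal sketch that logs every open() and then forwards it to the real libc; the redirection to a KV store or DB layer would replace the pass-through call:
#define _GNU_SOURCE
#include <stdio.h>
#include <stdarg.h>
#include <fcntl.h>
#include <dlfcn.h>
#include <sys/types.h>

int open(const char *path, int flags, ...)
{
    mode_t mode = 0;
    if (flags & O_CREAT) {                 /* the mode argument only exists with O_CREAT */
        va_list ap;
        va_start(ap, flags);
        mode = (mode_t)va_arg(ap, int);
        va_end(ap);
    }

    /* Find the next "open" in the link order, i.e. the real libc one. */
    int (*real_open)(const char *, int, ...) =
        (int (*)(const char *, int, ...))dlsym(RTLD_NEXT, "open");

    fprintf(stderr, "intercepted open(\"%s\")\n", path);
    return real_open(path, flags, mode);   /* pass through unchanged for now */
}
/* Build: gcc -shared -fPIC hook.c -o hook.so -ldl
   Run:   LD_PRELOAD=./hook.so ls /tmp */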
I like programming challenges, and writing a kernel seems like quite a programming challenge.
Unfortunately, kernels are particularly hard to test: they are basically the core of an operating system, so they can't easily be run on top of another operating system.
However, I know about applications called Virtual Machines that can emulate computer hardware.
What is the easiest/best way to develop and test kernels (C + Assembly) using virtual machines?
While Bochs seems to be better at letting you know when something goes horribly wrong with your pet OS, it is very slooooow! I use Virtual PC for general-purpose testing and Bochs when things get murky.
Also, you will more than likely be booting the OS every 2 minutes, so it helps to have some sort of automated way to build a boot image & fire off the Virtual PC.
I built a GRUB boot floppy image with all the necessary stuff to get it to boot the Kernel.Bin from the root. I use a batch file to copy this image to the virtual project directory, use FAT Image Generator to copy my kernel into the image, and then just launch the Virtual PC project. Voila!
Excerpt from my batch file:
REM Start from a clean base floppy image
COPY Images\Base.vfd Images\Boot.vfd /Y
REM Inject the freshly built kernel into the FAT floppy image
fat_imgen.exe modify Images\Boot.vfd -f Source\Bin\KERNEL.BIN
REM Copy the boot image to the Virtual PC project and launch it
COPY Images\Boot.vfd Emulators\VirtualPC\ /Y
START Emulators\VirtualPC\MyOS.vmc
One last suggestion: Set the VirtualPC process priority to low - trust me on this one!
I'd be happy to exchange some code!
Tools: DJGPP, NASM, GRUB.
Code: osdev.org, osdever.net
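In that spirit, here is roughly the smallest KERNEL.BIN-style payload GRUB can load: a Multiboot header plus a C entry point that writes straight to VGA text memory. It is only a sketch; it assumes -ffreestanding, a linker script that keeps the .multiboot section within the first 8 KiB of the image, and a tiny assembly stub that sets up a stack and calls kmain.
#define MB_MAGIC    0x1BADB002u
#define MB_FLAGS    0x0u
#define MB_CHECKSUM (0u - (MB_MAGIC + MB_FLAGS))   /* magic + flags + checksum == 0 */

/* GRUB scans the start of the image for this header. */
__attribute__((section(".multiboot"), aligned(4), used))
static const unsigned int multiboot_header[3] = { MB_MAGIC, MB_FLAGS, MB_CHECKSUM };

void kmain(void)
{
    volatile unsigned short *vga = (unsigned short *)0xB8000;   /* text-mode buffer */
    const char *msg = "Hello from my kernel";

    for (int i = 0; msg[i] != '\0'; ++i)
        vga[i] = (unsigned short)(0x0F00 | msg[i]);             /* white on black */

    for (;;) { }                                                /* hang */
}
The batch-file workflow above then just copies the resulting binary into the boot floppy image.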
You might be interested in looking at HelenOS. It's a from-scratch microkernel that has been ported to many architectures (it boots just fine on bare metal), developed using simulators such as Simics and QEMU.
We use a static GRUB that is copied to the final ISO during the build process. Some things just have to be that way until the OS becomes self-hosting. I highly recommend NOT implementing your own userspace C library unless you really do want to do everything from scratch... you'll become self-hosting much sooner :)
Though Simics is non-free, I highly recommend it (and its built-in debugging/profiling tools) while making your kernel. Once you have some kind of kernel console and logger in place, QEMU does a very nice job.
It's straightforward. Set up a virtual machine, write your kernel, copy it to the virtual machine, boot the virtual machine.
You'll need to be more specific if you want more specific advice.
Probably just set up a machine (x86, I guess), and then investigate exactly how it behaves during boot. There should be one or more files in the host machine's file system that act as the virtual machine's disk, and you'd need to put boot-sector information there that causes your in-development kernel to boot.
That would of course mean that the build system on the host has a way to write the kernel to the virtual machine's disk image, which might vary in difficulty.
Picking one at random, Bochs seems to support editing the boot media from the outside using standard tools like dd, etc.
The first question you need to ask yourself is what hardware architecture you are targeting. I'll assume for the sake of this discussion that you are targeting the IA-32 architecture, which would probably be a wise choice as there is plenty of readily available documentation for that processor.
If you're truly serious about this undertaking, then you will definitely want to run your debug/code/build/deploy cycle against an emulator or VM. Someone mentioned Bochs, which is very popular. If emulation speed is your thing, there is also an emulator called QEMU that is faster than Bochs.
I'd suggest that your development environment run under Linux or Windows, which again would probably be a wise choice due to the available documentation for those dev environments.
Make is your friend. Use it to automate the build/execute process. I'd advise you to pick your toolsets/compilers up front, and spend some time learning them well. It will save you in the long run.
We have two web servers with load balancing. We need to share some files between those servers. These would be uploaded files, session files, various files that php applications create.
We don't want to use a heavyweight, no-longer-maintained, or commercial solution. We're looking for lightweight open-source software that would work as a shared file system. It should be really easy to set up, must be highly available, and must be very fast. It should work with Red Hat Linux.
We looked at solutions such as DRBD with synchronous replication, but we can't use them because they can't work with an underlying filesystem like ext3.
OCFS2 may be up to snuff by now; it's worth checking out at least. It's in the mainline Linux kernel tree, and http://oss.oracle.com/projects/ocfs2/ has some info on it. I've set it up before, and it was pretty easy to get going.
DRBD is good for syncing over a network (use a direct crossover connection if at all possible), but ext3 is not designed to be aware of changes that occur underneath it, at the block-device level. For that reason you need a filesystem designed for such purposes, such as the Global File System (GFS). To the best of my knowledge, Red Hat has support for GFS.
The DRBD manual will give you an overview of how to use GFS with DRBD.
http://www.drbd.org/users-guide/ch-gfs.html
Don't take this as a final answer - I have not researched or used a multi-master system before, but at least this might give you something to go on.
Ideally, you would only sync the part of the data that's shared between the webservers.