embedded linux, application state freeze, relaunch - c

We have an embedded application that now requires its state to be saved and reloaded, just like in PC games, where you save before you have to go out and breathe some fresh air. The product is quite evolutionary in nature, with no proper design, so identifying exactly which data needs to be saved is not an option.
The software is in C, so all data has fixed addresses (the .data segment); it is also deterministic, and there are no dynamic memory allocations. So, theoretically, I can take a backup of this data segment to a file and, on relaunch of the application, restore it from the file. This approach will probably save a lot more data than what is required, but I am OK with that.
How can I do this in a short execution time?
Also, how can I identify the start and end of the .data segment at run-time?

You want application checkpointing, so perhaps the Berkeley Lab Checkpoint/Restart library might help you.
You could perhaps use the mmap(2) system call, if you are sure all the data has fixed addresses, etc.
To learn about your current memory segments and mappings, read (from your application) the /proc/self/maps file. There is also /proc/self/smaps, etc. Learn more about proc(5), i.e. /proc/.
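As a sketch of the /proc/self/maps approach (the file names and the anchor variable are made up for illustration): locate the writable mapping that contains a known static variable, then dump that whole region with a single write(2), which keeps the execution time short.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

/* A static variable whose address is known to live in the data segment. */
static int data_anchor = 42;

/* Scan /proc/self/maps for the mapping containing `addr`.
 * Returns 0 on success and fills in the segment bounds, -1 otherwise. */
static int find_segment(const void *addr, unsigned long *start, unsigned long *end)
{
    FILE *maps = fopen("/proc/self/maps", "r");
    char line[512];
    if (!maps)
        return -1;
    while (fgets(line, sizeof line, maps)) {
        unsigned long lo, hi;
        /* Each line starts with "start-end perms ..." in hex. */
        if (sscanf(line, "%lx-%lx", &lo, &hi) == 2 &&
            (unsigned long)addr >= lo && (unsigned long)addr < hi) {
            *start = lo;
            *end = hi;
            fclose(maps);
            return 0;
        }
    }
    fclose(maps);
    return -1;
}

/* Dump the segment containing the anchor to a checkpoint file. */
static int checkpoint(const char *path)
{
    unsigned long start, end;
    if (find_segment(&data_anchor, &start, &end) != 0)
        return -1;
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;
    ssize_t n = write(fd, (void *)start, end - start);
    close(fd);
    return (n == (ssize_t)(end - start)) ? 0 : -1;
}
```

On restore you would read the file back into the same address range. This only works if the addresses really are identical across runs: same binary, and no address-space layout randomization for the data segment.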

Related

Does the write system call to a file reduce memory write cycles, and how to handle this

I have to read/write a single file multiple times a day; in fact, maybe every second I have to update the file.
How does this affect the memory (I am using eMMC flash)?
Since eMMC has a limited number of write cycles, beyond which it will corrupt, please suggest the best way to handle this.
How about using mmap and msync; is there any possibility to avoid writes?
If I am using mmap and writing frequently, will it also write to flash every time I write into the shared mapped memory?
It could depend upon the computer, the file system, the mount options, the kernel (and version).
Perhaps (and actually probably) your file is sitting in the page cache and write(2)-ing it often does not update the flash memory every time.
You could use something other than a plain file (e.g. a database, or sqlite), or write your own data daemon to avoid that. If the data needs to stay in a file, perhaps consider writing your own FUSE filesystem.
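The mmap/msync idea from the question can be sketched like this (the file path, size, and function name are made up): updates go through the page cache, and you call msync only at a rate you choose, so frequent stores do not necessarily hit the flash every time.

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define STATE_SIZE 4096  /* one page of application state */

/* Map a state file into memory. Stores to the mapping land in the
 * page cache; the kernel (or an explicit msync) writes them back. */
static char *map_state(const char *path, int *fd_out)
{
    int fd = open(path, O_RDWR | O_CREAT, 0644);
    if (fd < 0)
        return NULL;
    if (ftruncate(fd, STATE_SIZE) != 0) {  /* ensure the file is big enough */
        close(fd);
        return NULL;
    }
    char *p = mmap(NULL, STATE_SIZE, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) {
        close(fd);
        return NULL;
    }
    *fd_out = fd;
    return p;
}
```

Typical use: update the mapped memory every second, but call msync(p, STATE_SIZE, MS_ASYNC) only every few minutes, so write-back to the eMMC is batched instead of happening on every update.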
We cannot help more without actual code and much more details in your question.

shared memory proper cleanup in linux C

I am developing a shared memory suite of applications, in C, on Ubuntu 12.04 server.
This is my first attempt at dealing with shared memory, so I am not sure of all the pitfalls.
My suite starts with a daemon that creates a shared memory segment, and a few other apps that use it.
During development, of course, the segment can get created and then the program crashes without removing it. I can see it listed using ipcs -m.
Normally, this isn't much of an issue, as the next time I start the "controlling" program, it simply reuses the existing segment.
However, from time to time, the size of the segment changes, and then shmget() fails.
My question...
Before I create and attach the segment, I would like to first determine if it already exists, then if needed, detach it, and then, I think, call shmctl(..., IPC_RMID, ...).
Can I have some help to figure the proper sequence of events to do this?
I think I should use shmctl to see what's up, (number of attachments, if it exists, etc..) but I don't really know.
It may be there is another app still using the segment, of course that app will have to first be stopped, and any other possible exceptions handled.
I was thinking that, in the shared memory segment, I could store the system clock, and when an app detects that the clock in shared memory differs by more than a few seconds, it would detach, or something along those lines.
Thanks, Mark.
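One way to sequence the checks described in the question, sketched with shmctl(IPC_STAT) and shmctl(IPC_RMID) (the key, size, and function name are made up): probe for an existing segment, and remove it only if its size is wrong and nothing is attached to it.

```c
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

/* Get (or recreate) a segment of the wanted size for `key`.
 * If a stale segment with a different size exists and nobody is
 * attached to it, remove it first. Returns the shm id, or -1. */
static int get_or_recreate(key_t key, size_t wanted)
{
    int id = shmget(key, 0, 0);  /* probe: does any segment exist? */
    if (id >= 0) {
        struct shmid_ds ds;
        if (shmctl(id, IPC_STAT, &ds) == 0 &&
            ds.shm_segsz != wanted &&   /* wrong size ... */
            ds.shm_nattch == 0) {       /* ... and no one attached */
            shmctl(id, IPC_RMID, NULL); /* stale leftover: remove it */
        }
    }
    return shmget(key, wanted, IPC_CREAT | 0600);
}
```

If a wrong-sized segment still has attachments, the final shmget() fails just as before, which is the right outcome: the other app has to be stopped first, as the question notes.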

Create an arbitrary FAT filesystem, to file or directly in program memory

I am writing an embedded system, where I am creating a USB mass storage device driver that uses an 8MB chunk of RAM as the FAT filesystem.
Although it made sense at the time to let the OS take my zeroed-out RAM area and format a FAT partition itself, I ran into problems that I am diagnosing on Jan Axelson's (popular author of books on USB) PORTS forum. The problem is related to erasing the blocks in the drive, and may be circumvented by pre-formatting the memory area to a FAT filesystem before USB enumeration. Then I could see whether the drive operates normally. Ideally, the preformatted partition would include a file on it to test the read operation.
It would be nice if I could somehow create and mount a mock 8MB FAT filesystem on my OS (OS X), write a file to it, and export it to an image file for inclusion in my project. Does someone know how to do this? I could handle the rest. I'm not too concerned whether that would be FAT12/16/32 at the moment; optional MBR inclusion would be nice.
If that option doesn't exist, I'm looking for a pre-written utility to create a FAT image file that I could include in my project and upload directly to RAM. This utility would allow me to specify an 8MB filesystem with 512-byte sectors, for instance, and possibly FAT12/FAT16/FAT32.
Is anyone aware of such a utility? I wasn't able to find one.
If not, can someone recommend a first step to take in implementing this in C? I'm hoping a library exists. I'm pretty exhausted after implementing the mass storage driver from scratch, but I understand I might have to 'get crinkled' and manually create the FAT partition. It's not too hard. I imagine some packed structs and some options. I'll get there. I already have resources on the FAT filesystem itself.
I ended up discovering that FatFS has facilities for formatting and partitioning the "drive" from within the embedded device, which relieved me of having to format it manually or use host-side tools.
I would like to cover the steps taken in more detail, but I am exhausted. I may edit in further details at a later time.
There are several; they're normally hidden in the OS source.
On BSD (i.e. OS X) you should have a mkdosfs tool; if not, the source is available all over the place ... here's a random example
http://www.blancco.com/downloads/source/utils/mkdosfs/mkdosfs.c
There's also the mtools package; it's normally used for floppies, but I think it does disk images too.
Neither of these will create partition tables though; you'd need something else if that's required too.
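If you do end up "getting crinkled" and building the image by hand, the packed-struct approach mentioned in the question might start out something like this. All field values and the function name are illustrative; the result is a FAT16 skeleton (boot sector, FAT reserved entries, zeroed data area), not a fully initialized volume guaranteed to mount everywhere.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Minimal FAT16 boot sector / BPB for an 8MB image: 512-byte sectors,
 * 2 sectors per cluster (~8143 clusters, inside the valid FAT16 range),
 * 2 FATs of 32 sectors each, 512 root entries. */
#pragma pack(push, 1)
struct fat16_boot {
    uint8_t  jmp[3];
    char     oem[8];
    uint16_t bytes_per_sector;
    uint8_t  sectors_per_cluster;
    uint16_t reserved_sectors;
    uint8_t  num_fats;
    uint16_t root_entries;
    uint16_t total_sectors16;
    uint8_t  media;
    uint16_t sectors_per_fat;
    uint16_t sectors_per_track;
    uint16_t num_heads;
    uint32_t hidden_sectors;
    uint32_t total_sectors32;
    uint8_t  drive_number;
    uint8_t  reserved;
    uint8_t  boot_sig;
    uint32_t volume_id;
    char     label[11];
    char     fs_type[8];
};
#pragma pack(pop)

static int make_fat16_image(const char *path)
{
    struct fat16_boot bs;
    memset(&bs, 0, sizeof bs);
    memcpy(bs.jmp, "\xEB\x3C\x90", 3);
    memcpy(bs.oem, "MKFSDEMO", 8);
    bs.bytes_per_sector    = 512;
    bs.sectors_per_cluster = 2;
    bs.reserved_sectors    = 1;
    bs.num_fats            = 2;
    bs.root_entries        = 512;
    bs.total_sectors16     = 16384;   /* 8MB / 512 */
    bs.media               = 0xF8;    /* fixed disk */
    bs.sectors_per_fat     = 32;
    bs.boot_sig            = 0x29;
    bs.volume_id           = 0x12345678;
    memcpy(bs.label,   "NO NAME    ", 11);
    memcpy(bs.fs_type, "FAT16   ", 8);

    FILE *f = fopen(path, "wb");
    if (!f)
        return -1;
    uint8_t sector[512] = {0};
    memcpy(sector, &bs, sizeof bs);
    sector[510] = 0x55;               /* boot sector signature */
    sector[511] = 0xAA;
    fwrite(sector, 1, 512, f);

    /* First two FAT entries: media byte + end-of-chain marker. */
    uint8_t fat0[4] = {0xF8, 0xFF, 0xFF, 0xFF};
    for (int i = 0; i < 2; i++) {
        fseek(f, (1L + i * 32) * 512, SEEK_SET);
        fwrite(fat0, 1, 4, f);
    }

    /* Extend the file to the full 8MB (rest stays zeroed). */
    fseek(f, 16384L * 512 - 1, SEEK_SET);
    fputc(0, f);
    fclose(f);
    return 0;
}
```

The same layout can be written straight into the 8MB RAM area on the device instead of a file; on the host side, mkdosfs or mtools will produce a more complete image.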

Securely remove file from ext3 linux

This question has been asked with varying degrees of success in the past...
Are there tools, or C/C++ Unix functions, that I could call to retrieve the location on disk of a file? Not some virtual address of the file, but the disk/sector/block the file resides in?
The goal here is to enable overwriting the actual bits that exist on disk. I would probably need a way to bypass the kernel's address translation. I am willing to consider an x86 asm-based solution...
However, I feel there are tools that do this quite well already.
Thanks for any input on this.
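For the "location on disk" part specifically, Linux provides the FIBMAP ioctl (and the newer FIEMAP). A minimal sketch, assuming Linux headers, and noting that FIBMAP requires CAP_SYS_RAWIO (typically root); the function name is made up:

```c
#include <fcntl.h>
#include <linux/fs.h>   /* FIBMAP */
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

/* Map a logical block of an open file to its physical block number on
 * the underlying filesystem. Returns the block number, or -1 on
 * failure (e.g. not root, or the fs does not support FIBMAP). */
static long physical_block(int fd, int logical_block)
{
    int block = logical_block;  /* in: logical block, out: physical */
    if (ioctl(fd, FIBMAP, &block) != 0)
        return -1;
    return block;
}
```

Multiplying the result by the filesystem block size (obtainable via the FIGETBSZ ioctl) gives a byte offset on the block device. Overwriting at that offset is still subject to all the wear-leveling and remapping caveats discussed in the answer below.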
Removing files securely is only possible under very specific circumstances:
1. There are no uncontrolled layers of indirection between the OS and the actual storage medium.
On modern systems that can no longer be assumed. SSD drives with firmware wear-leveling code do not work like this; they may move or copy data at will, with no logging or possibility of outside control. Even magnetic disk drives will routinely leave existing data in sectors that have been remapped after a failure. Hybrid drives do both...
The ATA specification does support a SECURE ERASE command which erases a whole drive, but I do not know how thorough the existing implementations are.
2. The filesystem driver has a stable and unique mapping of files to physical blocks at all times.
I believe that ext2fs has this property. I think ext3fs and ext4fs also work like this in the default journaling mode, but not when mounted with the data=journal option, which allows file data to be stored in the journal rather than just metadata.
reiserfs, on the other hand, definitely works differently, since it stores small amounts of data along with the metadata, unless mounted with the notail option.
If these two conditions are met, then a program such as shred may be able to securely remove the content of a file by overwriting it multiple times.
This method still does not take into account:
Backups
Virtualized storage
Left over data in the swap space
...
Bottom line:
You can no longer assume that secure deletion is possible. Better assume that it is impossible and use encryption; you should probably be using it anyway if you are handling sensitive data.
There is a reason that protocols regarding sensitive data mandate the physical destruction of the storage medium. There are companies that actually demagnetize their hard disk drives and then shred them before incinerating the remains...

Are there operating systems that aren't based off of or don't use a file/directory system?

It seems like there isn't anything inherent in an operating system that would necessarily require that sort of abstraction/metaphor.
If so, what are they? Are they still used anywhere? I'd be especially interested in knowing about examples that can be run/experimented with on a standard desktop computer.
Examples are Persistent Haskell, Squeak Smalltalk, and KeyKOS and its descendants.
It seems like there isn't anything inherent in an operating system
that would necessarily require that sort of abstraction/metaphor.
There isn't any necessity, it's completely bogus. In fact, forcing everything to be accessible via a human readable name is fundamentally flawed, and precludes security due to Zooko's triangle.
Similar hierarchies also appear in DNS, URLs, programming language module systems (Python and Java are two good examples), torrents, and X.509 PKI.
One system that fixes some of the problems caused by DNS/URLs/X.509 PKI is Waterken's YURL.
All these systems exhibit ridiculous problems because the system is designed around some fancy hierarchy instead of for something that actually matters.
I've been planning on writing some blogs explaining why these types of systems are bad, I'll update with links to them when I get around to it.
I found this http://pages.stern.nyu.edu/~marriaga/papers/beyond-the-hfs.pdf but it's from 2003. Is something like that what you are looking for?
Around 1995, I started to design an object-oriented operating system (SOOOS) that has no file system. Almost everything is an object that exists in virtual memory, which is mapped/paged directly to the disk (either local or networked, i.e. rudimentary cloud computing).
There is a lot of overhead in programs to read and write data in specific formats. Imagine never reading and writing files. In SOOOS there are no such things as files and directories. Autonomous objects, which essentially replace files, can be organized to suit your needs, rather than forced into a restrictive hierarchical file system. There are no low-level drive format structures (i.e. clusters) that add a level of abstraction and translation overhead. SOOOS data storage overhead is limited to page tables, which can be quickly indexed as with basic virtual memory paging. Autonomous objects each have their own dynamic virtual memory space, which serves as the persistent data store. When active, they are given a task context, added to the active process task list, and then exist as processes.
A lot of complexity is eliminated in my design: simply instantiate objects in a program and let the memory manager and virtual memory system handle everything consistently, with minimal overhead.
Booting the operating system is simply a matter of loading the basic kernel, setting up the virtual memory page tables to the key OS objects, and (re)starting the OS object tasks. When the computer is turned off, shutdown is essentially analogous to hibernation, so the OS is nearly in instant-on status. The parts (pages) of data and code are loaded only as needed. For example, to edit a document, instead of starting a program by loading the entire executable into memory, simply load the task control structure of the autonomous object and set the instruction pointer to the function to be performed. The code is paged in only as the instruction pointer traverses its virtual memory. Data is always immediately ready to be used and is paged in only as accessed, with no need to parse files and manage data structures, which often have a distinct representation in memory versus secondary storage. Simply use the program's native memory allocation mechanism and abstract data types, without disparate and/or redundant data structures.
Object Linking and Embedding style program interaction, memory-mapped I/O, and interprocess communication come practically for free, since one would implement memory sharing using the facilities of the processor's Memory Management Unit.
