For a university assignment, we have to modify the ext2 file system to store a file's data in the inode's block pointers when the file is smaller than 60 bytes, and move to regular block storage once the file grows larger than that.
This might admittedly be a silly question, but could anyone with experience working in ext2fs tell me whether the inode structure itself would have to be modified to accomplish this task?
And would modification of the inode, if it were required, impede the general running of the ext2 system?
For a better understanding of any filesystem in Linux I recommend 'Linux Kernel Development', 3rd edition, by Robert Love (the Virtual Filesystem part).
After that you can read the documentation about the ext2 filesystem.
Then start reading e2fsprogs; these are the tools for creating ext filesystems. If you want to modify the filesystem structure, you need to be able to build the modified filesystem first. Finally, read the actual source code of the kernel's ext2 driver.
Keep in mind that there is no short way to do this; you should understand the Linux VFS completely.
And one other thing: when reading the source files, keep in mind that the most important parts of the code are the data structures, which act like objects.
Use source-code reading tools such as cscope.
And yes! Modification of the inode structure CAN cause many sorts of problems.
Good luck :)
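Not an authoritative answer, but for context on where the 60 bytes comes from: the on-disk ext2 inode's i_block array holds fifteen 4-byte block pointers, i.e. exactly 60 bytes. One common reading of such an assignment is that an inline-data scheme reuses that area (plus a spare bit in i_flags) rather than changing the struct layout, but verify that against your course material. A trimmed, illustrative sketch of the struct (field names follow the kernel's ext2 headers, types simplified):

```c
/* Trimmed sketch of the on-disk ext2 inode; see fs/ext2/ext2.h (or the older
 * include/linux/ext2_fs.h) in the kernel tree for the real definition.
 * Types are simplified here (the kernel uses __le16/__le32 and unions). */
#include <stdint.h>

#define EXT2_N_BLOCKS 15   /* 12 direct + 1 indirect + 1 double + 1 triple indirect */

struct ext2_inode_sketch {
    uint16_t i_mode;                   /* file type and permissions */
    uint16_t i_uid;
    uint32_t i_size;
    uint32_t i_atime, i_ctime, i_mtime, i_dtime;
    uint16_t i_gid;
    uint16_t i_links_count;
    uint32_t i_blocks;                 /* number of 512-byte sectors in use */
    uint32_t i_flags;                  /* a spare flag bit could mark "data stored inline" */
    uint32_t i_osd1;
    uint32_t i_block[EXT2_N_BLOCKS];   /* 15 * 4 = 60 bytes: block pointers, or inline file data */
    /* ... i_generation, ACL fields, osd2 follow in the real struct ... */
};
```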
I am writing an embedded system where I am creating a USB mass storage device driver that uses an 8 MB chunk of RAM as the FAT filesystem.
Although it made sense at the time to let the OS take my zeroed-out RAM area and format a FAT partition itself, I ran into problems that I am diagnosing on the PORTS forum run by Jan Axelson (a popular author of books on USB). The problem is related to erasing blocks in the drive, and may be circumvented by pre-formatting the memory area to a FAT filesystem before USB enumeration. Then I could see whether the drive operates normally. Ideally, the pre-formatted partition would include a file on it to test the read operation.
It would be nice if I could somehow create and mount a mock 8 MB FAT filesystem on my OS (OS X), write a file to it, and export it to an image file for inclusion in my project. Does someone know how to do this? I could handle the rest. I'm not too concerned whether that would be FAT12/16/32 at the moment; optional MBR inclusion would be nice.
If that option doesn't exist, I'm looking to use a pre-written utility to create a FAT image file that I could include in my project and upload directly to RAM. This utility would allow me to specify an 8 MB filesystem with 512-byte sectors, for instance, and possibly FAT12 / FAT16 / FAT32.
Is anyone aware of such a utility? I wasn't able to find one.
If not, can someone recommend a first step to take in implementing this in C? I'm hoping a library exists. I'm pretty exhausted after implementing the mass storage driver from scratch, but I understand I might have to 'get crinkled' and manually create the FAT partition. It's not too hard. I imagine some packed structs and some options. I'll get there. I already have resources on FAT filesystem itself.
I ended up discovering that FatFs has facilities for formatting and partitioning the "drive" from within the embedded device, which relieved me of having to format it manually or use host-side tools.
I would like to cover in more detail the steps taken, but I am exhausted. I may edit in further details at a later time.
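In case it helps someone hitting the same problem, here is roughly the kind of thing FatFs lets you do on the device side. This is a sketch only: the f_mkfs signature and the MKFS_PARM / FF_MAX_SS names vary between FatFs releases, and the diskio layer backing the 8 MB RAM area is assumed to exist already.

```c
/* Sketch: format the RAM region as FAT from inside the firmware, assuming FatFs
 * (ff.h) is in the build and diskio.c is backed by the 8 MB RAM area.
 * MKFS_PARM and FF_MAX_SS are from newer FatFs releases; older ones pass
 * flags and allocation-unit size directly to f_mkfs. */
#include <string.h>
#include "ff.h"

static FATFS fs;
static BYTE work[FF_MAX_SS];     /* work area required by f_mkfs */

int format_and_seed(void)
{
    MKFS_PARM opt = {
        .fmt     = FM_FAT | FM_SFD,  /* FAT12/16 picked automatically, no partition table */
        .au_size = 0                 /* let FatFs choose the cluster size */
    };

    if (f_mkfs("", &opt, work, sizeof work) != FR_OK) return -1;
    if (f_mount(&fs, "", 1) != FR_OK) return -1;

    /* Drop a small file on the fresh volume so the host has something to read. */
    FIL fp;
    UINT written;
    const char msg[] = "hello from the device\r\n";
    if (f_open(&fp, "README.TXT", FA_WRITE | FA_CREATE_ALWAYS) != FR_OK) return -1;
    f_write(&fp, msg, strlen(msg), &written);
    f_close(&fp);
    return 0;
}
```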
There are several; they're normally hidden in the OS source.
On BSD (i.e. OS X) you should have a "mkdosfs" tool; if not, the source is available all over the place. Here's a random example:
http://www.blancco.com/downloads/source/utils/mkdosfs/mkdosfs.c
Also there's the 'mtools' package; it's normally used for floppies, but I think it does disk images too.
Neither of these will create partition tables though; you'd need something else if that's required too.
I'm working on a personal project to regularly (monthly-ish) traverse my hard disk and shred (overwrite with zeros) any blocks on the disk not currently allocated to any inode(s).
C seemed like the most logical language to do this in given the low-level nature of the project, but I am not sure how best to find the unused blocks in the filesystem. I've found some questions around S.O. and other places that are similar to this, but did not see any consensus on the best way to efficiently and effectively find these unused blocks.
df has come up in every question even remotely similar to this, but I don't believe it has the resolution necessary to give exact block offsets, unless I am missing something. Is there another utility I should look into, or some other direction entirely?
Whatever solution I develop would need to be able to handle, at minimum, ext3 filesystems, and preferably ext4 also.
You don't really have any general solution for finding out which blocks are in use other than writing your own implementation to read and parse the on-disk filesystem data, which is highly specific to the filesystems you want to support. How the data is laid out on disk is often undocumented outside of the code for that filesystem, and when it is documented, the documentation is often out of date compared to the actual implementation.
Your best bet is to read the implementation of fsck for the filesystem you want to support since it does more or less what you're interested in, but be warned that many fsck implementations out there don't always check all of the data that belongs to the filesystem. You might have alternate superblocks and certain metadata that fsck doesn't check (or only checks in case the primary superblock is corrupted).
If you really want to do what you say you want to do, and not just learn about filesystems, your best bet is to dump your filesystem like a normal backup, wipe the disk, and restore the backup. I highly doubt anything else is safe to do, especially considering that your disk-wiping application might break your filesystem after any kernel update.
Linux currently supports over a dozen different filesystems, so the answer will depend on which one you choose.
However, they should all have an easy way of finding free blocks otherwise creating new files or extending current files would be a bit slow.
For example, ext2 keeps, within each block group, a block bitmap that records which blocks in that group are free. I don't believe this has fundamentally changed in ext4, even though there's a lot of extra stuff in there.
You would probably be far better off traversing the free blocks recorded in those bitmaps rather than taking an arbitrary block and trying to figure out whether it's used or free.
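To make that concrete, here's a sketch of walking the block bitmap with libext2fs from e2fsprogs. Treat the helper names (ext2fs_read_block_bitmap, ext2fs_test_block_bitmap2, ext2fs_blocks_count) as things to double-check against your installed ext2fs.h, and only run it against an unmounted filesystem or an image copy.

```c
/* Sketch: count (or later, shred) the free blocks of an ext2/ext3/ext4
 * filesystem via libext2fs. Build (assumption): cc scan.c -lext2fs -lcom_err */
#include <stdio.h>
#include <ext2fs/ext2fs.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <device-or-image>\n", argv[0]);
        return 1;
    }

    ext2_filsys fs;
    if (ext2fs_open(argv[1], 0, 0, 0, unix_io_manager, &fs)) {
        fprintf(stderr, "failed to open %s\n", argv[1]);
        return 1;
    }

    if (ext2fs_read_block_bitmap(fs)) {        /* load the per-group block bitmaps */
        fprintf(stderr, "failed to read block bitmap\n");
        ext2fs_close(fs);
        return 1;
    }

    blk64_t first = fs->super->s_first_data_block;
    blk64_t last  = ext2fs_blocks_count(fs->super);
    unsigned long long free_blocks = 0;

    for (blk64_t blk = first; blk < last; blk++) {
        if (!ext2fs_test_block_bitmap2(fs->block_map, blk)) {
            /* blk is not allocated to anything: a candidate for overwriting */
            free_blocks++;
        }
    }

    printf("%llu free blocks of %llu total\n", free_blocks, (unsigned long long)last);
    ext2fs_close(fs);
    return 0;
}
```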
I want to get information about the battery in C on Linux. I don't want to read or parse any file! Is there any low-level interface to ACPI / the kernel or any other module to get the information I want?
I already searched the web, but every question results in the answer "parse /proc/foo/bar". I really don't want to do this because I think low-level interfaces won't change as fast as files do.
Best regards.
The /proc filesystem does not exist on a disk. Instead, the kernel creates it in memory; its files are generated on demand when accessed. As such, your concerns are unfounded: the /proc files will change as quickly as the kernel becomes aware of the changes.
Check this for more info about the /proc file system.
In any case, I don't believe there's any alternative interface.
You might be looking for UPower: http://upower.freedesktop.org/
This is a common need for both desktop environments and mobile devices, so there have been many solutions over time. For example, one of the oldest ones was acpid, which is pretty much obsolete now.
While I'd recommend using a lightweight abstraction like UPower for code-clarity reasons, the files in /proc and (to some extent) /sys are considered part of the Linux kernel ABI, which means that changing them is generally frowned upon.
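If UPower fits your needs, a minimal sketch with libupower-glib could look like the following; the property names, enum value, and pkg-config module name are assumptions to verify against the UPower headers on your system.

```c
/* Sketch: query battery state through libupower-glib instead of parsing /proc or /sys.
 * Build (assumption): cc battery.c $(pkg-config --cflags --libs upower-glib) */
#include <stdio.h>
#include <upower.h>

int main(void)
{
    UpClient *client = up_client_new();
    if (!client)
        return 1;

    /* Newer UPower releases also provide up_client_get_devices2(). */
    GPtrArray *devices = up_client_get_devices(client);
    for (guint i = 0; i < devices->len; i++) {
        UpDevice *dev = g_ptr_array_index(devices, i);
        guint kind;
        gdouble percentage;
        g_object_get(dev, "kind", &kind, "percentage", &percentage, NULL);
        if (kind == UP_DEVICE_KIND_BATTERY)
            printf("battery at %.0f%%\n", percentage);
    }

    g_ptr_array_unref(devices);
    g_object_unref(client);
    return 0;
}
```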
Can anybody point me to a simple (can't stress this enough) implementation of an in-memory file system? If I can create a file and do a simple cat file.txt, it's more than enough.
I would like to use it as part of my toy OS.
The OSDev wiki site might help you. You can also ask your questions there; if you look at the wiki and their forum you will quite likely find your answer.
In my opinion, in-memory file systems should be as basic as possible. Here is a virtual file system implemented in user mode on Windows, but its design principles can be used in your own OS: http://www.flipcode.com/archives/Programming_a_Virtual_File_System-Part_I.shtml . Even this might be too much for your basic OS. I say just wing it and create a linked list of file descriptors, each holding only the file's attributes, name, and path.
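As a rough illustration of that "just wing it" approach, here's a sketch of a linked list of in-memory files with a create and a cat operation; every name and size in it is made up for the example.

```c
/* Minimal sketch of the "linked list of files" idea: each node carries a name
 * and an in-memory data buffer, which is enough to implement create + cat. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct memfile {
    char name[32];
    char *data;
    size_t size;
    struct memfile *next;
};

static struct memfile *root = NULL;   /* head of the file list */

static struct memfile *mem_create(const char *name, const char *contents)
{
    struct memfile *f = calloc(1, sizeof *f);
    strncpy(f->name, name, sizeof f->name - 1);
    f->size = strlen(contents);
    f->data = malloc(f->size);
    memcpy(f->data, contents, f->size);
    f->next = root;
    root = f;
    return f;
}

static struct memfile *mem_lookup(const char *name)
{
    for (struct memfile *f = root; f; f = f->next)
        if (strcmp(f->name, name) == 0)
            return f;
    return NULL;
}

/* "cat": dump a file's contents to stdout */
static void mem_cat(const char *name)
{
    struct memfile *f = mem_lookup(name);
    if (f)
        fwrite(f->data, 1, f->size, stdout);
}

int main(void)
{
    mem_create("file.txt", "hello from the in-memory filesystem\n");
    mem_cat("file.txt");
    return 0;
}
```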
This belongs on Super User IMHO, but anyway, you might want to look at ImDisk. It should be more than enough for just creating a RAM disk.
Wait, I misread... this is for your "toy OS"? Your toy OS supports file systems? You're going to have to implement it yourself, since there's no way something preexisting will work with your home-made OS.
I am wondering how the OS is reading/writing to the hard drive.
I would like as an exercise to implement a simple filesystem with no directories that can read and write files.
Where do I start?
Will C/C++ do the trick or do I have to go with a more low level approach?
Is it too much for one person to handle?
Take a look at FUSE: http://fuse.sourceforge.net/
This will allow you to write a filesystem without having to actually write a device driver. From there, I'd start with a single file. Basically create a file that's (for example) 100MB in length, then write your routines to read and write from that file.
Once you're happy with the results, then you can look into writing a device driver, and making your driver run against a physical disk.
The nice thing is you can use almost any language with FUSE, not just C/C++.
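To show how little FUSE asks of you before you get to the interesting parts, here's a sketch of a read-only filesystem exposing a single file, written against the libfuse 2.x API (FUSE_USE_VERSION 26). The file name and contents are invented for the example, and newer libfuse 3.x changes some callback signatures.

```c
/* Sketch: a read-only FUSE filesystem with a single file, against libfuse 2.x.
 * Build (assumption): cc hellofs.c $(pkg-config --cflags --libs fuse) -o hellofs
 * Run:   ./hellofs /mnt/point    then: cat /mnt/point/hello.txt */
#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <string.h>
#include <errno.h>
#include <fcntl.h>
#include <sys/stat.h>

static const char *hello_path = "/hello.txt";
static const char *hello_data = "hello from a toy FUSE filesystem\n";

static int hello_getattr(const char *path, struct stat *st)
{
    memset(st, 0, sizeof *st);
    if (strcmp(path, "/") == 0) {
        st->st_mode = S_IFDIR | 0755;
        st->st_nlink = 2;
    } else if (strcmp(path, hello_path) == 0) {
        st->st_mode = S_IFREG | 0444;
        st->st_nlink = 1;
        st->st_size = strlen(hello_data);
    } else {
        return -ENOENT;
    }
    return 0;
}

static int hello_readdir(const char *path, void *buf, fuse_fill_dir_t filler,
                         off_t offset, struct fuse_file_info *fi)
{
    (void)offset; (void)fi;
    if (strcmp(path, "/") != 0)
        return -ENOENT;
    filler(buf, ".", NULL, 0);
    filler(buf, "..", NULL, 0);
    filler(buf, hello_path + 1, NULL, 0);   /* strip the leading '/' */
    return 0;
}

static int hello_open(const char *path, struct fuse_file_info *fi)
{
    if (strcmp(path, hello_path) != 0)
        return -ENOENT;
    if ((fi->flags & O_ACCMODE) != O_RDONLY)
        return -EACCES;
    return 0;
}

static int hello_read(const char *path, char *buf, size_t size, off_t offset,
                      struct fuse_file_info *fi)
{
    (void)fi;
    if (strcmp(path, hello_path) != 0)
        return -ENOENT;
    size_t len = strlen(hello_data);
    if ((size_t)offset >= len)
        return 0;
    if (offset + size > len)
        size = len - offset;
    memcpy(buf, hello_data + offset, size);
    return (int)size;
}

static struct fuse_operations hello_ops = {
    .getattr = hello_getattr,
    .readdir = hello_readdir,
    .open    = hello_open,
    .read    = hello_read,
};

int main(int argc, char *argv[])
{
    return fuse_main(argc, argv, &hello_ops, NULL);
}
```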
I found it quite easy to understand a simple filesystem by using the FAT filesystem on an AVR microcontroller.
http://elm-chan.org/fsw/ff/00index_e.html
Take a look at the code and you will figure out how FAT works.
For learning the ideas of a file system it's not really necessary to use a disk, I think. Just create an array of 512-byte byte-arrays. Imagine this as your hard disk and start to experiment a bit.
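A sketch of that suggestion follows; the sector size and count are arbitrary, and the two helpers are the only interface your toy filesystem code would need to see.

```c
/* Sketch: a RAM "disk" as an array of 512-byte sectors.
 * Filesystem experiments talk to it only through read_sector/write_sector,
 * so the same code could later be pointed at a real block device. */
#include <stdint.h>
#include <string.h>

#define SECTOR_SIZE   512
#define SECTOR_COUNT  2048              /* 1 MiB "disk", arbitrary for the example */

static uint8_t disk[SECTOR_COUNT][SECTOR_SIZE];

int read_sector(uint32_t lba, uint8_t *buf)
{
    if (lba >= SECTOR_COUNT)
        return -1;
    memcpy(buf, disk[lba], SECTOR_SIZE);
    return 0;
}

int write_sector(uint32_t lba, const uint8_t *buf)
{
    if (lba >= SECTOR_COUNT)
        return -1;
    memcpy(disk[lba], buf, SECTOR_SIZE);
    return 0;
}
```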
Also, you may want to have a look at some of the standard OS textbooks, like http://codex.cs.yale.edu/avi/os-book/OS8/os8c/index.html
The answer to your first question is that besides FUSE, as someone else told you, you can also use Dokan, which does the same for Windows; from there it is just a question of doing reads and writes to a physical partition (http://msdn.microsoft.com/en-us/library/aa363858%28v=vs.85%29.aspx; read particularly the section on Physical Disks and Volumes).
Of course, on Linux or Unix, besides using something like FUSE, you only have to issue a read or write call to the device you want in /dev/xxx (if you are root); in these terms the Unices are more friendly, or more insecure, depending on your point of view.
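To make the Unix side of that concrete, here's a sketch that reads the first sector of a block device with plain open(2)/read(2); the /dev/sdb default is only a placeholder, and it deliberately opens the device read-only.

```c
/* Sketch: read the first 512-byte sector of a raw block device on Linux/Unix.
 * The device path is a placeholder; run as root, and keep it read-only
 * unless you really mean to write to the disk. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *dev = (argc > 1) ? argv[1] : "/dev/sdb";   /* placeholder device */
    unsigned char sector[512];

    int fd = open(dev, O_RDONLY);
    if (fd < 0) { perror(dev); return 1; }

    ssize_t n = read(fd, sector, sizeof sector);
    if (n != (ssize_t)sizeof sector) { perror("read"); close(fd); return 1; }

    /* A classic MBR/VBR ends with the 0x55 0xAA signature. */
    printf("last two bytes: %02x %02x\n", sector[510], sector[511]);

    close(fd);
    return 0;
}
```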
From there, try to implement a simple filesystem like FAT, or something more exotic like a tar filesystem, or even a simple filesystem based on Unix concepts like UFS or Minix, or just something that only logs the calls that are made and their arguments to a log file (this will help you understand the calls that are made to the filesystem driver during regular use of your computer).
Now for your second question (which is much simpler to answer): yes, C/C++ will do the trick, since they are the lingua franca of systems development; also, a lot of your example code will be in C/C++, so you will at least be reading C/C++ during your development.
Now for your third question: yes, this is doable by one person. For example, the original ext filesystem (widely known in the Linux world through its successors ext2 and ext3) was written largely by a single developer, Rémy Card, so don't think that these things can't be done by one person.
Now the final notes. Remember that a real filesystem interacts with a lot of other subsystems in a regular kernel. For example, if you have a laptop and hibernate it, the filesystem has to flush all changes made to open files. If you have a pagefile on the partition, or even if the pagefile has its own filesystem, that will affect your filesystem, particularly the block sizes, since they will tend to be equal to the page size or a power-of-two fraction of it, because it's easy to place a filesystem block in memory when it happens to match the page size (that's just one transfer).
And also security: you will want to control the users and what files they read/write, which usually means that before opening a file you have to know which user is logged on and what permissions they have for that file. And obviously, without a filesystem, users can't run any program or interact with the machine. Modern filesystem layers also interact with the network subsystem, since there are network and distributed filesystems.
So if you want to go and learn about writing kernel filesystems, those are some of the things you will have to worry about (besides knowing a VFS interface).
P.S.: If you want to make Unix permissions work on Windows, you can use something like what MS uses for NFS on the server versions of windows (http://support.microsoft.com/kb/262965)