Where filesystems store their file metadata

So I know that a file is composed of its data and also metadata, which is information about it (usually the name, the file type, dates of creation and modification, etc.).
My question is where exactly that information is stored. I know it can be included inside the file, in the directory, or in a database, but for the Windows, Linux and macOS file systems I can't seem to find this information...

Most of this information, in the case of Windows and Mac, is proprietary.
For Windows I can say for certain that a close-enough version of the NTFS file system driver has been written for Linux. You can look into that; there is also some documentation, most of it written by Richard 'Flatcap' Russon (http://www.flatcap.org/ntfs/).
Documentation on the FAT filesystem was made public a long time ago, with the intent of providing ample information for developers and engineers working on flash drives and the like. (http://msdn.microsoft.com/en-us/library/windows/hardware/gg463080.aspx)
Documentation on the ext filesystems used by Linux distributions can easily be found on the web. (Ext2: http://www.nongnu.org/ext2-doc/ext2.html)
As for the Mac, it uses Apple's own HFS+, which is derived from the older HFS (not from ext); Apple has published its on-disk format in Technical Note TN1150.
All these formats have some sort of structure that holds metadata. The file itself is just a stream of bytes somewhere on the physical drive. Most filesystems have a structure that stores at least the file's location (usually a starting cluster for each fragment of the file) and the file's size. The rest of the metadata is up to each file system to implement.
For example, in the FAT filesystem there is a table for each directory, and each directory stores metadata about the files it contains. The filesystem also has the File Allocation Table (FAT) itself, which holds the cluster chain for each file contained in the filesystem.
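For a concrete picture, here is a sketch of the 32-byte short-name directory entry described in the FAT specification linked above (the struct and field names are my own):

    /* Sketch of the 32-byte FAT short-name directory entry, per the
     * public FAT specification; every piece of per-file metadata FAT
     * knows lives in this record inside the directory itself. */
    #include <stdint.h>

    #pragma pack(push, 1)
    typedef struct {
        uint8_t  name[11];        /* 8.3 file name, space padded           */
        uint8_t  attr;            /* read-only/hidden/system/dir/archive   */
        uint8_t  nt_reserved;     /* reserved for Windows NT               */
        uint8_t  create_time_ms;  /* creation time, 10 ms units            */
        uint16_t create_time;     /* creation time                         */
        uint16_t create_date;     /* creation date                         */
        uint16_t access_date;     /* last access date                      */
        uint16_t cluster_hi;      /* high 16 bits of first cluster (FAT32) */
        uint16_t write_time;      /* last modification time                */
        uint16_t write_date;      /* last modification date                */
        uint16_t cluster_lo;      /* low 16 bits of first cluster          */
        uint32_t size;            /* file size in bytes                    */
    } FatDirEntry;                /* exactly 32 bytes                      */
    #pragma pack(pop)

Note there is no room here for anything beyond 8.3 names and timestamps; long file names are spread across extra directory entries.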
The NTFS filesystem has a big table called the Master File Table, which holds a record of metadata for each file contained in the filesystem, including the table itself. Each record holds all of the metadata, including the location on the physical drive of each of the file's fragments.
The directory structure, however, is held as data in directory file records. NTFS also has further structures that hold information about files, such as the USN Journal and the Volume Bitmap.
To access the metadata contained in the file system, you either have to parse the raw volume or use functions exposed by the operating system's API. The API generally doesn't give you everything you might want: for example, the Windows API gives you functions to iterate through the USN Journal to find information about a particular file, but you can't get the MFT attributes of a file directly.
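As a small illustration of the API route, here is a minimal sketch using stat(2), the portable POSIX call on Linux and macOS, to ask the OS for the metadata the filesystem keeps about a file:

    /* Minimal sketch: asking the OS API (stat on Linux/macOS) for the
     * metadata the filesystem keeps about a file. */
    #include <stdio.h>
    #include <sys/stat.h>
    #include <time.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }

        struct stat st;
        if (stat(argv[1], &st) != 0) { perror("stat"); return 1; }

        printf("size:     %lld bytes\n", (long long)st.st_size);
        printf("inode:    %llu\n", (unsigned long long)st.st_ino);
        printf("mode:     %o\n", st.st_mode & 07777);
        printf("modified: %s", ctime(&st.st_mtime));  /* last modification */
        return 0;
    }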
Again, I have to stress that even with most of the documentation on these proprietary file systems available, you're taking shots in the dark, since it's their intellectual property. Some, if not most, of the documentation we have now comes from reverse engineering.

It depends on the filesystem. Take a look at http://lxr.free-electrons.com/source/fs/fat/fat.h for instance.

The tables in Wikipedia's comparison of file systems (https://en.wikipedia.org/wiki/Comparison_of_file_systems#Metadata) show details on which file systems store which information as metadata.

Related

Scan a USB drive for folders which have MP3 files in them using ELM ChaN FatFs?

I am trying to scan a USB MSC device on an STM32 for audio files. These MP3 files are scattered in many folders which are unknown to the application.
First I scan for directories in the root folder to find the folders, then I scan those folders for MP3 files.
This is very time-consuming for a depth of 8 folders with many files in each folder.
Is there any way to scan just for the folders which have MP3 files in them, using a better approach?
Directory structure for testing is something like this:
It is not clear what your problem might be, since you have not provided any code showing how you are scanning, or quantitative information on the file/folder structure ("many files" is rather vague), or even specified the media type used.
However, a solution that might overcome all the variables of filesystem performance, hardware I/O, driver implementation and media type, and make access more deterministic regardless, is to maintain a separate index file or database in a single file in the root directory that maps each MP3 file to its path. Then you need only search the index/database for the MP3 you want, or use it to list all MP3s directly without scanning the file system.
If you keep that file sorted (or maintain a separate sorted index file), you can use a binary search to find a specific file. Or simply use a real database, though that might be a rather heavyweight solution for this purpose. You might even load the metadata into memory for even faster access, and write it back to the filesystem only when it changes.
Either way, the solution I suggest is to isolate your application from the variability of the filesystem/media, and from the lack of scalability of FAT in general, by maintaining your own "metadata" file(s) indicating what is stored and where. That lets you access the files directly without scanning the file system with findfirst/findnext semantics and recursion, which is best avoided but is the obvious way to walk a directory tree; a sketch of the approach follows below.
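Here is a rough sketch of how that index might be built with the FatFs API. The index file name is my own invention, the extension match is naive, and f_puts/f_gets require the string functions to be enabled in ffconf.h:

    /* Sketch: build a flat index file in the root directory mapping every
     * MP3 on the volume, so later runs read the index instead of
     * rescanning the directory tree. */
    #include <stdio.h>
    #include <string.h>
    #include "ff.h"

    /* Append every *.mp3 under 'path' to the open index file, one path
     * per line. 'path' is a shared buffer that is extended and restored
     * as we recurse (the same pattern as ChaN's scan_files example). */
    static FRESULT index_mp3s(char *path, FIL *index)
    {
        DIR dir;
        FILINFO fno;
        FRESULT res = f_opendir(&dir, path);
        if (res != FR_OK) return res;

        for (;;) {
            res = f_readdir(&dir, &fno);
            if (res != FR_OK || fno.fname[0] == 0) break;   /* error or end */

            size_t len = strlen(path);
            sprintf(&path[len], "/%s", fno.fname);          /* extend path  */

            if (fno.fattrib & AM_DIR) {
                res = index_mp3s(path, index);              /* descend      */
            } else if (strstr(fno.fname, ".mp3") || strstr(fno.fname, ".MP3")) {
                f_puts(path, index);                        /* one per line */
                f_puts("\n", index);
            }
            path[len] = 0;                                  /* restore path */
            if (res != FR_OK) break;
        }
        f_closedir(&dir);
        return res;
    }

    /* Rebuild the index once, e.g. when the volume changes. */
    FRESULT rebuild_index(void)
    {
        FIL index;
        char path[256] = "";    /* "" means the root directory in FatFs */
        FRESULT res = f_open(&index, "mp3index.txt",
                             FA_CREATE_ALWAYS | FA_WRITE);
        if (res != FR_OK) return res;
        res = index_mp3s(path, &index);
        f_close(&index);
        return res;
    }

On subsequent runs the application just opens mp3index.txt and reads it line by line with f_gets(), never touching the directory tree unless the index needs rebuilding.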
Incidentally, this is precisely how iTunes works, for example. The "iTunes Library.xml" file contains metadata about "songs", including their location. Clearly you need not have anything quite that detailed, but the principle is the same, and there may be merit in using XML or JSON for your application, given a suitable library for updating and accessing such a file.
By doing that, performance is more directly within your control rather than dependent on the filesystem, media and/or device driver level. You still have some control over, and responsibility for, the media and its interface (SPI, SDIO, USB or whatever) and the device I/O layer (DMA, interrupts, polled, bit-banged); and while you may have little control over the choice of FAT and the ELM FatFs implementation, you can certainly affect its performance greatly at the device driver, hardware interface and physical media level.

Can I use file descriptors to create a custom filesystem?

In a course I am enrolled in, I was tasked with creating a filesystem with some custom features. I simply created an image of zeroes using dd, and built my filesystem in it by creating the superblock, inodes, stat files, etc. It can read and write files, and import and export files and directories, with a proper directory hierarchy.
Now I want to make this work with an actual physical partition. I have looked in many places and seen that a partition can be opened through a file descriptor and read like a plain file, but I want to know whether that relies on an existing filesystem in the partition. Can I bypass everything and simply get a block-wise read/write interface, with the ability to seek to bytes or blocks? What would the overhead of that be?
Also, I want to turn it into a Linux module so that my filesystem can work with file managers. What is the standard API interface that I need to implement to make that happen?
Please guide me in the right direction.
You have a lot of options; integration into the kernel is relatively hard, whereas integrating with a user-space file system framework (libfuse on GitHub) is a good intermediate step. At the end, you will have a usable file system.
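On the raw-access part of the question: yes, on Linux a partition's device node can be opened like any other file, giving you exactly the block-wise read/write interface you describe, with no filesystem involved. A minimal sketch (the device path and block size are placeholders):

    /* Sketch: raw block access to a partition through a file descriptor,
     * bypassing any filesystem on it. Requires root; /dev/sdb1 and the
     * 4096-byte block size are placeholders. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    #define BLOCK_SIZE 4096

    int main(void)
    {
        int fd = open("/dev/sdb1", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        unsigned char block[BLOCK_SIZE];

        /* Read block #10: pread() seeks and reads in one call, so you
         * get byte-addressable access with no filesystem in the way. */
        if (pread(fd, block, sizeof block, (off_t)10 * BLOCK_SIZE) < 0) {
            perror("pread");
            close(fd);
            return 1;
        }

        /* ... interpret 'block' with your own superblock/inode layout ... */

        close(fd);
        return 0;
    }

The overhead is modest: one system call per block, with the kernel's page cache in between (O_DIRECT bypasses the cache if you need raw behavior).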

How are access times, modified times, encodings and filenames stored in files on NTFS and EXT3/4?

For academic and task-related purposes I need to know how file-related data is associated with files on NTFS and ext. How does the operating system know a file's name? How do editors know which encoding to treat the file contents as?
Are these details stored in a separate location on NTFS/ext, or are they included within the file itself?
On NTFS, such information is stored not in the file itself but in the Master File Table (MFT). (Encoding is the exception: filesystems generally do not record a file's text encoding at all; editors guess it heuristically or detect a byte-order mark.)
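If you only need to read those timestamps, the supported route is the Win32 API rather than parsing the MFT yourself; a minimal sketch (the file path is a placeholder):

    /* Sketch: reading the timestamps NTFS keeps in a file's MFT record
     * through the Win32 API. */
    #include <stdio.h>
    #include <windows.h>

    int main(void)
    {
        WIN32_FILE_ATTRIBUTE_DATA fad;
        /* "test.txt" is a placeholder path. */
        if (!GetFileAttributesExA("test.txt", GetFileExInfoStandard, &fad)) {
            fprintf(stderr, "error %lu\n", GetLastError());
            return 1;
        }

        SYSTEMTIME st;
        FileTimeToSystemTime(&fad.ftLastWriteTime, &st);  /* stored as UTC */
        printf("last write: %04u-%02u-%02u %02u:%02u\n",
               st.wYear, st.wMonth, st.wDay, st.wHour, st.wMinute);
        return 0;
    }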
You are asking many questions. I suggest you read up on the subject. Here is the short version, and here is everything in full detail.

Analyze VMDK (vmware virtual machine disk) files for changes

Is there a good way to analyze VMware delta VMDK files between snapshots to list changed blocks, so one can use a tool to tell which NTFS files are changed?
I do not know of a tool that does this out of the box, but it should not be too difficult to build one.
The VMDK file format specification is available and the format is not that complex. As far as I remember, a VMDK file consists of a series of 64 KB blocks, and at the beginning of the file there is a directory that records where each logical block is stored in the physical file.
It should be pretty easy to detect where a logical block is stored in both files and then compare the data in the two versions of the VMDK file, along the lines of the sketch below.
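As a starting point, here is a rough sketch of the comparison loop. It deliberately skips the directory parsing a real delta VMDK needs (see the specification) and just compares two flat, equal-layout images 64 KB at a time:

    /* Sketch: report which fixed-size blocks differ between two disk
     * images. A real delta VMDK needs its block directory parsed first
     * to map logical blocks to file offsets; this assumes two flat
     * images and the 64 KB block size mentioned above. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define BLOCK 65536

    int main(int argc, char **argv)
    {
        if (argc != 3) { fprintf(stderr, "usage: %s a.img b.img\n", argv[0]); return 1; }

        FILE *a = fopen(argv[1], "rb"), *b = fopen(argv[2], "rb");
        if (!a || !b) { perror("fopen"); return 1; }

        unsigned char *ba = malloc(BLOCK), *bb = malloc(BLOCK);
        for (long i = 0; ; i++) {
            size_t na = fread(ba, 1, BLOCK, a);
            size_t nb = fread(bb, 1, BLOCK, b);
            if (na == 0 && nb == 0) break;              /* both exhausted */
            if (na != nb || memcmp(ba, bb, na) != 0)
                printf("block %ld differs\n", i);       /* changed block  */
        }
        free(ba); free(bb);
        fclose(a); fclose(b);
        return 0;
    }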

Unknown database, how to access?

I am examining a Windows application that uses a database of unknown type. The database consists of several files with the extensions .i, .iz, .b1, .p and .bi. Is there an API that can be used to view the design, tables and contents of this database? The ambition is to migrate the data to a MySQL environment.
Use a hex editor to look inside the database files in binary mode. You may be able to identify the file type from the first few bytes (a magic number). Then change the extension appropriately and open it.
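For instance, a tiny sketch that dumps the first bytes of one of the files so you can look the magic number up (the file name is a placeholder):

    /* Sketch: dump the first bytes of an unknown file to look for a
     * magic number; "data.b1" is a placeholder name. */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("data.b1", "rb");
        if (!f) { perror("fopen"); return 1; }

        unsigned char buf[16];
        size_t n = fread(buf, 1, sizeof buf, f);
        for (size_t i = 0; i < n; i++) printf("%02x ", buf[i]);
        putchar('\n');
        fclose(f);
        return 0;
    }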
Perhaps the Unix file utility (available in Cygwin) can identify them.
From FileExt.com:
File Extension BI
File type: Binary File
Primary association: Binary File
Other applications associated with file type BI:
Progress (Database Before Image File) by Progress Software Corporation
Quick Basic or Visual Basic for DOS (Include File) by Microsoft Corporation. Similar to C's .H, but used only in Microsoft's DOS BASIC dialects. Stands for "Basic Include". This association is classified as Text.
Anyway...
Chances are that it's not a relational database system that this program uses; most ad-hoc, single-use databases developed for one program are what are called "flat-file databases", which means that "records" have a fixed size and are accessed by seeking through the file as you would a normal one. For instance, if the record size is 20, then the first record occupies byte range 0-19, the second 20-39, etc.
If you could somehow work out what record size this particular program uses, you could split the file into its component records as binary data. Decoding that data into meaningful information would probably be a hassle, though.
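To make the idea concrete, here is a hypothetical sketch of reading fixed-size records from such a flat file, using the 20-byte record size from the example above (the file name is a placeholder):

    /* Sketch: read record #n from a flat-file database with fixed-size
     * records; the 20-byte size and file name are assumptions. */
    #include <stdio.h>

    #define RECORD_SIZE 20

    int read_record(FILE *db, long n, unsigned char out[RECORD_SIZE])
    {
        /* Record n lives at byte range [n*20, n*20+19], as above. */
        if (fseek(db, n * (long)RECORD_SIZE, SEEK_SET) != 0) return -1;
        return fread(out, 1, RECORD_SIZE, db) == RECORD_SIZE ? 0 : -1;
    }

    int main(void)
    {
        FILE *db = fopen("data.b1", "rb");   /* placeholder file name */
        if (!db) { perror("fopen"); return 1; }

        unsigned char rec[RECORD_SIZE];
        if (read_record(db, 2, rec) == 0) {  /* third record: bytes 40-59 */
            for (int i = 0; i < RECORD_SIZE; i++) printf("%02x ", rec[i]);
            putchar('\n');
        }
        fclose(db);
        return 0;
    }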
