How to get a unique file identity across platforms and filesystems in Go?

I'm looking for a way to get a file's inode on Linux and a file identity on Windows, or any other unique file identity that is as cross-platform as possible and more efficient than hashing file contents. The platforms are Linux, Windows, and macOS.
The purpose of this is to track files as efficiently as possible, so the program can locate them after the user has renamed or moved them. If I have a file path plus a file ID and the file is no longer at that path, I need a way to find its new path on all mounted volumes (or fail if it has been deleted). Hashing all files is prohibitively expensive and creates a whole set of new error possibilities, since files are going to be updated and renamed regularly.
I'm hoping there is an existing library, but I haven't found anything cross-platform yet. If there is none, I'd be very grateful for help on how to use the syscalls in x/sys, and to hear about any limitations that inodes and file identities have.
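For reference, a minimal sketch of the two platform-specific lookups in question, assuming golang.org/x/sys/windows for the Windows side (package name and function names are illustrative, not an existing library):

```go
// fileid_unix.go — a minimal sketch: on Linux/macOS the (device,
// inode) pair identifies a file uniquely on one mounted volume.
//go:build linux || darwin

package fileid

import (
	"fmt"
	"os"
	"syscall"
)

// FileID returns the (device, inode) pair for path.
func FileID(path string) (dev, ino uint64, err error) {
	fi, err := os.Stat(path)
	if err != nil {
		return 0, 0, err
	}
	st, ok := fi.Sys().(*syscall.Stat_t)
	if !ok {
		return 0, 0, fmt.Errorf("unexpected Sys() type %T", fi.Sys())
	}
	return uint64(st.Dev), uint64(st.Ino), nil
}
```

```go
// fileid_windows.go — the Windows counterpart via x/sys/windows:
// (volume serial, file index) plays the role of (device, inode).
//go:build windows

package fileid

import "golang.org/x/sys/windows"

// FileID returns the (volume serial, file index) pair for path.
func FileID(path string) (dev, ino uint64, err error) {
	p, err := windows.UTF16PtrFromString(path)
	if err != nil {
		return 0, 0, err
	}
	// Access mode 0 opens for attribute access only;
	// FILE_FLAG_BACKUP_SEMANTICS also allows opening directories.
	h, err := windows.CreateFile(p, 0,
		windows.FILE_SHARE_READ|windows.FILE_SHARE_WRITE|windows.FILE_SHARE_DELETE,
		nil, windows.OPEN_EXISTING, windows.FILE_FLAG_BACKUP_SEMANTICS, 0)
	if err != nil {
		return 0, 0, err
	}
	defer windows.CloseHandle(h)

	var info windows.ByHandleFileInformation
	if err := windows.GetFileInformationByHandle(h, &info); err != nil {
		return 0, 0, err
	}
	return uint64(info.VolumeSerialNumber),
		uint64(info.FileIndexHigh)<<32 | uint64(info.FileIndexLow), nil
}
```

Comparing the returned pair against a recorded one tells you whether two paths refer to the same file. The usual caveats apply: an inode number is only unique per device and can be recycled after a file is deleted, and FAT volumes have no stable file IDs at all.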

Related

Scan a USB for folders which have MP3 files in them using ELM ChaN FatFs?

I am trying to scan a USB mass-storage device on an STM32 for audio files. These MP3 files are scattered across many folders which are unknown to the application.
First I scan the root folder for directories, then I scan each folder found for MP3 files.
This is very time-consuming for a depth of 8 folders with many files in each folder.
Is there a better approach to scan only for the folders which have MP3 files in them?
Directory structure for testing is something like this:
It is not clear what your problem might be, since you have not provided any code to show how you are scanning, any quantitative information on the file/folder structure ("many files" is rather vague), or even the media type used.
However, a solution that might overcome all the variables of filesystem performance, hardware I/O, driver implementation, and media type, and make access more deterministic regardless, is to maintain a separate index file or database in a single file in the root directory that maps each MP3 file to its path. You then need only search the index/database for the MP3 you want (or use it to list all MP3s directly, without scanning the filesystem).
If you keep that file sorted (or maintain a separate sorted index file), you can use a binary search to find a specific file (see the sketch at the end of this answer). Or simply use a real database, though that might be a rather heavyweight solution for this purpose. You might even load the metadata into memory for even faster access, and write it back to the filesystem only when it changes.
Either way, the solution I suggest is to isolate your application from the variability of the filesystem/media, and from the general lack of scalability of FAT, by maintaining your own "metadata" file(s) indicating what is stored and where. That lets you access the files directly, without scanning the filesystem with findfirst/findnext semantics and recursion (recursion is always best avoided, but is the obvious way to scan a directory tree).
Incidentally, this is precisely how iTunes works, for example. The "iTunes Library.xml" file contains metadata about "songs", including their location. Clearly you need not have anything quite that detailed, but the principle is the same, and there may be merit in using XML or JSON for your application, given a suitable library for updating and accessing such a file.
By doing that, performance is more directly within your control, rather than dependent on the filesystem, media, and/or device-driver level. You still have some control over, and responsibility for, the media and its interface (SPI, SDIO, USB, or whatever) and the device I/O layer (DMA, interrupts, polled, bit-banged); and while you may have little control over the choice of FAT and the ELM FatFs implementation, you can certainly impact its performance greatly at the device-driver, hardware-interface, and physical-media level.
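A minimal sketch of the index-file idea, in Go for brevity (the same structure translates readily to C on the target). The file name mp3index.txt and the one-"name<TAB>path"-line-per-file layout are illustrative assumptions, not part of FatFs:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"sort"
	"strings"
)

// loadIndex reads and sorts the "name\tpath" lines of the index file.
func loadIndex(indexPath string) ([]string, error) {
	f, err := os.Open(indexPath)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	var entries []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		entries = append(entries, sc.Text())
	}
	sort.Strings(entries) // keep sorted so lookups can binary-search
	return entries, sc.Err()
}

// find binary-searches the sorted entries and returns the full path
// recorded for name, or "" if the name is absent.
func find(entries []string, name string) string {
	i := sort.SearchStrings(entries, name)
	if i < len(entries) && strings.HasPrefix(entries[i], name+"\t") {
		return strings.TrimPrefix(entries[i], name+"\t")
	}
	return ""
}

func main() {
	entries, err := loadIndex("mp3index.txt") // hypothetical index file
	if err != nil {
		panic(err)
	}
	fmt.Println(find(entries, "track01.mp3")) // hypothetical track name
}
```

The point of the sorted layout is that a lookup costs O(log n) comparisons against one small file, instead of a recursive walk over the whole directory tree.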

Can I use file descriptors to create a custom filesystem?

In a course I am enrolled in, I was tasked with creating a filesystem with some custom features. I simply created an image of zeroes using dd, and built my filesystem by creating a superblock, inodes, stat files, etc. in it. It can read and write files, and import and export files and directories, with a proper directory hierarchy.
Now I want to make this work with an actual physical partition. I have looked in many places and seen that, through file descriptors, partitions can be read as plain files. But I want to know whether that relies on an existing filesystem in the partition. Can I bypass everything and simply get a block-wise read/write interface, with the ability to seek to bytes or blocks? What would the overhead of that be?
Also, I want to turn it into a Linux module so that my filesystem can work with file managers. What is the standard API that I need to implement to make that happen?
Please guide me in the right direction.
You have a lot of options. Integration into the kernel is relatively hard, whereas integrating with a user-space filesystem framework (libfuse on GitHub) is a good intermediate step. In the end, you will have a usable filesystem.
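As for the block-wise interface: on Linux you can open the partition's device node directly and treat it as a seekable byte stream, with no existing filesystem involved at all. A minimal sketch (the device path /dev/sdb1 is a placeholder, and this needs root; the main overhead is a syscall per I/O, which is why real filesystems batch reads through a buffer cache):

```go
package main

import (
	"fmt"
	"os"
)

const blockSize = 4096

func main() {
	// Open the raw partition; the kernel presents it as a plain,
	// seekable sequence of bytes.
	dev, err := os.OpenFile("/dev/sdb1", os.O_RDWR, 0) // placeholder device
	if err != nil {
		panic(err)
	}
	defer dev.Close()

	// Read block #10 directly: ReadAt addresses bytes, so block n
	// starts at offset n*blockSize.
	buf := make([]byte, blockSize)
	if _, err := dev.ReadAt(buf, 10*blockSize); err != nil {
		panic(err)
	}
	fmt.Printf("first bytes of block 10: % x\n", buf[:16])
}
```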

Where filesystems store their file metadata

So I know that a file is composed of its data and also metadata, which is information about it (usually the name, the type of the file, dates of creation and modification, etc.).
My question is where exactly that information is stored. I know it can be included inside the file, in the directory, or in a database, but for the Windows, Linux, and macOS filesystems I can't seem to find this information...
Most of this information, in the case of Windows and Mac, is proprietary.
For Windows I can say for certain that a close-enough version of the NTFS filesystem driver has been written for Linux. You can look into that; there is also some documentation, most of it written by Richard 'Flatcap' Russon (http://www.flatcap.org/ntfs/).
Documentation on the FAT filesystem was made public a long time ago, with the intent of providing ample information for developers and engineers working on flash drives and the like (http://msdn.microsoft.com/en-us/library/windows/hardware/gg463080.aspx).
Documentation on the ext filesystems used by Linux distributions can easily be found on the web (ext2: http://www.nongnu.org/ext2-doc/ext2.html).
I have no clue what Mac uses but I bet it's some kind of proprietary abomination derived from an existing format (probably ext). This is just my opinion, do not take this as fact.
All these formats have some sort of structure that holds metadata; the file itself is just a stream of bytes somewhere on the physical drive. Most filesystems have a structure that stores at least the file's location (usually a starting cluster for each fragment of the file) and the file's size. The rest of the metadata is up to each filesystem to implement.
In the FAT filesystem, for example, there is a table for each directory, and each directory table stores metadata about the files it contains. There is also the file allocation table (FAT) itself, which holds the fragment locations for each file in the filesystem.
The NTFS filesystem has a big table called the Master File Table (MFT) that holds a record of metadata for each file in the filesystem, including the table itself. Each record holds all of the metadata, including the location on the physical drive of each fragment of the file.
The directory structure is held as data in directory file records, and NTFS has still more structures that hold information about files, such as the USN Journal and the Volume Bitmap.
To access the metadata held by the filesystem, you either have to parse the raw volume or use functions exposed by the operating system's API. The API generally does not give you all the information you might want about the metadata. For example, the Windows API provides functions to iterate through the USN Journal to find information about a particular file, but you cannot read a file's MFT attributes directly.
Again, I have to stress that even with most of the documentation on these proprietary filesystems, you are taking shots in the dark, since it is their intellectual property. Some, if not most, of the documentation we have now comes from reverse engineering.
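To illustrate the API route, a minimal Go sketch: os.Stat returns the portable metadata every filesystem must track, while FileInfo.Sys() exposes the raw, platform-specific record underneath (the file name example.txt is a placeholder):

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	fi, err := os.Stat("example.txt") // placeholder file
	if err != nil {
		panic(err)
	}
	fmt.Println("name:   ", fi.Name())
	fmt.Println("size:   ", fi.Size())
	fmt.Println("mode:   ", fi.Mode())
	fmt.Println("modtime:", fi.ModTime())
	// fi.Sys() is *syscall.Stat_t on Linux (inode, device, owner, ...)
	// and *syscall.Win32FileAttributeData on Windows.
	fmt.Printf("raw:     %T\n", fi.Sys())
}
```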
It depends on the filesystem. Take a look at http://lxr.free-electrons.com/source/fs/fat/fat.h for instance.
The Wikipedia comparison of file systems (https://en.wikipedia.org/wiki/Comparison_of_file_systems#Metadata) shows details on which filesystems store which information in metadata.

Is there a performance drop when we open a file in a directory that has huge numbers of files?

Suppose we want to open a file in a directory that contains a huge number of files. When I ask the program to open a file there, how fast can it search for this particular file? Will there be a performance drop when looking for the requested file in this case?
P.S. This should also depend on the filesystem implementation, yes?
Yes, it depends a lot on the file system implementation.
Some file systems have specific optimizations for large directories. One example I can think of is ext3, which uses HTree indexing for large directories.
Generally speaking, there will usually be some delay in finding the file. Once the file is located and opened, however, reading it should not be slower than reading any other file.
Some programs that need to handle a large number of files (for caching, for example) spread them over a directory tree, to reduce the number of entries per directory; a sketch of that trick follows.
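A minimal illustration of that fan-out layout in Go, modeled on Git's object store (the root path and key are placeholders):

```go
package main

import (
	"crypto/sha1"
	"fmt"
	"path/filepath"
)

// shardedPath maps a cache key to root/ab/cdef..., using the first
// two hex digits of the key's hash as a subdirectory. This caps the
// fan-out at 256 subdirectories and spreads entries evenly, keeping
// every directory small enough to search quickly.
func shardedPath(root, key string) string {
	sum := fmt.Sprintf("%x", sha1.Sum([]byte(key)))
	return filepath.Join(root, sum[:2], sum[2:])
}

func main() {
	fmt.Println(shardedPath("/var/cache/app", "https://example.com/page"))
	// prints something like /var/cache/app/4a/3cd2...
}
```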

How to implement a file type based filesystem?

I essentially want to make it so that you never need to unzip/unrar any files. Currently, I have a Dokan filesystem which can do this for a specific zip file, but I want to know how I can make it apply to all files. Meaning, I want to be able to compile a program that calls fopen("test.zip/1.jpg", "rb"). I think that a shell extension would work for dynamically loading the file into the filesystem if I were browsing in the shell explorer, but that doesn't help me with the fopen example. Any ideas?
What you want to do can be done with the help of a file system filter driver, which would track directory enumeration requests and report directories in place of ZIP files. This driver would then create virtual files and take their data from the ZIP archives. Quite a lot of kernel-mode work, I should say. And a file system filter driver is not a file system driver, so Dokan won't help you here at all.
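For illustration, here is the path-resolution logic such a driver would perform, sketched in user space with Go's archive/zip: split the path at the .zip component and read the inner entry. The helper name openInZip is hypothetical; a filter driver would do the analogous work in the kernel so that a plain fopen() succeeds.

```go
package main

import (
	"archive/zip"
	"fmt"
	"io"
	"strings"
)

// openInZip resolves "archive.zip/inner/path"-style names and returns
// the contents of the inner entry.
func openInZip(path string) ([]byte, error) {
	i := strings.Index(path, ".zip/")
	if i < 0 {
		return nil, fmt.Errorf("no .zip component in %q", path)
	}
	archive, inner := path[:i+4], path[i+5:]

	r, err := zip.OpenReader(archive)
	if err != nil {
		return nil, err
	}
	defer r.Close()

	for _, f := range r.File {
		if f.Name == inner {
			rc, err := f.Open()
			if err != nil {
				return nil, err
			}
			defer rc.Close()
			return io.ReadAll(rc)
		}
	}
	return nil, fmt.Errorf("%s not found in %s", inner, archive)
}

func main() {
	data, err := openInZip("test.zip/1.jpg") // the example from the question
	if err != nil {
		panic(err)
	}
	fmt.Println("read", len(data), "bytes")
}
```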
